32 results for parabolic-elliptic equation, inverse problems, factorization method


Relevance:

30.00%

Publisher:

Abstract:

We present a novel hybrid (or multiphysics) algorithm, which couples pore-scale and Darcy descriptions of two-phase flow in porous media. The flow at the pore scale is described by the Navier–Stokes equations, and the Volume of Fluid (VOF) method is used to model the evolution of the fluid–fluid interface. An extension of the Multiscale Finite Volume (MsFV) method is employed to construct the Darcy-scale problem. First, a set of local interpolators for pressure and velocity is constructed by solving the Navier–Stokes equations; then, a coarse mass-conservation problem is constructed by averaging the pore-scale velocity over the cells of a coarse grid, which act as control volumes; finally, a conservative pore-scale velocity field is reconstructed and used to advect the fluid–fluid interface. The method relies on the localization assumptions used to compute the interpolators (which are quite straightforward extensions of the standard MsFV) and on the postulate that the coarse-scale fluxes are proportional to the coarse-pressure differences. By numerical simulations of two-phase problems, we demonstrate that these assumptions provide hybrid solutions that are in good agreement with reference pore-scale solutions and are able to model the transition from stable to unstable flow regimes. Our hybrid method can naturally take advantage of several adaptive strategies and allows considering pore-scale fluxes only in some regions, while Darcy fluxes are used in the rest of the domain. Moreover, since the method relies on the assumption that the relationship between coarse-scale fluxes and pressure differences is local, it can be used as a numerical tool to investigate the limits of validity of Darcy's law and to understand the link between pore-scale quantities and their corresponding Darcy-scale variables.
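As a rough illustration of the coarse-scale postulate (fluxes proportional to coarse-pressure differences), the coarse mass-conservation step reduces to a small linear system in the coarse pressures. The 1D layout, transmissibility array T, and source term q below are assumptions of this sketch, not the paper's implementation; in the hybrid method, the entries of T would be informed by the averaged pore-scale velocities.

```python
# Minimal sketch: coarse mass balance sum_j T_ij (p_i - p_j) = q_i on a
# 1D chain of coarse control volumes with Dirichlet boundary pressures.
import numpy as np

def solve_coarse_pressure(T, q, p_left, p_right):
    """T: interface transmissibilities, shape (n+1,) for n cells;
    q: source per cell, shape (n,); returns the coarse pressures."""
    n = len(q)
    A = np.zeros((n, n))
    b = np.asarray(q, dtype=float).copy()
    for i in range(n):
        A[i, i] = T[i] + T[i + 1]
        if i > 0:
            A[i, i - 1] = -T[i]
        if i < n - 1:
            A[i, i + 1] = -T[i + 1]
    b[0] += T[0] * p_left
    b[-1] += T[-1] * p_right
    return np.linalg.solve(A, b)

# Uniform transmissibilities recover a linear (Darcy-like) pressure drop.
p = solve_coarse_pressure(T=np.ones(6), q=np.zeros(5), p_left=1.0, p_right=0.0)
```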

Relevance:

30.00%

Publisher:

Abstract:

We introduce an algebraic operator framework to study discounted penalty functions in renewal risk models. For inter-arrival and claim size distributions with rational Laplace transform, the usual integral equation is transformed into a boundary value problem, which is solved by symbolic techniques. The factorization of the differential operator can be lifted to the level of boundary value problems, amounting to iteratively solving first-order problems. This leads to an explicit expression for the Gerber-Shiu function in terms of the penalty function.
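As a generic illustration of how such an operator factorization lifts to boundary value problems (a constant-coefficient toy case, not the paper's specific operator), a second-order problem splits into two first-order problems solved in sequence:

```latex
% Illustrative toy factorization; r_1, r_2 are distinct roots.
\[
  (D - r_1)(D - r_2)\, m(u) = g(u), \qquad D = \frac{d}{du},
\]
\[
  \text{first solve } (D - r_1)\, v = g, \quad \text{then } (D - r_2)\, m = v.
\]
% With a decay condition at infinity and \Re(r_1) > 0, the first step has
% the explicit solution
\[
  v(u) = -\int_u^{\infty} e^{\, r_1 (u - t)}\, g(t)\, dt .
\]
```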

Relevance:

30.00%

Publisher:

Abstract:

Prevention programs in adolescence are particularly effective if they target homogeneous risk groups of adolescents who share a combination of particular needs and problems. The present work aims to identify and classify risky single-occasion drinking (RSOD) adolescents according to their motivation to engage in drinking. An easy-to-use coding procedure was developed. It was validated by means of cluster analyses and structural equation modeling based on two randomly selected subsamples of a nationally representative sample of 2,449 12- to 18-year-old RSOD students in Switzerland. Results revealed that the coding procedure classified RSOD adolescents as either enhancement drinkers or coping drinkers. The high concordance (Sample A: kappa = .88, Sample B: kappa = .90) with the results of the cluster analyses demonstrated the convergent validity of the coding classification. The fact that enhancement drinkers in both subsamples were found to go out more frequently in the evenings, to have more satisfactory social relationships and a higher proportion of drinking peers, and to be less likely to drink at home than coping drinkers demonstrates the concurrent validity of the classification. To conclude, the coding procedure appears to be a valid, reliable, and easy-to-use tool that can help better adapt prevention activities to adolescent risky drinking motives.
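As a small illustration of the concordance measure reported above, Cohen's kappa between the coding-procedure labels and the cluster solution can be computed as follows; the two label arrays are invented toy data, not the study's.

```python
# Toy concordance check between two classifications (illustrative data).
from sklearn.metrics import cohen_kappa_score

coding  = ["enhancement", "coping", "enhancement", "coping", "enhancement"]
cluster = ["enhancement", "coping", "enhancement", "enhancement", "enhancement"]

kappa = cohen_kappa_score(coding, cluster)
print(f"kappa = {kappa:.2f}")  # the study reports kappas of .88 and .90
```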

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: The sensitivity and tolerance regarding ADHD symptoms clearly differ from one culture to another and according to the informant (parents, teachers, or children), which motivates the comparison of data across informants and countries. METHOD: Parents and teachers of more than 1,000 school-aged Swiss children (5 to 17 years old) filled in Conners' questionnaires on ADHD. Children older than 10 years also filled in a self-report questionnaire. Results were compared to data from a North American sample. RESULTS: Swiss parents and teachers tend to report more ADHD symptoms than American parents and teachers for the oldest age groups. Interactions are found between school achievement, child gender, and informants. A relatively low rate of agreement between informants is found. CONCLUSION: These results underscore the importance of taking all informants into account in pediatric and child psychiatry clinics, as well as in epidemiological studies.

Relevance:

30.00%

Publisher:

Abstract:

For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is about one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. The general motivation of my thesis is therefore to evaluate the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems.
One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem; accurate knowledge of the source wavelet is therefore critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and the electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem.
Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and recorded georadar data may thus show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
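A minimal frequency-domain sketch of the deconvolution idea described above, assuming a damped least-squares spectral division rather than the thesis's exact scheme: given observed traces and impulse responses simulated with the current model, the wavelet spectrum is a stabilized spectral ratio, which would be refreshed after each model update in the iterative procedure.

```python
# Hypothetical arrays: d and g hold n_traces x n_samples observed traces
# and simulated impulse responses; eps damps the spectral division.
import numpy as np

def estimate_wavelet(d, g, eps=1e-3):
    D = np.fft.rfft(d, axis=1)
    G = np.fft.rfft(g, axis=1)
    num = (D * np.conj(G)).sum(axis=0)   # stacked cross-spectra
    den = (np.abs(G) ** 2).sum(axis=0)   # stacked power spectra
    W = num / (den + eps * den.max())    # damped least-squares ratio
    return np.fft.irfft(W, n=d.shape[1])
```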

Relevance:

30.00%

Publisher:

Abstract:

Recent findings suggest an association between exposure to cleaning products and respiratory dysfunctions including asthma. However, little information is available about the quantitative airborne exposures of professional cleaners to volatile organic compounds deriving from cleaning products. During the first phase of the study, a systematic review of cleaning products was performed. Safety data sheets were reviewed to identify the most frequently added volatile organic compounds. It was found that professional cleaning products are complex mixtures of different components (compounds per cleaning product: 3.5 ± 2.8), and more than 130 chemical substances listed in the safety data sheets were identified in 105 products. The main groups of chemicals were fragrances, glycol ethers, surfactants, and solvents, and to a lesser extent phosphates, salts, detergents, pH stabilizers, acids, and bases. Up to 75% of the products contained substances labeled irritant (Xi), 64% harmful (Xn), and 28% corrosive (C). Hazards for the eyes (59%), the skin (50%), and by ingestion (60%) were the most frequently reported. Monoethanolamine, a strong irritant known to be involved in sensitizing mechanisms and allergic reactions, is frequently added to cleaning products. Monoethanolamine determination in air has traditionally been difficult, and the available air sampling and analysis methods were poorly suited to personal occupational exposure assessments. A convenient method was developed, with air sampling on impregnated glass fiber filters followed by one-step desorption, gas chromatography, and nitrogen-phosphorus selective detection. An exposure assessment was then conducted in the cleaning sector to determine airborne concentrations of monoethanolamine, glycol ethers, and benzyl alcohol during different cleaning tasks performed by professional cleaning workers in different companies, and to determine background air concentrations of formaldehyde, a known indoor air contaminant. The occupational exposure study was carried out in 12 cleaning companies, and personal air samples were collected for monoethanolamine (n=68), glycol ethers (n=79), benzyl alcohol (n=15), and formaldehyde (n=45). All measured air concentrations except those of ethylene glycol mono-n-butyl ether were far below (<1/10) the Swiss eight-hour occupational exposure limits; for butoxypropanol and benzyl alcohol, no occupational exposure limits were available. Although detected only once, ethylene glycol mono-n-butyl ether air concentrations (n=4) were high (49.5 mg/m³ to 58.7 mg/m³), hovering at the Swiss occupational exposure limit (49 mg/m³). Background air concentrations showed no presence of monoethanolamine, while glycol ethers were often present and formaldehyde was universally detected. Exposures were influenced by the amount of monoethanolamine in the cleaning product, cross ventilation, and spraying. During the last phase of the study, the collected data were used to test an existing exposure modeling tool. The exposure estimates of this so-called Bayesian tool converged toward the measured exposure range as more measured air concentrations were added, a convergence best described by an inverse second-order equation. The results suggest that the Bayesian tool is not adapted to predicting low exposures; it should also be tested with other datasets describing higher exposures.
Low exposures to different chemical sensitizers and irritants should be further investigated to better understand the development of respiratory disorders in cleaning workers. Prevention measures should especially focus on the incorrect use of cleaning products, to avoid high air concentrations near the exposure limits.
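For illustration, the inverse second-order convergence mentioned above can be fitted as follows; the deviation data are invented, not the study's measurements.

```python
# Fit deviation(n) = a + b/n + c/n^2, an inverse second-order curve,
# to illustrative (made-up) deviations between modeled and measured exposure.
import numpy as np
from scipy.optimize import curve_fit

def inverse_second_order(n, a, b, c):
    return a + b / n + c / n**2

n_added = np.array([1, 2, 3, 4, 6, 8, 12], dtype=float)  # measurements added
deviation = np.array([2.1, 1.2, 0.8, 0.6, 0.45, 0.35, 0.30])

params, _ = curve_fit(inverse_second_order, n_added, deviation)
```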

Relevance:

30.00%

Publisher:

Abstract:

Accurate modeling of flow instabilities requires computational tools able to deal with several interacting scales, from the scale at which fingers are triggered up to the scale at which their effects need to be described. The Multiscale Finite Volume (MsFV) method offers a framework to couple fine- and coarse-scale features by solving a set of localized problems, which are used both to define a coarse-scale problem and to reconstruct the fine-scale details of the flow. The MsFV method can be seen as an upscaling-downscaling technique, which is computationally more efficient than standard discretization schemes and more accurate than traditional upscaling techniques. We show that, although the method has proven accurate in modeling density-driven flow under stable conditions, the accuracy of the MsFV method deteriorates in the case of unstable flow, and an iterative scheme is required to control the localization error. To avoid the large computational overhead of the iterative scheme, we suggest several adaptive strategies for both flow and transport. In particular, the concentration gradient is used to identify a front region where instabilities are triggered and an accurate (iteratively improved) solution is required. Outside the front region the problem is upscaled, and both flow and transport are solved only at the coarse scale. This adaptive strategy leads to very accurate solutions at roughly the same computational cost as the non-iterative MsFV method. In many circumstances, however, an accurate description of flow instabilities requires a refinement of the computational grid rather than a coarsening. For these problems, we propose a modified iterative MsFV, which can be used as a downscaling method (DMsFV). Compared to other grid refinement techniques, the DMsFV clearly separates the computational domain into refined and non-refined regions, which can be treated separately and matched later. This gives great flexibility to employ different physical descriptions in different regions, where different equations could be solved, offering an excellent framework to construct hybrid methods.
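A minimal sketch of the adaptive criterion described above: flag a front region wherever the concentration gradient is large relative to its maximum, so that the iteratively improved solution is computed only there. The relative threshold and the 2D field layout are assumptions of this sketch.

```python
# Mask the cells where instabilities are triggered, based on the
# magnitude of the concentration gradient (illustrative threshold).
import numpy as np

def front_region(c, dx, rel_threshold=0.1):
    gy, gx = np.gradient(c, dx)          # gradient of the 2D concentration
    mag = np.hypot(gx, gy)
    return mag > rel_threshold * mag.max()
```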

Relevance:

30.00%

Publisher:

Abstract:

The Multiscale Finite Volume (MsFV) method has been developed to efficiently solve reservoir-scale problems while conserving fine-scale details. The method employs two grid levels: a fine grid and a coarse grid. The latter is used to calculate a coarse solution to the original problem, which is interpolated to the fine mesh. The coarse system is constructed from the fine-scale problem using restriction and prolongation operators that are obtained by introducing appropriate localization assumptions. Through a successive reconstruction step, the MsFV method is able to provide an approximate, but fully conservative, fine-scale velocity field. For very large problems (e.g., a one-billion-cell model), a two-level algorithm can remain computationally expensive. Depending on the upscaling factor, the computational expense comes either from the solution of the coarse problem or from the construction of the local interpolators (basis functions). To ensure numerical efficiency in the former case, the MsFV concept can be reapplied to the coarse problem, leading to a new, coarser level of discretization. One challenge in the use of a multilevel MsFV technique is to find an efficient reconstruction step to obtain a conservative fine-scale velocity field. In this work, we introduce a three-level Multiscale Finite Volume method (MlMsFV) and give a detailed description of the reconstruction step. Complexity analyses of the original MsFV method and the new MlMsFV method are discussed, and their performances in terms of accuracy and efficiency are compared.
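A conceptual sketch of the multilevel idea, under the assumption that restriction and prolongation operators are available from a user-supplied build_operators function (standing in for the MsFV constructions; the conservative flux reconstruction discussed above is omitted):

```python
# Reapply the two-level construction recursively until the coarse
# problem is small enough to solve directly.
import numpy as np

def multilevel_solve(A, b, build_operators, max_direct=1000):
    if A.shape[0] <= max_direct:
        return np.linalg.solve(A, b)     # coarsest level: direct solve
    R, P = build_operators(A)            # restriction and prolongation
    A_c = R @ A @ P                      # coarser-level operator
    x_c = multilevel_solve(A_c, R @ b, build_operators, max_direct)
    return P @ x_c                       # interpolate back to finer level
```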

Relevance:

30.00%

Publisher:

Abstract:

Plutonium and americium are radionuclides that are particularly difficult to measure in environmental samples because they are alpha emitters and therefore require careful separation before any measurement, whether by radiometric methods or ICP-SMS. Recent developments in extraction chromatography resins such as Eichrom® TRU and TEVA have resolved many of the analytical problems, but drawbacks such as low recovery and spectral interferences still occasionally occur. Here, we report on the use of the new Eichrom® DGA resin, in association with TEVA resin and high-pressure microwave acid leaching, for the sequential determination of plutonium and americium in environmental samples. The method results in average recoveries of 83 ± 15% for plutonium and 73 ± 22% for americium (n = 60), and a less than 10% deviation from the reference values of four IAEA reference materials and three samples from intercomparison exercises. The method is also suitable for measuring Pu-239 in water samples at the µBq/l level, if ICP-SMS is used for the measurement.

Relevance:

30.00%

Publisher:

Abstract:

The multiscale finite-volume (MSFV) method has been derived to efficiently solve large problems with spatially varying coefficients. The fine-scale problem is subdivided into local problems that can be solved separately and are coupled by a global problem. This algorithm, in consequence, shares some characteristics with two-level domain decomposition (DD) methods. However, the MSFV algorithm is different in that it incorporates a flux reconstruction step, which delivers a fine-scale mass conservative flux field without the need for iterating. This is achieved by the use of two overlapping coarse grids. The recently introduced correction function allows for a consistent handling of source terms, which makes the MSFV method a flexible algorithm that is applicable to a wide spectrum of problems. It is demonstrated that the MSFV operator, used to compute an approximate pressure solution, can be equivalently constructed by writing the Schur complement with a tangential approximation of a single-cell overlapping grid and incorporation of appropriate coarse-scale mass-balance equations.
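The Schur-complement construction mentioned above can be illustrated on a generic blocked system (the blocks below are arbitrary well-conditioned examples, not MSFV operators):

```python
# Eliminate x2 from [[A11, A12], [A21, A22]] [x1; x2] = [b1; b2]:
# the reduced operator for x1 is the Schur complement of A22.
import numpy as np

rng = np.random.default_rng(0)
A11 = rng.random((3, 3)) + 3 * np.eye(3)
A12 = rng.random((3, 2))
A21 = rng.random((2, 3))
A22 = rng.random((2, 2)) + 3 * np.eye(2)
b1, b2 = rng.random(3), rng.random(2)

S = A11 - A12 @ np.linalg.solve(A22, A21)                    # Schur complement
x1 = np.linalg.solve(S, b1 - A12 @ np.linalg.solve(A22, b2))
x2 = np.linalg.solve(A22, b2 - A21 @ x1)                     # back-substitution
```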

Relevance:

30.00%

Publisher:

Abstract:

A simple method for determining airborne monoethanolamine has been developed. Monoethanolamine determination has traditionally been difficult due to analytical separation problems. Even in recent sophisticated methods, this difficulty remains the major issue, often resulting in time-consuming sample preparations. Impregnated glass fiber filters were used for sampling. Desorption of monoethanolamine was followed by capillary GC analysis and nitrogen-phosphorus selective detection. Separation was achieved using a column specific for monoethanolamine (35% diphenyl and 65% dimethyl polysiloxane). The internal standard was quinoline. No derivatization steps were needed. The calibration range was 0.5-80 μg/mL with good correlation (R² = 0.996). Averaged overall precisions and accuracies were 4.8% and -7.8% for intraday (n = 30), and 10.5% and -5.9% for interday (n = 72) measurements. Mean recovery from spiked filters was 92.8% for the intraday variation, and 94.1% for the interday variation. Monoethanolamine on stored spiked filters was stable for at least 4 weeks at 5°C. This newly developed method was applied among professional cleaners; air concentrations (n = 4) were 0.42 and 0.17 mg/m³ for personal and 0.23 and 0.43 mg/m³ for stationary measurements. The monoethanolamine air concentration method described here is simple, sensitive, and convenient both in terms of sampling and analysis.
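For illustration, the reported calibration statistics correspond to an ordinary linear fit of detector response against standard concentration; the concentration/response pairs below are invented.

```python
# Linear calibration over the 0.5-80 μg/mL range and its R² (toy data).
import numpy as np

conc = np.array([0.5, 5.0, 10.0, 20.0, 40.0, 80.0])   # standards, μg/mL
resp = np.array([0.9, 9.6, 19.8, 40.5, 79.0, 161.2])  # detector response

slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept
r2 = 1 - ((resp - pred) ** 2).sum() / ((resp - resp.mean()) ** 2).sum()
```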

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Pediatric intensive care patients represent a population at high risk for drug-related problems. Few studies compare the activity of clinical pharmacists between countries. OBJECTIVE: To describe the drug-related problems identified, and the interventions made, by four pharmacists in pediatric cardiac and intensive care units. SETTING: Four pediatric centers in France, Quebec, Switzerland and Belgium. METHOD: This was a six-month multicenter, descriptive, prospective study conducted from August 1, 2009 to January 31, 2010. Drug-related problems and clinical interventions were compiled from the four centers, together with data on patients, drugs, interventions, documentation, approval and estimated impact. MAIN OUTCOME MEASURE: Number and type of drug-related problems encountered in a large pediatric inpatient population. RESULTS: A total of 996 interventions were recorded: 238 (24%) in France, 278 (28%) in Quebec, 351 (35%) in Switzerland and 129 (13%) in Belgium. These interventions targeted 270 patients (median age 21 months, 53% male): 88 (33%) in France, 56 (21%) in Quebec, 57 (21%) in Switzerland and 69 (26%) in Belgium. The main drug-related problems were inappropriate administration technique (29%), untreated indication (25%) and supra-therapeutic dose (11%). The pharmacists' interventions mostly concerned optimizing the mode of administration (22%), dose adjustment (20%) and therapeutic monitoring (16%). The two major drug classes that led to interventions were anti-infectives for systemic use (23%) and digestive system and metabolism drugs (22%). Interventions mainly involved residents and all clinical staff (21%). Of the 878 (88%) proposed interventions requiring physician approval, 860 (98%) were accepted. CONCLUSION: This descriptive study illustrates drug-related problems and the ability of clinical pharmacists to identify and resolve them in pediatric intensive care units in four French-speaking countries.

Relevance:

30.00%

Publisher:

Abstract:

The main goal of this paper is to propose a convergent finite volume method for a reaction–diffusion system with cross-diffusion. First, we sketch an existence proof for a class of cross-diffusion systems. Then the standard two-point finite volume fluxes are used in combination with a nonlinear positivity-preserving approximation of the cross-diffusion coefficients. Existence and uniqueness of the approximate solution are addressed, and it is also shown that the scheme converges to the corresponding weak solution for the studied model. Furthermore, we provide a stability analysis to study pattern-formation phenomena, and we perform two-dimensional numerical examples which exhibit formation of nonuniform spatial patterns. From the simulations it is also found that experimental rates of convergence are slightly below second order. The convergence proof uses two ingredients of interest for various applications, namely the discrete Sobolev embedding inequalities with general boundary conditions and a space-time $L^1$ compactness argument that mimics the compactness lemma due to Kruzhkov. The proofs of these results are given in the Appendix.
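As a hedged sketch of the discretization named above, here is a 1D two-point flux with a clipped (positivity-preserving) evaluation of a nonlinear cross-diffusion coefficient; the coefficient model a_cross*u is an assumption of this sketch, not the paper's scheme.

```python
# Two-point finite volume fluxes for one unknown u of a cross-diffusion
# pair (u, v): flux = -(a_self * du/dx + cross(u) * dv/dx) at each face.
import numpy as np

def twopoint_fluxes(u, v, dx, a_self=1.0, a_cross=0.5):
    du = np.diff(u) / dx
    dv = np.diff(v) / dx
    u_face = 0.5 * (u[:-1] + u[1:])                 # face-centered value
    cross = np.maximum(a_cross * u_face, 0.0)       # clip to preserve positivity
    return -(a_self * du + cross * dv)
```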

Relevance:

30.00%

Publisher:

Abstract:

Intensity Modulated RadioTherapy (IMRT) is a treatment technique that uses beams of modulated fluence. IMRT is now widespread in industrialized countries, due to its improvement of dose conformation around the target volume and its ability to lower doses to organs at risk in complex clinical cases. One way to carry out beam modulation is to sum smaller beams (beamlets) with the same incidence; this technique is called step-and-shoot IMRT. In a clinical context, it is necessary to verify treatment plans before the first irradiation, and plan verification is still an unresolved issue for this technique. Independent monitor unit calculation (representative of the weight of each beamlet) cannot be performed for step-and-shoot IMRT, because the beamlet weights are not known a priori but are calculated by inverse planning. Besides, treatment plan verification by comparison with measured data is time consuming and is performed in a simple geometry, usually in a cubic water phantom with all machine angles set to zero. In this work, an independent method for monitor unit calculation for step-and-shoot IMRT is described. The method is based on the Monte Carlo code EGSnrc/BEAMnrc. The Monte Carlo model of the head of the linear accelerator is validated by comparison of simulated and measured dose distributions in a large range of situations. The beamlets of an IMRT treatment plan are calculated individually by Monte Carlo, in the exact geometry of the treatment. The dose distributions of the beamlets are then converted to absorbed dose to water per monitor unit. The dose of the whole treatment in each volume element (voxel) can be expressed through a linear matrix equation of the monitor units and the dose per monitor unit of every beamlet. This equation is solved by a Non-Negative Least Squares fit algorithm (NNLS). However, not every voxel inside the patient volume can be used to solve this equation, because of computational limitations. Several ways of selecting voxels were tested, and the best choice consists in using the voxels inside the Planning Target Volume (PTV). The method presented in this work was tested on eight clinical cases representative of usual radiotherapy treatments. The monitor units obtained lead to global dose distributions clinically equivalent to those of the treatment planning system. Thus, this independent monitor unit calculation method for step-and-shoot IMRT is validated and can be used in clinical routine. It would be possible to consider applying a similar method to other treatment modalities, such as tomotherapy or volumetric modulated arc therapy.
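The matrix step described above maps directly onto a standard non-negative least-squares solve; the dose-per-monitor-unit matrix D and prescribed doses d below are toy numbers, not clinical data.

```python
# Solve D @ w ≈ d for monitor units w >= 0, where D holds the Monte
# Carlo dose per monitor unit of each beamlet in the selected voxels.
import numpy as np
from scipy.optimize import nnls

D = np.array([[1.0, 0.2, 0.0],
              [0.3, 0.9, 0.1],
              [0.0, 0.4, 1.1]])         # voxels x beamlets (illustrative)
d = np.array([2.0, 2.1, 1.9])           # prescribed dose in PTV voxels

w, residual = nnls(D, d)                # non-negative least squares fit
```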

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: In this study, we aimed to assess Inflammatory Bowel Disease (IBD) patients' needs and current nursing practice, to investigate to what extent the consensus statements of the European Crohn's and Colitis Organization on nursing roles in caring for patients with IBD concur with local practice. METHODS: We used a mixed-method convergent design to combine quantitative data prospectively collected in the Swiss IBD cohort study and qualitative data from structured interviews with IBD healthcare experts. Symptoms, quality of life, and anxiety and depression scores were retrieved from physician charts and patient self-reported questionnaires. Descriptive analyses were performed on the quantitative and qualitative data. RESULTS: 230 patients of a single center were included; 60% were male, and the median age was 40 years (range 18-85). The prevalence of abdominal pain was 42%. Self-reported data were obtained from 75 of the 230 patients. General health was perceived as significantly lower than in the general population (p < 0.001). The prevalence of tiredness was 73%; sleep problems, 78%; issues related to work, 20%; sexual constraints, 35%; diarrhea, 67%; fear of not finding a bathroom, 42%; depression, 11%; and anxiety symptoms, 23%. According to the expert interviews, the consensus statements are mostly relevant, but many recommendations are not yet realized in clinical practice. CONCLUSION: The identified prevalences may help clinicians detect patients at risk and improve patient management.