894 results for non-standard lexical functions
Abstract:
Thesis completed under joint supervision (cotutelle) with Université Paris Diderot (Paris 7)
Abstract:
In recent years, the dosimetry community has sought to update broad-beam dosimetry protocols such as TG-51 (AAPM) and TRS-398 (IAEA) to non-standard fields, which require an additional correction factor. These correction factors, however, are difficult to determine accurately within an acceptable time. For small fields, the factors increase rapidly with field size, while for IMRT fields, uncertainties in detector positioning make case-by-case correction impractical. In this study, a theoretical criterion based on the dosimetric response function of detectors is developed to determine in which situations dosimeters can be used without correction. The responses of four ionization chambers, a liquid ionization chamber, a diamond detector, a diode, an alanine detector, and a scintillation detector are characterized at 6 MV and 25 MV. Several strategies are also suggested to reduce or eliminate the correction factors, such as reporting the absorbed dose to a volume and modifying the non-sensitive materials of the detector to compensate for the mass-density effect. A new density-compensation method based on a perturbation function is presented. Finally, the results show that the scintillation detector can measure the non-standard fields used clinically with a correction below 1%.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Aim A new method of penumbral analysis is implemented which allows an unambiguous determination of field size and penumbra size and quality for small fields and other non-standard fields. Both source occlusion and lateral electronic disequilibrium affect the size and shape of cross-axis profile penumbrae; each is examined in detail. Method A new method of penumbral analysis is implemented in which the square of the derivative of the cross-axis profile is plotted. The resultant graph displays two peaks in place of the two penumbrae. This allows a strong visualisation of the quality of a field penumbra, as well as a mathematically consistent method of determining field size (the distance between the two peaks' maxima) and penumbra (the full width at tenth maximum of each peak). Cross-axis profiles were simulated in a water phantom at a depth of 5 cm using Monte Carlo modelling, for field sizes between 5 and 30 mm. The field size and penumbra size of each field were calculated using the method above, as well as with the traditional definitions set out in IEC 976. The effects of source occlusion and lateral electronic disequilibrium on the penumbrae were isolated by repeating the simulations with electron transport removed and with an electron spot size of 0 mm, respectively. Results All field sizes calculated using the traditional and proposed methods agreed within 0.2 mm. The penumbra size measured using the proposed method was systematically 1.8 mm larger than that from the traditional method at all field sizes. The size of the source had a larger effect on the size of the penumbra than did lateral electronic disequilibrium, particularly at very small field sizes. Conclusion Traditional methods of calculating field size and penumbra prove mathematically adequate for small fields. However, the field size definition proposed in this study would be more robust for other non-standard fields, such as flattening-filter-free fields.
Source occlusion plays a bigger role than lateral electronic disequilibrium in small field penumbra size.
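The derivative-squared analysis described above can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the function name, synthetic tanh-edge profile, and grid spacing are assumptions for the demo:

```python
import numpy as np

def analyse_profile(x, dose):
    """Penumbral analysis via the squared derivative of a cross-axis profile.

    Field size = distance between the maxima of the two derivative-squared peaks.
    Penumbra   = full width at tenth maximum (FWTM) of each peak.
    `x` is the off-axis position in mm; `dose` is the sampled profile.
    """
    d2 = np.gradient(dose, x) ** 2           # two peaks replace the two penumbrae
    mid = len(x) // 2
    i_left = int(np.argmax(d2[:mid]))        # left-penumbra peak index
    i_right = mid + int(np.argmax(d2[mid:])) # right-penumbra peak index
    field_size = x[i_right] - x[i_left]

    def fwtm(i_peak):
        # Walk outward from the peak until the curve falls below a tenth of its maximum
        thresh = d2[i_peak] / 10.0
        lo = i_peak
        while lo > 0 and d2[lo] > thresh:
            lo -= 1
        hi = i_peak
        while hi < len(x) - 1 and d2[hi] > thresh:
            hi += 1
        return x[hi] - x[lo]

    return field_size, fwtm(i_left), fwtm(i_right)

# Synthetic profile: nominal 20 mm field with smooth tanh edges (illustrative only)
x = np.linspace(-30.0, 30.0, 601)
dose = 0.5 * (np.tanh((x + 10) / 2) - np.tanh((x - 10) / 2))
field_size, pen_left, pen_right = analyse_profile(x, dose)
```

For this symmetric test profile the two peaks sit at the field edges, so the recovered field size is close to the nominal 20 mm and the two penumbra widths agree.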
Abstract:
Non-standard finite difference methods (NSFDM) introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature in capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers–Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791–797] recently introduced a NSFDM in conservative form. This method captures the shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250–2271] in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235–276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term.
The relaxation parameter (λ) is chosen locally on a three-point grid stencil, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures the shock waves with an accuracy better than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods such as the Roe scheme and ENO schemes.
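The exactness claim for linear hyperbolic conservation laws can be illustrated with a minimal sketch: for linear advection u_t + a u_x = 0, Mickens' non-standard choice of denominator ties the time step to the grid spacing, dt = dx/a (unit Courant number), so the upwind update u_j^{n+1} = u_{j-1}^n is an exact grid shift. The domain, grid spacing, initial bump, and step count below are assumptions chosen for the demonstration:

```python
import numpy as np

a, dx = 1.0, 0.05
dt = dx / a                              # non-standard step: dt tied to dx, Courant number = 1
x = np.arange(0.0, 1.0, dx)              # periodic domain [0, 1)
u0 = np.exp(-100.0 * (x - 0.3) ** 2)     # smooth initial bump

nsteps = 8
u = u0.copy()
for _ in range(nsteps):
    u = np.roll(u, 1)                    # u_j^{n+1} = u_{j-1}^n (periodic upwind shift)

# Exact solution at the nodes: the initial profile advected by a*t, wrapped periodically
exact = np.exp(-100.0 * ((((x - a * nsteps * dt) % 1.0) - 0.3) ** 2))
```

Because the shift per step is exactly one cell, the numerical solution coincides with the exact nodal values to machine precision; no numerical dissipation or dispersion appears, which is the property the conservative NSFDM carries over to shock capture in the nonlinear case.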
Abstract:
In a search for new phenomena in a signature suppressed in the standard model of elementary particles (SM), we compare the inclusive production of events containing a lepton, a photon, significant transverse momentum imbalance (MET), and a jet identified as containing a b-quark, to SM predictions. The search uses data produced in proton-antiproton collisions at 1.96 TeV corresponding to 1.9 fb-1 of integrated luminosity taken with the CDF detector at the Fermilab Tevatron. We find 28 lepton+photon+MET+b events versus an expectation of 31.0 +4.1/-3.5 events. If we further require events to contain at least three jets and large total transverse energy, simulations predict that the largest SM source is top-quark pair production with an additional radiated photon, ttbar+photon. In the data we observe 16 ttbar+photon candidate events versus an expectation from SM sources of 11.2 +2.3/-2.1. Assuming the difference between the observed number and the predicted non-top-quark total is due to SM top-quark production, we estimate the ttbar+photon cross section to be 0.15 ± 0.08 pb.
Abstract:
We discuss solvability issues of H−/H2/∞ optimal fault detection problems in the most general setting. A solution approach is presented which successively reduces the initial problem to simpler ones. The last computational step may in general involve the solution of a non-standard H−/H2/∞ optimization problem, for which we discuss possible solution approaches. Using an appropriate definition of the H− index, we provide a complete solution of this problem in the case of the H2-norm. Furthermore, we discuss the solvability issues in the case of the H∞-norm.
Abstract:
This study investigated non-specific immune functions of the F-2 generation of "all-fish" growth hormone transgenic carp, Cyprinus carpio L. Lysozyme activity was 145.0 (+/- 30.7) U ml(-1) in the transgenic fish serum and 105.0 (+/- 38.7) U ml(-1) in age-matched non-transgenic control fish serum, a significant difference (P < 0.01). The serum bactericidal activity in the transgenics was significantly higher than that in the controls (P < 0.05), with percentage serum killing of 59.5% (+/- 6.83%) and 50.8% (+/- 8.67%), respectively. Values for leukocrit and phagocytic percentage of head kidney macrophages were higher in transgenics than in controls (P < 0.05). However, the phagocytic indices in the transgenics and the controls were not different. In addition, the mean body weight of the transgenics was 63.4 (+/- 6.65) g, much higher than that of the controls [39.2 (+/- 3.30) g, P < 0.01]. The absolute weight of the spleen of the transgenics [0.13 (+/- 0.03) g] was higher than that of the controls [0.08 (+/- 0.02) g, P < 0.01]. However, there was no difference in the relative weight of the spleen between the transgenics and the controls, with the spleen mass index being 0.21% (+/- 0.02%) and 0.20% (+/- 0.03%), respectively. This study suggests that "all-fish" growth hormone transgene expression could stimulate not only the growth but also the non-specific immune functions of carp. (c) 2006 Published by Elsevier B.V.