994 results for volume algorithm
Abstract:
The objective of this work was to evaluate the effectiveness of regression analysis models and artificial neural networks (ANNs) in predicting wood volume and aboveground biomass of the arboreal vegetation in a cerradão area. Wood volume and biomass were estimated with allometric equations developed for the study area. The vegetation indices used as predictor variables were derived from LISS-III sensor images, and basal area was determined from forest measurements. The accuracy of the equations was assessed by the correlation between estimated and observed values (r), the standard error of the estimate (Syx), and residual plots. The regression equations for total wood volume and stem volume (r of 0.96 and 0.97, and Syx of 11.92% and 9.72%, respectively) and for biomass (r of 0.91 and 0.92, and Syx of 22.73% and 16.80%, respectively) showed good fits. The neural networks also fitted wood volume well (r of 0.99 and 0.99, and Syx of 4.93% and 4.83%) and biomass (r of 0.97 and 0.98, and Syx of 8.92% and 7.96%, respectively). Basal area and the vegetation indices were effective for estimating wood volume and biomass in the cerradão. Observed wood volume and biomass did not differ statistically from the values estimated by the regression models and neural networks (χ², not significant); the ANNs, however, were more accurate.
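As a rough illustration of the model comparison described above, the sketch below fits an ordinary least-squares regression and a small neural network to synthetic plot data (basal area and one vegetation index as predictors, wood volume as response) and reports r and Syx. All data, variable names, and model settings are placeholders, not the equations or data of the study.

```python
# Minimal sketch with synthetic data; predictors, units, and model settings are
# placeholders, not the fitted equations of the study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 60
basal_area = rng.uniform(5, 25, n)                    # m2/ha, hypothetical plots
ndvi = rng.uniform(0.3, 0.8, n)                       # hypothetical vegetation index
volume = 8.0 * basal_area + 40.0 * ndvi + rng.normal(0, 5, n)   # m3/ha, synthetic
X = np.column_stack([basal_area, ndvi])

def accuracy(y, y_hat, n_params=3):
    """Correlation r and relative standard error of estimate Syx (%)."""
    r = np.corrcoef(y, y_hat)[0, 1]
    syx = np.sqrt(np.sum((y - y_hat) ** 2) / (len(y) - n_params))
    return r, 100.0 * syx / np.mean(y)

reg = LinearRegression().fit(X, volume)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                                 max_iter=5000, random_state=0)).fit(X, volume)

for name, model in [("regression", reg), ("ANN", ann)]:
    r, syx_pct = accuracy(volume, model.predict(X))
    print(f"{name}: r = {r:.3f}, Syx = {syx_pct:.2f}%")
```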
Abstract:
The main goal of this paper is to propose a convergent finite volume method for a reaction-diffusion system with cross-diffusion. First, we sketch an existence proof for a class of cross-diffusion systems. Then the standard two-point finite volume fluxes are used in combination with a nonlinear positivity-preserving approximation of the cross-diffusion coefficients. Existence and uniqueness of the approximate solution are addressed, and it is also shown that the scheme converges to the corresponding weak solution for the studied model. Furthermore, we provide a stability analysis to study pattern-formation phenomena, and we perform two-dimensional numerical examples which exhibit formation of nonuniform spatial patterns. From the simulations it is also found that experimental rates of convergence are slightly below second order. The convergence proof uses two ingredients of interest for various applications, namely the discrete Sobolev embedding inequalities with general boundary conditions and a space-time $L^1$ compactness argument that mimics the compactness lemma due to Kruzhkov. The proofs of these results are given in the Appendix.
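As a minimal illustration of the two-point flux idea, the sketch below applies an explicit finite volume scheme to a single scalar reaction-diffusion equation with no-flux boundaries; the grid, diffusion coefficient, and logistic reaction term are placeholders, and the cross-diffusion coupling, positivity-preserving coefficient approximation, and convergence analysis of the paper are not reproduced.

```python
# Minimal 1D sketch: explicit two-point-flux finite volume scheme for
# u_t = d*u_xx + u*(1-u) with no-flux boundaries. Grid, diffusion coefficient,
# and reaction term are illustrative placeholders, not the cross-diffusion
# system analysed in the paper.
import numpy as np

n, L, d = 100, 1.0, 1e-3
h = L / n
dt = 0.4 * h**2 / d                         # explicit time-step restriction
x = (np.arange(n) + 0.5) * h                # cell centres
u = 0.5 + 0.1 * np.cos(2 * np.pi * x)       # initial datum

def reaction(u):
    return u * (1.0 - u)                    # placeholder logistic reaction

for _ in range(2000):
    # two-point fluxes F_{i+1/2} = -d*(u_{i+1}-u_i)/h, zero flux at the boundary
    F = np.concatenate(([0.0], -d * np.diff(u) / h, [0.0]))
    u = u + dt / h * (F[:-1] - F[1:]) + dt * reaction(u)

# under the time-step restriction the explicit scheme keeps u within [0, 1]
print(round(u.min(), 3), round(u.max(), 3))
```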
Abstract:
Intensity Modulated Radiotherapy (IMRT) is a treatment technique that uses beams with modulated fluence. IMRT, now widely used in industrialised countries, improves dose conformation around the target volume and lowers the dose to organs at risk in complex clinical cases. One common way to carry out beam modulation is to sum smaller beams (segments, or beamlets) that share the same incidence; this technique is called step-and-shoot IMRT. In a clinical context, treatment plans must be verified before the first irradiation, and plan verification remains an unresolved issue for this technique. An independent monitor unit calculation (representative of the weight of each segment) cannot be performed for step-and-shoot IMRT, because the segment weights are not known a priori but are computed during inverse planning. Moreover, verification of treatment plans by comparison with measured data is time consuming and is performed in a simplified geometry, usually a cubic water phantom with all machine angles set to zero. In this work, an independent method for monitor unit calculation for step-and-shoot IMRT is described. The method is based on the Monte Carlo code EGSnrc/BEAMnrc, whose model of the linear accelerator head was validated against measured dose distributions over a wide range of situations. The segments of an IMRT treatment plan are simulated individually by Monte Carlo in the exact geometry of the treatment, and the resulting dose distributions are converted into absorbed dose to water per monitor unit. The total treatment dose in each volume element (voxel) of the patient can then be expressed as a linear matrix equation relating the monitor units and the dose per monitor unit of each segment. This equation is solved with a Non-Negative Least Squares (NNLS) fit algorithm. Because not every voxel inside the patient volume can be used to solve this equation, for computational reasons, several voxel selection schemes were tested; the best choice is to use the voxels contained in the Planning Target Volume (PTV). The method was tested on eight clinical cases representative of usual radiotherapy treatments. The monitor units obtained lead to global dose distributions that are clinically equivalent to those produced by the treatment planning system. This independent monitor unit calculation method for step-and-shoot IMRT is therefore validated for clinical routine use. By analogy, a similar method could be considered for other treatment modalities, such as tomotherapy or volumetric modulated arc therapy.
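The matrix step described above, recovering non-negative monitor units from per-segment dose-per-MU distributions in a set of selected voxels, can be sketched with SciPy's non-negative least squares routine; the dose matrix, voxel count, and segment weights below are synthetic placeholders rather than clinical data.

```python
# Minimal sketch of the monitor-unit recovery step: prescribed = dose_per_mu @ MU,
# solved for MU >= 0 with non-negative least squares. Numbers are synthetic, not
# patient data; voxel selection (e.g. PTV only) is assumed to have been done already.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_voxels, n_segments = 500, 12

# dose per monitor unit delivered by each segment to each selected voxel (Gy/MU)
dose_per_mu = rng.uniform(0.0, 2e-3, size=(n_voxels, n_segments))

mu_true = rng.uniform(20.0, 120.0, n_segments)     # "unknown" segment weights
prescribed = dose_per_mu @ mu_true                 # target dose in the voxels

mu_fit, residual = nnls(dose_per_mu, prescribed)
# for this consistent synthetic system the fitted weights reproduce mu_true
print(np.allclose(mu_fit, mu_true, rtol=1e-3), residual)
```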
Abstract:
Left ventricular hypertrophy (LVH) is due to pressure overload or mechanical stretch and is thought to be associated with remodeling of gap-junctions. We investigated whether the expression of connexin 43 (Cx43) is altered in humans in response to different degrees of LVH. The expression of Cx43 was analyzed by quantitative polymerase chain reaction, Western blot analysis and immunohistochemistry on left ventricular biopsies from patients undergoing aortic or mitral valve replacement. Three groups were analyzed: patients with aortic stenosis with severe LVH (n=9) versus only mild LVH (n=7), and patients with LVH caused by mitral regurgitation (n=5). Cx43 mRNA expression and protein expression were similar in the three groups studied. Furthermore, immunohistochemistry revealed no change in Cx43 distribution. We can conclude that when compared with mild LVH or with LVH due to volume overload, severe LVH due to chronic pressure overload is not accompanied by detectable changes of Cx43 expression or spatial distribution.
Abstract:
BACKGROUND: Reading volume and mammography screening performance appear positively correlated. Quality and effectiveness were compared across low-volume screening programmes targeting relatively small populations and operating under the same decentralised healthcare system. Except for the accreditation of second readers (restrictive vs non-restrictive strategy), these organised programmes had similar screening regimens, procedures and duration, which maximises comparability. Variation in performance and its determinants were explored in order to improve mammography practice and optimise screening performance. METHODS: About 200,000 screens performed between 1999 and 2006 (4 rounds) in the 3 longest-standing Swiss cantonal programmes (Vaud, Geneva and Valais) were assessed. Indicators of quality and effectiveness were assessed according to European standards. Interval cancers were identified through linkage with cancer registry records. RESULTS: The Swiss programmes met most European performance standards, with a substantial, favourable cancer stage shift. Up to a two-fold variation occurred for several performance indicators. In subsequent rounds, compared with the programmes (Vaud and Geneva) that applied a restrictive selection strategy for second readers, the proportions of in situ lesions and of small cancers (≤1 cm) were one third lower and halved, respectively, and the proportion of advanced lesions (stage II+) was nearly 50% higher in the programme without a restrictive selection strategy. The discrepancy in the second-year proportional incidence of interval cancers appears to be multicausal. CONCLUSION: Differences in performance could partly be explained by a selective strategy for second readers and prior experience in service screening, but not by the levels of opportunistic screening and programme attendance. This study provides clues for enhancing mammography screening performance in low-volume programmes.
Abstract:
This work concerns the first 26 patients who underwent lung volume reduction surgery for severe emphysema in Lausanne between July 1995 and April 1998. It presents their general preoperative characteristics, their radiological and cardiological evaluation, and their pulmonary function measurements. It then examines the operative technique, the perioperative period, and the patients' postoperative functional results. It also reports the interviews held with each patient to obtain their own assessment of the operation and of its results. This information is then compared with the results published in the literature: the preoperative profile, the operative technique, the perioperative period, and the postoperative functional results are compared. A group of patients who did not respond functionally to surgery is then examined in an attempt to define preoperative criteria of good or poor response, and these criteria are related to the data in the literature. The group of poor functional responders is then compared with the group of patients who did not consider themselves improved by the operation. A statistical analysis of the pre- and postoperative functional values shows a significant improvement in FEV1, residual volume, PaCO2, distance covered in the six-minute walk test, and dyspnoea induced by the test as rated on a Borg scale. The improvement in postoperative PaO2 and in oxygen saturation during the walk test was not statistically significant. This Lausanne cohort is comparable to other series in terms of preoperative profile and severity of functional impairment, operative technique, mortality and morbidity, and postoperative functional results. The observed difference is a higher mean preoperative residual volume that improved more postoperatively than reported in the literature. Like the literature, this study does not identify clear preoperative criteria predictive of a good response to surgery, nor did it find agreement between the improvement in patients' symptoms and the postoperative functional results. It would therefore be worthwhile to continue this work with a more systematic prospective evaluation of operated patients and to compare it with the results of patients who were refused, or who declined, lung volume reduction surgery, in order to better assess the long-term benefits.
Abstract:
Adaptation of Kumar's algorithm for solving systems of equations with Toeplitz matrices over the reals to finite fields, in O(n log n) time.
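For orientation only, the sketch below sets up a small Toeplitz system over GF(p) and solves it with a generic cubic-cost modular inverse; it illustrates the problem being solved, not the O(n log n) Kumar-type algorithm adapted in the work, and the modulus and matrix entries are arbitrary placeholders.

```python
# Minimal sketch of the problem setting only (not the fast Kumar-type algorithm):
# build a Toeplitz system over GF(p) and solve it with an O(n^3) modular inverse,
# a correctness baseline against which a fast solver could be checked.
from sympy import Matrix

p = 101                                          # prime modulus for GF(p)
col = [3, 7, 2, 9]                               # first column of T
row = [3, 5, 8, 4]                               # first row of T (row[0] == col[0])
n = len(col)

T = Matrix(n, n, lambda i, j: col[i - j] if i >= j else row[j - i])
x_true = Matrix([1, 2, 3, 4])
b = (T * x_true).applyfunc(lambda v: v % p)      # right-hand side in GF(p)

x = (T.inv_mod(p) * b).applyfunc(lambda v: v % p)   # baseline solve over GF(p)
print(x == x_true)                               # True: the system is recovered
```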
Abstract:
The main motivation of this work was to implement the Rijndael-AES algorithm in a worksheet of SageMath, a freely distributed mathematical software package under active development, taking advantage of its built-in tools and functionality.
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways. An algorithm that is most efficient with one representation may be less efficient with others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the centers, widths, and weights of the basis functions as possible variables, both with control parameters kept fixed and with them adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions with all parameters fixed. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
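As a minimal sketch of the algorithm under discussion, the following DE/rand/1/bin loop with fixed control parameters F and CR minimises a standard test function; the fuzzy adaptation of the control parameters and the radial basis function network training studied in the thesis are not reproduced here, and all settings are illustrative.

```python
# Minimal DE/rand/1/bin sketch with fixed control parameters F and CR, minimising
# the sphere test function. Population size, F, CR, and generation count are
# illustrative choices, not the settings studied in the thesis.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: v = a + F * (b - c) with three distinct members other than i
            idx = rng.choice([j for j in range(pop_size) if j != i], size=3,
                             replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with rate CR (at least one component from the mutant)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # greedy selection: keep the trial if it is at least as good
            f_trial = f(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    return pop[fitness.argmin()], fitness.min()

best_x, best_f = differential_evolution(sphere, bounds=[(-5.0, 5.0)] * 10)
print(best_f)   # small value near zero for the sphere test function
```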
Abstract:
The purpose of this study was to investigate some important features of granular flows and suspension flows by computational simulation methods. Granular materials have been considered an independent state of matter because of their complex behaviour: they sometimes behave like a solid, sometimes like a fluid, and sometimes contain both phases in equilibrium. Computer simulation of dense shear granular flows of monodisperse, spherical particles shows that the collisional contact model yields coexistence of solid and fluid phases, while the frictional model produces a uniform flow of the fluid phase. A comparison between the stress signals from the simulations and from experiments revealed that the collisional model gives the better match with the experimental evidence. Although the effect of gravity is found to be important in the sedimentation of the solid part, the stick-slip behaviour associated with the collisional model is more similar to that seen in experiments. Mathematical formulations based on kinetic theory have been derived for moderate solid volume fractions under the assumption of a homogeneous flow. To provide simulations of such an ideal flow, unbounded granular shear flows were simulated, so that homogeneous flow properties could be achieved at moderate solid volume fractions. A new algorithm, the nonequilibrium approach, was introduced to characterise self-diffusion in granular flows. Using this algorithm, a one-way flow can be extracted from the entire flow, which not only provides a straightforward calculation of the self-diffusion coefficient but can also qualitatively determine the deviation of self-diffusion from the linear law in some regions near the wall in bounded flows. The average lateral self-diffusion coefficient calculated with this method showed good agreement with the predictions of the kinetic theory formulation. In continuation of the computer simulation of shear granular flows, numerical and theoretical investigations were carried out on mass transfer and particle interactions in particulate flows. In this context, the boundary element method and its combination with the spectral method, exploiting the special capabilities of wavelets, are introduced as efficient numerical methods for solving the governing equations of mass transfer in particulate flows. A theoretical formulation of fluid dispersivity in suspension flows revealed that the dispersivity depends on the fluid properties and particle parameters as well as on fluid-particle and particle-particle interactions.
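As a simple illustration of what a lateral self-diffusion coefficient measures, the sketch below estimates it from synthetic particle trajectories via the standard mean-square-displacement relation; this is not the nonequilibrium one-way-flow approach introduced in the thesis, and the trajectories are random walks rather than granular-flow data.

```python
# Minimal sketch: estimate a lateral self-diffusion coefficient from particle
# trajectories via the mean-square displacement, D = MSD / (2 t) in one dimension.
# This is the standard MSD route, not the nonequilibrium one-way-flow approach of
# the thesis; the trajectories here are synthetic random walks.
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_steps, dt = 500, 2000, 1e-3
D_true = 0.05                                   # diffusivity used to generate data

# lateral displacements of random-walking particles, shape (steps, particles)
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, n_particles))
y = np.cumsum(steps, axis=0)

t = np.arange(1, n_steps + 1) * dt
msd = np.mean(y ** 2, axis=1)                   # MSD relative to the start position
D_est = np.polyfit(t, msd, 1)[0] / 2.0          # slope of MSD vs t, divided by 2 (1-D)
print(D_true, D_est)                            # typically agree to within a few percent
```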