959 results for DIMENSIONAL MODEL
Abstract:
South Peak is a 7 Mm3 potentially unstable rock mass located adjacent to the 1903 Frank Slide on Turtle Mountain, Alberta. This paper presents three-dimensional numerical rock slope stability models and compares them with a previous conceptual slope instability model based on discontinuity surfaces identified using an airborne LiDAR digital elevation model (DEM). Rock mass conditions at South Peak are described using the Geological Strength Index and point load tests, whilst the mean discontinuity set orientations and characteristics are based on approximately 500 field measurements. A kinematic analysis was first conducted to evaluate probable simple discontinuity-controlled failure modes. The potential for wedge failure was further assessed by considering the orientation of wedge intersections over the airborne LiDAR DEM and through a limit equilibrium combination analysis. Block theory was used to evaluate the finiteness and removability of blocks in the rock mass. Finally, the complex interaction between discontinuity sets and the topography within South Peak was investigated through three-dimensional distinct element models using the code 3DEC. The influences of individual discontinuity sets, scale effects, friction angle and persistence along the discontinuity surfaces on slope stability were all investigated with this code.
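The kinematic analysis mentioned above screens discontinuities for simple failure modes before any numerical modelling. A minimal sketch of one such screen, a textbook Markland-style planar-sliding test, is given below; this is our own illustration (function name and tolerance are ours), not the paper's workflow:

```python
# Simplified planar-sliding kinematic check (textbook Markland-style
# test, our sketch -- not the paper's actual procedure): sliding on a
# discontinuity is kinematically feasible when the plane "daylights"
# in the slope face and dips more steeply than the friction angle.
def planar_sliding_feasible(slope_dip, slope_dip_dir,
                            joint_dip, joint_dip_dir,
                            friction_angle, tolerance=20.0):
    """All angles in degrees; dip directions measured clockwise from north."""
    # The joint's dip direction must lie within +/- tolerance of the
    # slope face's dip direction, and the joint must dip less steeply
    # than the face, so that the plane daylights in the slope.
    diff = abs((joint_dip_dir - slope_dip_dir + 180) % 360 - 180)
    daylights = diff <= tolerance and joint_dip < slope_dip
    # The joint must dip more steeply than the friction angle to slide.
    steep_enough = joint_dip > friction_angle
    return daylights and steep_enough

ok = planar_sliding_feasible(60, 135, 40, 140, friction_angle=30)       # feasible
blocked = planar_sliding_feasible(60, 135, 25, 140, friction_angle=30)  # friction holds
```

Wedge and toppling modes have analogous stereographic tests; in practice these checks are run per discontinuity set against the local slope orientation from the DEM.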
Abstract:
We investigate in this note the dynamics of a one-dimensional Keller-Segel type model on the half-line. In contrast to the classical configuration, the chemical production term is located on the boundary. We prove, under suitable assumptions, the following dichotomy, which is reminiscent of the two-dimensional Keller-Segel system: solutions are global if the mass is below the critical mass, they blow up in finite time above the critical mass, and they converge to some equilibrium at the critical mass. Entropy techniques are presented which aim at providing quantitative convergence results for the subcritical case. This note is completed with a brief introduction to a more realistic model (still one-dimensional).
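Schematically, writing M for the conserved mass and M_c for the critical mass (our shorthand; the precise hypotheses are those stated in the note), the trichotomy reads:

```latex
\begin{cases}
M < M_c : & \text{solutions are global in time},\\
M = M_c : & \text{solutions converge to some equilibrium},\\
M > M_c : & \text{solutions blow up in finite time}.
\end{cases}
```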
Abstract:
Breast cancer is one of the most common cancers, affecting one in eight women during their lives. Survival rates have increased steadily thanks to early diagnosis with mammography screening and more efficient treatment strategies. Post-operative radiation therapy is a standard of care in the management of breast cancer and has been shown to reduce efficiently both local recurrence rate and breast cancer mortality. Radiation therapy is however associated with some late effects for long-term survivors. Radiation-induced secondary cancer is a relatively rare but severe late effect of radiation therapy. Currently, radiotherapy plans are essentially optimized to maximize tumor control and minimize late deterministic effects (tissue reactions) that are mainly associated with high doses (≫ 1 Gy). With improved cure rates and new radiation therapy technologies, it is also important to evaluate and minimize secondary cancer risks for different treatment techniques. This is a particularly challenging task due to the large uncertainties in the dose-response relationship. In contrast with late deterministic effects, secondary cancers may be associated with much lower doses, and therefore out-of-field doses (also called peripheral doses), which are typically below 1 Gy, need to be determined accurately. Out-of-field doses result from patient scatter and head scatter from the treatment unit. These doses are particularly challenging to compute, and we characterized them by Monte Carlo (MC) calculation. A detailed MC model of the Siemens Primus linear accelerator has been thoroughly validated with measurements. We investigated the accuracy of such a model for retrospective dosimetry in epidemiological studies on secondary cancers. Considering that patients in such large studies could be treated on a variety of machines, we assessed the uncertainty in reconstructed peripheral dose due to the variability of peripheral dose among various linac geometries.
For large open fields (> 10x10 cm2), the uncertainty would be less than 50%, but for small fields and wedged fields the uncertainty in reconstructed dose could rise up to a factor of 10. It was concluded that such a model could be used for conventional treatments using large open fields only. The MC model of the Siemens Primus linac was then used to compare out-of-field doses for different treatment techniques in a female whole-body CT-based phantom. Current techniques such as conformal wedge-based radiotherapy and hybrid IMRT were investigated and compared to older two-dimensional radiotherapy techniques. MC doses were also compared to those of a commercial Treatment Planning System (TPS). While the TPS is routinely used to determine the dose to the contralateral breast and the ipsilateral lung, which are mostly out of the treatment fields, we have shown that these doses may be highly inaccurate depending on the treatment technique investigated. MC shows that hybrid IMRT is dosimetrically similar to three-dimensional wedge-based radiotherapy within the field, but offers substantially reduced doses to out-of-field healthy organs. Finally, many different approaches to risk estimation extracted from the literature were applied to the calculated MC dose distributions. Absolute risks varied substantially, as did the ratio of risks between two treatment techniques, reflecting the large uncertainties involved with current risk models. Despite all these uncertainties, the hybrid IMRT investigated resulted in systematically lower cancer risks than any of the other treatment techniques. More epidemiological studies with accurate dosimetry are required in the future to construct robust risk models. In the meantime, any treatment strategy that reduces out-of-field doses to healthy organs should be investigated. Electron radiotherapy might offer interesting possibilities in this regard.
Abstract:
This paper characterizes a mixed strategy Nash equilibrium in a one-dimensional Downsian model of two-candidate elections with a continuous policy space, where candidates are office motivated and one candidate enjoys a non-policy advantage over the other candidate. We assume that voters have quadratic preferences over policies and that their ideal points are drawn from a uniform distribution over the unit interval. In our equilibrium the advantaged candidate chooses the expected median voter with probability one and the disadvantaged candidate uses a mixed strategy that is symmetric around it. We show that this equilibrium exists if the number of voters is large enough relative to the size of the advantage.
Abstract:
A mathematical model is developed to analyse the combined flow and solidification of a liquid in a small pipe or two-dimensional channel. In either case the problem reduces to solving a single equation for the position of the solidification front. Results show that for a large range of flow rates the closure time is approximately constant, and its value depends primarily on the wall temperature and channel width. However, the ice shape at closure will be very different for low and high fluxes. As the flow rate increases, the closure time starts to depend on the flow rate, until eventually it increases dramatically; beyond this point the pipe never closes.
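As a much-reduced illustration of a single front equation of this kind, consider a no-flow, quasi-steady caricature (our own sketch, not the paper's model, which couples the front to the flow): conduction through the growing ice layer gives beta * ds/dt = dT / s, so the closure time is set by the wall undercooling dT and the channel width:

```python
# Quasi-steady, no-flow caricature of freezing inward in a channel of
# width H: ice thickness s(t) obeys  beta * ds/dt = dT / s
# (conduction through the ice layer). This is our illustrative
# reduction; the paper's front equation additionally involves the flow.
# beta lumps latent heat over conductivity; dT is the wall undercooling.
beta, dT, H = 1.0, 0.5, 1.0

def closure_time(beta, dT, H, ds=1e-4):
    """March the front in small thickness steps until it spans the channel."""
    s, t = ds, 0.0
    while s < H:
        t += beta * s / dT * ds   # dt = (beta * s / dT) ds
        s += ds
    return t

t_num = closure_time(beta, dT, H)
t_exact = beta * H**2 / (2 * dT)   # closed form: t = beta * s^2 / (2 dT)
```

The closed form shows the quadratic dependence on channel width and inverse dependence on wall undercooling that the abstract attributes to the low-flux regime.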
Abstract:
In this paper a model is developed to describe the three-dimensional contact melting process of a cuboid on a heated surface. The mathematical description involves two heat equations (one in the solid and one in the melt), the Navier-Stokes equations for the flow in the melt, a Stefan condition at the phase change interface and a force balance between the weight of the solid and the countering pressure in the melt. In the solid an optimised heat balance integral method is used to approximate the temperature. In the liquid the small aspect ratio allows the Navier-Stokes and heat equations to be simplified considerably so that the liquid pressure may be determined using an eigenfunction expansion, and finally the problem is reduced to solving three first-order ordinary differential equations. Results are presented showing the evolution of the melting process. Further reductions to the system are made to provide simple guidelines concerning the process. Comparison of the solutions with experimental data on the melting of n-octadecane shows excellent agreement.
Abstract:
PURPOSE: To determine the diagnostic value of the intravascular contrast agent gadocoletic acid (B-22956) in three-dimensional, free breathing coronary magnetic resonance angiography (MRA) for stenosis detection in patients with suspected or known coronary artery disease. METHODS: Eighteen patients underwent three-dimensional, free breathing coronary MRA of the left and right coronary system before and after intravenous application of a single dose of gadocoletic acid (B-22956) using three different dose regimens (group A 0.050 mmol/kg; group B 0.075 mmol/kg; group C 0.100 mmol/kg). Precontrast scanning followed a coronary MRA standard non-contrast T2 preparation/turbo-gradient echo sequence (T2Prep); for postcontrast scanning an inversion-recovery gradient echo sequence was used (real-time navigator correction for both scans). In pre- and postcontrast scans quantitative analysis of coronary MRA data was performed to determine the number of visible side branches, vessel length and vessel sharpness of each of the three coronary arteries (LAD, LCX, RCA). The number of assessable coronary artery segments was determined to calculate sensitivity and specificity for detection of stenosis ≥ 50% on a segment-to-segment basis (16-segment model) in pre- and postcontrast scans with x-ray coronary angiography as the standard of reference. RESULTS: Dose group B (0.075 mmol/kg) was preferable with regard to improvement of MR angiographic parameters: in postcontrast scans all MR angiographic parameters increased significantly except for the number of visible side branches of the left circumflex artery. In addition, assessability of coronary artery segments significantly improved postcontrast in this dose group (67 versus 88%, p < 0.01). Diagnostic performance (sensitivity, specificity, accuracy) was 83, 77 and 78% for precontrast and 86, 95 and 94% for postcontrast scans.
CONCLUSIONS: The use of gadocoletic acid (B-22956) results in an improvement of MR angiographic parameters, assessability of coronary segments and detection of coronary stenoses ≥ 50%.
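The three diagnostic-performance figures quoted above follow mechanically from a segment-by-segment confusion matrix against the x-ray reference. A minimal sketch with illustrative counts (ours, not the study's data):

```python
# Sensitivity, specificity and accuracy from a segment-by-segment
# comparison against the reference standard (illustrative counts only).
def diagnostic_performance(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)            # stenoses correctly detected
    specificity = tn / (tn + fp)            # healthy segments correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical postcontrast tally over assessable segments:
sens, spec, acc = diagnostic_performance(tp=18, fp=4, tn=80, fn=3)
```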
Abstract:
Viruses evolve rapidly, and HIV in particular is known to be one of the fastest evolving human viruses. It is now commonly accepted that viral evolution is the cause of the intriguing dynamics exhibited during HIV infections and of the ultimate success of the virus in its struggle with the immune system. To study viral evolution, we use a simple mathematical model of the within-host dynamics of HIV which incorporates random mutations. In this model, we assume a continuous distribution of viral strains in a one-dimensional phenotype space, where random mutations are modelled by diffusion. Numerical simulations show that random mutations combined with competition result in evolution towards higher Darwinian fitness: a stable traveling wave of evolution, moving towards higher levels of fitness, is formed in the phenotype space.
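A minimal numerical caricature of this mechanism (our sketch, not the paper's model: explicit finite differences, growth rate equal to phenotype minus the current mean fitness, mutation as diffusion) already shows the mean phenotype drifting toward higher fitness:

```python
# Selection plus mutation in a 1-D phenotype space (illustrative only):
# viral density u(x) grows at a rate set by relative fitness (x - mean)
# and mutates by diffusion; the mean phenotype drifts toward higher x.
n, dx, dt, d = 200, 0.05, 0.001, 0.01   # grid, steps, mutation rate (ours)
u = [1.0 if 40 <= i <= 60 else 0.0 for i in range(n)]  # initial strain cloud

def mean_phenotype(u):
    total = sum(u)
    return sum(i * dx * ui for i, ui in enumerate(u)) / total

m0 = mean_phenotype(u)
for _ in range(2000):                    # integrate to t = 2
    mbar = mean_phenotype(u)
    new = u[:]
    for i in range(1, n - 1):
        growth = (i * dx - mbar) * u[i]                    # selection
        diff = d * (u[i+1] - 2*u[i] + u[i-1]) / dx**2      # mutation
        new[i] = u[i] + dt * (growth + diff)
    u = new
m1 = mean_phenotype(u)                   # mean fitness after evolution
```

Because the growth rate is measured relative to the instantaneous mean, total density is (up to boundary losses) conserved while the distribution translates up the fitness axis, the discrete analogue of the traveling wave described in the abstract.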
Abstract:
In this paper a one-phase supercooled Stefan problem, with a nonlinear relation between the phase change temperature and front velocity, is analysed. The model with the standard linear approximation, valid for small supercooling, is first examined asymptotically. The nonlinear case is more difficult to analyse and only two simple asymptotic results are found. We then apply an accurate heat balance integral method to make further progress. Finally, we compare the results found against numerical solutions. The results show that for large supercooling the linear model may be highly inaccurate and even qualitatively incorrect. Similarly, as the Stefan number β → 1⁺, the classic Neumann solution, which exists down to β = 1, is far from the linear and nonlinear supercooled solutions and can significantly overpredict the solidification rate.
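In one common nondimensional form (our notation; the paper's precise formulation may differ), the one-phase supercooled problem with a kinetic phase-change relation reads:

```latex
T_t = T_{xx}, \quad x > s(t), \qquad
\beta\,\dot{s}(t) = -T_x\big(s(t),t\big), \qquad
T\big(s(t),t\big) = f(\dot{s}),
```

where s(t) is the front position and β the Stefan number. The standard linear approximation takes f(ṡ) = -ε ṡ, valid for small supercooling; the nonlinear model studied here replaces f by a nonlinear function of the front velocity.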
Abstract:
The effects of the nongray absorption (i.e., atmospheric opacity varying with wavelength) on the possible upper bound of the outgoing longwave radiation (OLR) emitted by a planetary atmosphere have been examined. This analysis is based on the semigray approach, which appears to be a reasonable compromise between the complexity of nongray models and the simplicity of the gray assumption (i.e., atmospheric absorption independent of wavelength). Atmospheric gases in semigray atmospheres make use of constant absorption coefficients in finite-width spectral bands. Here, such a semigray absorption is introduced in a one-dimensional (1D) radiative–convective model with a stratosphere in radiative equilibrium and a troposphere fully saturated with water vapor, which is the semigray gas. A single atmospheric window in the infrared spectrum has been assumed. In contrast to the single absolute limit of OLR found in gray atmospheres, semigray ones may also show a relative limit. This means that both finite and infinite runaway effects may arise in some semigray cases. Of particular importance is the finding of an entirely new branch of stable steady states that does not appear in gray atmospheres. This new multiple equilibrium is a consequence of the nongray absorption only. It is suspected that this new set of stable solutions has not been previously revealed in analyses of radiative–convective models since it does not appear for an atmosphere with nongray parameters similar to those for the earth's current state.
Abstract:
We report experimental and numerical results showing how certain N-dimensional dynamical systems are able to exhibit complex time evolutions based on the nonlinear combination of N-1 oscillation modes. The experiments have been done with a family of thermo-optical systems of effective dynamical dimension varying from 1 to 6. The corresponding mathematical model is an N-dimensional vector field based on a scalar-valued nonlinear function of a single variable that is a linear combination of all the dynamic variables. We show how the complex evolutions appear associated with the occurrence of successive Hopf bifurcations in a saddle-node pair of fixed points, up to the point where their instability capabilities in N dimensions are exhausted. For this reason the observed phenomenon is denoted the full instability behavior of the dynamical system. The process through which the attractor responsible for the observed time evolution is formed may be rather complex and difficult to characterize. Nevertheless, the well-organized structure of the time signals suggests some generic mechanism of nonlinear mode mixing that we associate with the cluster of invariant sets emerging from the pair of fixed points and with the influence of the neighboring saddle sets on the flow near the attractor. The generation of invariant tori is likely during the full instability development, and the global process may be considered a generalized Landau scenario for the emergence of irregular and complex behavior through the nonlinear superposition of oscillatory motions.
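The structural feature described, one scalar nonlinearity applied to a single linear combination of all variables, can be made concrete with a toy instance (our own choice of f, b and c, not the authors' thermo-optical model; with these mild coefficients the trajectory simply relaxes to a fixed point, while stronger feedback can destabilize such systems):

```python
import math

# Toy instance of the structure in the abstract: an N-dimensional field
# driven by ONE scalar nonlinearity f applied to a single linear
# combination of all variables:
#     dx_i/dt = -x_i + b_i * f( sum_j c_j * x_j )
# Coefficients b, c and f = tanh are our illustrative choices.
N = 3
b = [2.0, -1.5, 1.0]
c = [1.0, 0.8, -0.6]
f = math.tanh

def rhs(x):
    s = sum(cj * xj for cj, xj in zip(c, x))   # the single scalar argument
    return [-xi + bi * f(s) for xi, bi in zip(x, b)]

def rk4_step(x, h):
    """One classical Runge-Kutta 4 step."""
    k1 = rhs(x)
    k2 = rhs([xi + h/2*ki for xi, ki in zip(x, k1)])
    k3 = rhs([xi + h/2*ki for xi, ki in zip(x, k2)])
    k4 = rhs([xi + h*ki for xi, ki in zip(x, k3)])
    return [xi + h/6*(a + 2*p + 2*q + r)
            for xi, a, p, q, r in zip(x, k1, k2, k3, k4)]

x = [0.1, 0.0, -0.1]
for _ in range(5000):        # integrate to t = 50
    x = rk4_step(x, 0.01)
```

Because |tanh| ≤ 1, every trajectory of this toy is eventually confined to |x_i| ≤ |b_i|; the interesting regimes reported in the paper arise when the feedback is strong enough for the saddle-node pair to shed successive Hopf bifurcations.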
Abstract:
Brain inflammatory response is triggered by the activation of microglial cells and astrocytes in response to various types of CNS injury, including neurotoxic insults. Its outcome is determined by cellular interactions, inflammatory mediators, as well as trophic and/or cytotoxic signals, and depends on many additional factors such as the intensity and duration of the insult, the extent of both the primary neuronal damage and glial reactivity and the developmental stage of the brain. Depending on particular circumstances, the brain inflammatory response can promote neuroprotection, regeneration or neurodegeneration. Glial reactivity, regarded as the central phenomenon of brain inflammation, has also been used as an early marker of neurotoxicity. To study the mechanisms underlying the glial reactivity, serum-free aggregating brain cell cultures were used as an in vitro model to test the effects of conventional neurotoxicants such as organophosphate pesticides, heavy metals, excitotoxins and mycotoxins. This approach was found to be relevant and justified by the complex cell-cell interactions involved in the brain inflammatory response, the variability of the glial reactions and the multitude of mediators involved. All these variables need to be considered for the elucidation of the specific cellular and molecular reactions and their consequences caused by a given chemical insult.
Abstract:
This note describes how the Kalman filter can be modified to allow for the vector of observables to be a function of lagged variables without increasing the dimension of the state vector in the filter. This is useful in applications where it is desirable to keep the dimension of the state vector low. The modified filter and accompanying code (which nests the standard filter) can be used to compute (i) the steady state Kalman filter, (ii) the log likelihood of a parameterized state space model conditional on a history of observables, (iii) a smoothed estimate of latent state variables and (iv) a draw from the distribution of latent states conditional on a history of observables.
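The standard filter that the modified filter nests can be sketched as follows; this is a generic scalar-state implementation of item (ii), the log likelihood via the prediction-error decomposition (our own sketch, not the note's code, which handles lagged observables):

```python
import math

def kalman_loglik(ys, a, c, q, r, x0=0.0, p0=1.0):
    """Log likelihood of the scalar state-space model
         x_t = a*x_{t-1} + w_t,  w_t ~ N(0, q)   (state equation)
         y_t = c*x_t + v_t,      v_t ~ N(0, r)   (observation equation)
       via the standard Kalman filter recursion."""
    x, p = x0, p0
    ll = 0.0
    for y in ys:
        # Predict state and its variance
        x_pred = a * x
        p_pred = a * p * a + q
        # Innovation and its variance
        innov = y - c * x_pred
        s = c * p_pred * c + r
        # Gaussian log likelihood of the innovation
        ll += -0.5 * (math.log(2 * math.pi * s) + innov * innov / s)
        # Update with the Kalman gain
        k = p_pred * c / s
        x = x_pred + k * innov
        p = (1 - k * c) * p_pred
    return ll

ll = kalman_loglik([0.1, -0.2, 0.05], a=0.9, c=1.0, q=0.1, r=0.1)
```

The note's modification reworks the observation equation so that y_t may load on x_{t-1} as well, without stacking the lag into the state; the recursion above is the special case it nests.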
Abstract:
This paper examines competition in the standard one-dimensional Downsian model of two-candidate elections, but where one candidate (A) enjoys an advantage over the other candidate (D). Voters' preferences are Euclidean, but any voter will vote for candidate A over candidate D unless D is closer to her ideal point by some fixed distance δ. The location of the median voter's ideal point is uncertain, and its distribution is commonly known by both candidates. The candidates simultaneously choose locations to maximize the probability of victory. Pure strategy equilibria often fail to exist in this model, except under special conditions on δ and the distribution of the median ideal point. We solve for the essentially unique symmetric mixed equilibrium, show that candidate A adopts more moderate policies than candidate D, and obtain some comparative statics results about the probability of victory and the expected distance between the two candidates' policies.
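The advantage structure is easy to explore numerically. A minimal Monte Carlo sketch (our own, assuming for concreteness a uniform distribution of the median ideal point, which the paper does not impose):

```python
import random

random.seed(0)

def win_prob_A(xa, xd, delta, n=20000):
    """Monte Carlo estimate of A's win probability when the median
    voter's ideal point m is uniform on [0, 1] (our assumption).
    Per the model, the median voter votes for D only if D is closer
    to m than A by more than delta; otherwise A wins."""
    wins = 0
    for _ in range(n):
        m = random.random()
        if not (abs(xd - m) < abs(xa - m) - delta):
            wins += 1
    return wins / n

# If D locates on top of A, the advantage makes A win for sure;
# D does better by differentiating, but still wins a minority.
p_same = win_prob_A(0.5, 0.5, delta=0.1)
p_apart = win_prob_A(0.5, 0.3, delta=0.1)
```

With A at 0.5 and D at 0.3, D wins exactly when the median falls below 0.35 (where D is more than δ = 0.1 closer), so A's win probability is 0.65 under the uniform assumption; the estimate above should land near that value.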
Abstract:
An impaired glutathione (GSH) synthesis was observed in several multifactorial diseases, including schizophrenia and myocardial infarction. Genetic studies revealed an association between schizophrenia and a GAG trinucleotide repeat (TNR) polymorphism in the catalytic subunit (GCLC) of the glutamate cysteine ligase (GCL). Disease-associated genotypes of this polymorphism correlated with a decrease in GCLC protein expression, GCL activity and GSH content. To clarify consequences of a decreased GCL activity at the proteome level, three schizophrenia patients and three controls were selected based on the GCLC GAG TNR polymorphism. Fibroblast cultures were obtained by skin biopsy and were challenged with tert-butylhydroquinone (t-BHQ), a substance known to induce oxidative stress. Proteome changes were analyzed by two-dimensional gel electrophoresis (2-DE), and the results revealed 10 spots that were upregulated in patients following t-BHQ treatment, but not in controls. Nine corresponding proteins could be identified by MALDI mass spectrometry; these proteins are involved in various cellular functions, including energy metabolism, oxidative stress response, and cytoskeletal reorganization. In conclusion, skin fibroblasts of subjects with an impaired GSH synthesis showed an altered proteome reaction in response to oxidative stress. Furthermore, the study corroborates the use of fibroblasts as an additional means to study vulnerability factors of psychiatric diseases.