946 results for General circulation models


Relevance:

30.00%

Abstract:

Delivering sufficient dose to tumours while sparing the surrounding tissue is one of the primary challenges of radiotherapy, and in common practice this is typically achieved by using highly penetrating MV photon beams and spatially shaping the dose. However, there has been a recent increase in interest in the possibility of using contrast agents with high atomic number to enhance the dose deposited in tumours when used in conjunction with kV x-rays, which are absorbed much more strongly because of the heavy element's high photoelectric cross-section at such energies. Unfortunately, the introduction of such contrast agents significantly complicates the comparison of different source types for treatment efficacy, as the dose deposited now depends very strongly on the exact composition of the spectrum, making traditional metrics such as beam quality less valuable. To address this, a 'figure of merit' is proposed which enables the direct comparison of different source types for tumours at different depths inside a patient. This figure of merit is evaluated for a 15 MV LINAC source and two 150 kVp sources (both of which use a tungsten target, one with conventional aluminium filtration and the other with a more aggressive thorium filter) through analytical methods as well as numerical models, considering tissue treated with a realistic concentration and uptake ratio of gold nanoparticle contrast agent (10 mg ml⁻¹ concentration in the 'tumour' volume, 10:1 uptake ratio). Finally, a test case of a human neck phantom with a similar contrast agent is considered to compare the abstract figure with a more realistic treatment situation. Good agreement was found both between the different approaches to calculating the figure of merit, and between the figure of merit and the effectiveness in a more realistic patient scenario. Together, these observations suggest that contrast-enhanced kilovoltage radiation has the potential to be a useful therapeutic tool for a number of classes of tumour on dosimetric considerations alone, and they point to the need for further research in this area.
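
A minimal sketch of how such a depth-dependent figure of merit might be computed for a polyenergetic source is given below. The spectrum, the placeholder attenuation trends and the figure-of-merit definition (tumour dose over entrance dose) are illustrative assumptions, not the paper's actual formulation; real coefficients would come from tabulations such as NIST's.

```python
import numpy as np

def figure_of_merit(E, w, mu_tissue, muen_tissue, muen_tumour, depth_cm):
    """Ratio of dose to a tumour at depth to the entrance dose in tissue.

    E        : photon energies of the spectral bins (MeV)
    w        : relative fluence in each bin
    depth_cm : tumour depth below the surface (cm)
    """
    # attenuate each spectral bin down to the tumour depth
    fluence_at_depth = w * np.exp(-mu_tissue(E) * depth_cm)
    tumour_dose = np.sum(fluence_at_depth * E * muen_tumour(E))
    # entrance dose, usually the maximum normal-tissue dose for kV beams
    entrance_dose = np.sum(w * E * muen_tissue(E))
    return tumour_dose / entrance_dose

# Toy 150 kVp-like spectrum with a crude gold-enhancement factor of 2;
# the coefficient trends below are placeholders, not tabulated data.
E = np.linspace(0.03, 0.15, 25)
w = np.exp(-((E - 0.08) / 0.03) ** 2)
fom = figure_of_merit(E, w,
                      mu_tissue=lambda E: 0.25 / E,
                      muen_tissue=lambda E: 0.03 / E,
                      muen_tumour=lambda E: 2.0 * 0.03 / E,
                      depth_cm=5.0)
print(f"figure of merit at 5 cm: {fom:.2f}")
```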

Relevance:

30.00%

Abstract:

The degradation of resorbable polymeric devices often takes months to years. Accelerated testing at elevated temperatures is an attractive but controversial technique. The purposes of this paper include: (a) to provide a summary of the mathematical models required to analyse accelerated degradation data and to indicate the pitfalls of using these models; (b) to improve the model previously developed by Han and Pan; (c) to provide a simple version of the model of Han and Pan with an analytical solution that is convenient to use; (d) to demonstrate the application of the improved model in two different poly(lactic acid) systems. It is shown that the simple analytical relations between molecular weight and degradation time widely used in the literature can lead to inadequate conclusions. In more general situations the rate equations are only part of a complete degradation model. Together with previous works in the literature, our study calls for care in using the accelerated testing technique.
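
As a concrete illustration of the pitfall discussed above, the widely used first-order relation between molecular weight and degradation time, combined with an Arrhenius rate constant for accelerated testing, is sketched below. All parameter values are hypothetical, and the paper's point is precisely that this simple relation is only part of a complete degradation model and can mislead when extrapolated across temperatures.

```python
import numpy as np

R_GAS = 8.314  # J mol^-1 K^-1

def rate_constant(T_kelvin, A=1.0e5, Ea=80.0e3):
    """Arrhenius rate constant; A and Ea are hypothetical placeholders."""
    return A * np.exp(-Ea / (R_GAS * T_kelvin))

def molecular_weight(t_days, T_kelvin, Mn0=1.0e5):
    """Widely used first-order relation Mn(t) = Mn0 * exp(-k t).

    The paper cautions that this simple analytical relation can lead
    to inadequate conclusions outside its range of validity.
    """
    k = rate_constant(T_kelvin)  # per second
    return Mn0 * np.exp(-k * t_days * 86400.0)

# Acceleration factor of a 70 degC test relative to 37 degC service
af = rate_constant(343.15) / rate_constant(310.15)
print(f"acceleration factor: {af:.1f}")
print(f"Mn after 180 days at 70 degC: {molecular_weight(180, 343.15):.3e}")
```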

Relevance:

30.00%

Abstract:

Polypropylene (PP), a semi-crystalline material, is typically solid-phase thermoformed at temperatures associated with crystalline melting, generally in the 150 °C to 160 °C range. In this very narrow thermoforming window the mechanical properties of the material decline rapidly with increasing temperature, and these large changes in properties make polypropylene one of the more difficult materials to process by thermoforming. Measurement of the deformation behaviour of a material under processing conditions is particularly important for accurate numerical modelling of thermoforming processes. This paper presents the findings of a study into the physical behaviour of industrial thermoforming grades of polypropylene. Practical tests were performed using custom-built materials testing machines and thermoforming equipment at Queen's University Belfast. Numerical simulations of these processes were constructed to replicate thermoforming conditions using industry-standard finite element analysis software, namely ABAQUS, together with custom-built user material model subroutines. Several variant constitutive models, spanning phenomenological, rheological and blended formulations, were used to represent the behaviour of the polypropylene materials during processing. The paper discusses approaches to modelling industrial plug-assisted thermoforming operations using finite element analysis techniques and the range of material models constructed and investigated, and it directly compares practical results to numerical predictions. The paper culminates in a discussion of the lessons learned from using finite element methods to simulate the plug-assisted thermoforming of polypropylene, which presents complex contact, thermal, friction and material modelling challenges. It makes recommendations as to the relative importance of these inputs in general terms with regard to correlation with experimentally gathered data, and as to the approaches to be taken to secure simulation predictions of improved accuracy.
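
The narrowness of the processing window can be made concrete with a toy temperature-dependent stiffness function: the sigmoidal form and every parameter below are illustrative assumptions, not one of the paper's constitutive models.

```python
import numpy as np

def modulus_mpa(T_celsius, e_solid=1200.0, e_melt=1.0,
                t_mid=155.0, width=2.5):
    """Toy sigmoidal collapse of PP stiffness across the 150-160 degC
    thermoforming window; all parameters are illustrative placeholders."""
    return e_melt + (e_solid - e_melt) / (
        1.0 + np.exp((T_celsius - t_mid) / width))

# A few degrees of overshoot roughly halves, then collapses, the stiffness
for T in (145, 150, 155, 160, 165):
    print(f"{T} degC -> {modulus_mpa(T):7.1f} MPa")
```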

Relevance:

30.00%

Abstract:

Glucagon-like peptide-1(7-36)amide (tGLP-1) is an important insulin-releasing hormone of the enteroinsular axis which is secreted by endocrine L-cells of the small intestine following nutrient ingestion. The present study has evaluated tGLP-1 in the intestines of normal and diabetic animal models and estimated the proportion present in glycated form. Total immunoreactive tGLP-1 levels in the intestines of hyperglycaemic hydrocortisone-treated rats, streptozotocin-treated mice and ob/ob mice were similar to age-matched controls. Affinity chromatographic separation of glycated and non-glycated proteins in intestinal extracts, followed by radioimmunoassay using a fully cross-reacting antiserum, demonstrated the presence of glycated tGLP-1 within the intestinal extracts of all control animals (approximately 19% of the total tGLP-1 content). Chemically induced and spontaneous animal models of diabetes were found to possess significantly greater levels of glycated tGLP-1 than controls, corresponding to between 24% and 71% of the total content. These observations suggest that glycated tGLP-1 may be of physiological significance, given that such N-terminal modification confers resistance to DPP IV inactivation and degradation, extending the hormone's otherwise very short half-life.

Relevance:

30.00%

Abstract:

An alternative models framework was used to test three confirmatory factor analytic models for the Short Leyton Obsessional Inventory-Children's Version (Short LOI-CV) in a general population sample of 517 young adolescent twins (11-16 years). A one-factor model, as implicit in current classification systems of Obsessive-Compulsive Disorder (OCD), a two-factor obsessions and compulsions model, and a multidimensional model corresponding to the three proposed subscales of the Short LOI-CV (labelled Obsessions/Incompleteness, Numbers/Luck and Cleanliness) were considered. The three-factor model was the only one to provide an adequate explanation of the data. Twin analyses suggested significant quantitative sex differences in heritability for both the Obsessions/Incompleteness and Numbers/Luck dimensions, these being significantly heritable in males only (heritabilities of 60% and 65%, respectively). The correlation between the additive genetic effects for these two dimensions in males was 0.95, suggesting that they largely share the same genetic risk factors.
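
A minimal sketch of this kind of alternative-models comparison is given below, assuming the semopy package is available; the item names and their subscale assignments are hypothetical, and the actual Short LOI-CV item mapping is not reproduced here.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical six-item example; the real Short LOI-CV has more items.
one_factor = "OC =~ i1 + i2 + i3 + i4 + i5 + i6"
three_factor = """
ObsInc  =~ i1 + i2
NumLuck =~ i3 + i4
Clean   =~ i5 + i6
"""

def fit_and_summarise(description, data):
    """Fit one CFA specification and return its fit statistics."""
    model = Model(description)
    model.fit(data)
    return calc_stats(model)  # CFI, RMSEA, AIC, ... as a DataFrame

# data = pd.read_csv("loi_cv_items.csv")  # one column per item
# print(fit_and_summarise(one_factor, data))
# print(fit_and_summarise(three_factor, data))  # compare fit indices
```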

Relevance:

30.00%

Abstract:

Recently [A. Xuereb, et al., Phys. Rev. Lett. 105, 013602 (2010)], we calculated the radiation field and the optical forces acting on a moving object inside a general one-dimensional configuration of immobile optical elements. In this article we analyse the forces acting on a semi-transparent mirror in the 'membrane-in-the-middle' configuration and compare the results obtained from solving the scattering model to those from the coupled-cavities model that is often used in cavity optomechanics. We highlight the departure of the latter model from the more exact scattering theory when the reflectivity of the moving element drops below about 50%.
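
A minimal transfer-matrix sketch of the scattering picture for the membrane-in-the-middle geometry is given below, assuming delta-function elements characterised by a dimensionless polarizability zeta, for which the intensity reflectivity is R = zeta² / (1 + zeta²). The sign and indexing conventions follow one common choice and are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def element(zeta):
    """Transfer matrix of a thin (delta-function) scatterer with
    dimensionless polarizability zeta; R = zeta**2 / (1 + zeta**2)."""
    return np.array([[1 + 1j * zeta, 1j * zeta],
                     [-1j * zeta, 1 - 1j * zeta]])

def free_space(k, d):
    """Propagation over a distance d at wavenumber k."""
    return np.array([[np.exp(1j * k * d), 0],
                     [0, np.exp(-1j * k * d)]])

def transmission(k, length, x_mem, zeta_mem, zeta_end=10.0):
    """Intensity transmission of end mirror / membrane / end mirror,
    with the membrane displaced x_mem from the cavity centre."""
    m = (element(zeta_end)
         @ free_space(k, length / 2 + x_mem)
         @ element(zeta_mem)
         @ free_space(k, length / 2 - x_mem)
         @ element(zeta_end))
    return 1.0 / np.abs(m[1, 1]) ** 2

# Resonance structure vs membrane position for a ~50% reflector (zeta = 1),
# the reflectivity regime where the coupled-cavities picture breaks down.
k = 2 * np.pi / 1.064e-6  # 1064 nm light
for x in np.linspace(-0.4e-6, 0.4e-6, 9):
    print(f"{x * 1e6:+.2f} um -> T = {transmission(k, 1e-3, x, 1.0):.3e}")
```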

Relevance:

30.00%

Abstract:

We present results for a suite of 14 three-dimensional, high-resolution hydrodynamical simulations of delayed-detonation models of Type Ia supernova (SN Ia) explosions. This model suite comprises the first set of three-dimensional SN Ia simulations with detailed isotopic yield information. As such, it may serve as a database for Chandrasekhar-mass delayed-detonation model nucleosynthetic yields and for deriving synthetic observables such as spectra and light curves. We employ a physically motivated, stochastic model based on turbulent velocity fluctuations and fuel density to calculate in situ the deflagration-to-detonation transition probabilities. To obtain different strengths of the deflagration phase and thereby different degrees of pre-expansion, we have chosen a sequence of initial models with 1, 3, 5, 10, 20, 40, 100, 150, 200, 300 and 1600 (two different realizations) ignition kernels in a hydrostatic white dwarf with a central density of 2.9 × 10⁹ g cm⁻³, as well as one high central density (5.5 × 10⁹ g cm⁻³) and one low central density (1.0 × 10⁹ g cm⁻³) rendition of the 100 ignition kernel configuration. For each simulation, we determined detailed nucleosynthetic yields by postprocessing 10⁶ tracer particles with a 384-nuclide reaction network. All delayed-detonation models result in explosions unbinding the white dwarf, producing a range of ⁵⁶Ni masses from 0.32 to 1.11 M☉. As a general trend, the models predict that the stable neutron-rich iron-group isotopes are not found at the lowest velocities, but rather at intermediate velocities (~3000-10 000 km s⁻¹) in a shell surrounding a Ni-rich core. The models further predict relatively low-velocity oxygen and carbon, with typical minimum velocities around 4000 and 10 000 km s⁻¹, respectively. © 2012 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
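
The in situ deflagration-to-detonation criterion can be caricatured as below: a toy probability that the turbulent velocity fluctuation exceeds a critical value, applied only inside a fuel-density window. The log-normal distribution, the thresholds and the density bounds are placeholders, not the calibrated stochastic model of the paper.

```python
from math import erf, log, sqrt

def ddt_probability(v_rms_cm_s, v_crit_cm_s=1.0e8, sigma=0.5):
    """P(v' > v_crit) for log-normally distributed turbulent velocity
    fluctuations with median v_rms; sigma is the log-normal width."""
    z = (log(v_crit_cm_s) - log(v_rms_cm_s)) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 - erf(z))

def ddt_eligible(rho_g_cm3, v_rms_cm_s,
                 rho_min=5.0e6, rho_max=1.5e7, p_min=1.0e-3):
    """A cell may detonate only inside a fuel-density window and when
    the fluctuation probability is non-negligible (placeholder values)."""
    return rho_min < rho_g_cm3 < rho_max and ddt_probability(v_rms_cm_s) > p_min

print(ddt_probability(5.0e7))        # median fluctuation below critical
print(ddt_eligible(1.0e7, 5.0e7))    # density window + probability test
```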

Relevance:

30.00%

Abstract:

Consideration of the ethical, social, and policy implications of research has become increasingly important to scientists and scholars whose work focuses on brain and mind, but limited empirical data exist on the ethics education available to them. We examined the current landscape of ethics training in neuroscience programs, beginning with the Canadian context specifically, to elucidate the perceived needs of mentors and trainees and to offer recommendations for developing resources to meet those needs. We surveyed neuroscientists at all training levels and interviewed directors of neuroscience programs and training grants. A total of 88% of survey respondents reported a general interest in ethics, and 96% indicated a desire for more ethics content as it applies to brain research and clinical translation. Expert interviews revealed formal ethics education in over half of the programs and in 90% of grants-based programs. Lack of time, resources, and expertise, however, are major barriers to expanding ethics content in neuroscience education. We conclude with an initial set of recommendations to address these barriers, including the development of flexible, tailored ethics education tools, increased financial support for ethics training, and strategies for fostering collaboration between ethics experts, neuroscience program directors, and funding agencies. © 2010 the Authors. Journal Compilation © 2010 International Mind, Brain, and Education Society and Blackwell Publishing, Inc.

Relevance:

30.00%

Abstract:

On multiprocessors with explicitly managed memory hierarchies (EMM), software has the responsibility of moving data in and out of fast local memories. This task can be complex and error-prone even for expert programmers. Before we can allow compilers to handle this complexity for us, we must identify abstractions that are general enough to allow us to write applications with reasonable effort, yet specific enough to exploit the vast on-chip memory bandwidth of EMM multiprocessors. To this end, we compare two programming models against hand-tuned codes on the STI Cell, paying attention to programmability and performance. The first programming model, Sequoia, abstracts the memory hierarchy as private address spaces, each corresponding to a parallel task. The second, Cellgen, is a new framework which provides OpenMP-like semantics and the abstraction of a shared address space divided into private and shared data. We compare three applications programmed using these models against their hand-optimized counterparts in terms of abstractions, programming complexity, and performance.
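
The core pattern that both models abstract away, explicitly staging data through a small fast memory, can be sketched in Python as below. The tile size and buffer discipline are illustrative only and do not reflect either framework's actual API; on the Cell, the two copies would be DMA transfers issued by an SPE.

```python
import numpy as np

LOCAL_STORE_WORDS = 2048  # pretend capacity of the fast local memory

def staged_scale(src, dst, alpha):
    """Process src -> dst tile by tile, mimicking explicit DMA staging.

    Each tile is copied into a 'local' buffer, computed on, and copied
    back. Frameworks like Sequoia and Cellgen generate this choreography
    from higher-level task or pragma-style annotations.
    """
    tile = LOCAL_STORE_WORDS
    local = np.empty(tile, dtype=src.dtype)  # reused local-store buffer
    for start in range(0, len(src), tile):
        n = min(tile, len(src) - start)
        local[:n] = src[start:start + n]   # "DMA in"
        local[:n] *= alpha                 # compute on local data only
        dst[start:start + n] = local[:n]   # "DMA out"

x = np.arange(10_000, dtype=np.float32)
y = np.empty_like(x)
staged_scale(x, y, 2.0)
assert np.allclose(y, 2.0 * x)
```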

Relevance:

30.00%

Abstract:

The problem of model selection for a univariate long memory time series is investigated once a semiparametric estimator for the long memory parameter has been used. Standard information criteria are not consistent in this case. A Modified Information Criterion (MIC) that overcomes these difficulties is introduced, and proofs that show its asymptotic validity are provided. The results are general and cover a wide range of short memory processes. Simulation evidence compares the new and existing methodologies, and empirical applications to monthly inflation and daily realized volatility are presented.
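
A standard semiparametric estimator of the long memory parameter d of the kind the paper presupposes is the log-periodogram (GPH) regression, sketched below. The bandwidth choice m = sqrt(n) is a conventional default rather than the paper's recommendation, and the MIC itself is not reproduced here.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the long memory parameter d.

    Regresses the log periodogram at the first m Fourier frequencies on
    -log(4 sin^2(lambda/2)); the slope estimates d.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))  # conventional bandwidth default
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    fx = np.fft.fft(x - x.mean())
    periodogram = np.abs(fx[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    regressor = -np.log(4.0 * np.sin(lam / 2.0) ** 2)
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return beta[1]

# Sanity check on white noise, for which d = 0
rng = np.random.default_rng(0)
print(round(gph_estimate(rng.standard_normal(4096)), 3))
```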

Relevance:

30.00%

Abstract:

Particulate systems are of interest in many disciplines. They are often investigated using the discrete element method because of its capability to resolve particulate systems at the individual particle scale. To model the interaction between two particles and between a particle and a boundary, conventional discrete element models use springs and dampers in both the normal and tangential directions. The significance of particle rotation has been highlighted in both numerical studies and physical experiments. Several researchers have attempted to incorporate a rotational torque to account for rolling resistance or rolling friction by developing different models. This paper presents a review of the commonly used models for rolling resistance and proposes a more general model. These models are classified into four categories according to their key characteristics. The robustness of these models in reproducing rolling resistance effects arising from different physical situations was assessed using several benchmarking test cases. The proposed model can be seen to be more general and suitable for modelling problems involving both dynamic and pseudo-static regimes. An example simulation of the formation of a 2D sandpile is also shown. For simplicity, all formulations and examples are presented in 2D form, though the general conclusions are also applicable to 3D systems.
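
One widely used class of rolling resistance model, a spring-dashpot rolling torque capped by a Coulomb-type limit, can be sketched as below. The parameter names and values are illustrative placeholders, not the paper's proposed model.

```python
import numpy as np

def rolling_torque(theta_r, omega_r, f_n, r_eff,
                   k_r=1.0e3, c_r=1.0, mu_r=0.1):
    """Spring-dashpot rolling resistance with a Coulomb-type cap (2D).

    theta_r : accumulated relative rolling rotation (rad)
    omega_r : relative rolling angular velocity (rad/s)
    f_n     : normal contact force (N)
    r_eff   : effective rolling radius (m)
    k_r, c_r, mu_r are illustrative stiffness, damping and rolling
    friction parameters.
    """
    m_limit = mu_r * r_eff * f_n             # maximum mobilised torque
    m_spring = -k_r * theta_r - c_r * omega_r
    # saturate at the limiting torque, as in elastic-plastic rolling models
    return float(np.clip(m_spring, -m_limit, m_limit))

print(rolling_torque(theta_r=0.002, omega_r=0.0, f_n=50.0, r_eff=0.01))
```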

Relevance:

30.00%

Abstract:

Different classes of constitutive models have been proposed to capture the time-dependent behaviour of soft soil (creep, stress relaxation, rate dependency). This paper critically reviews many of the models developed from an understanding of the time-dependent stress-strain-stress rate-strain rate behaviour of soils and of viscoplasticity, in terms of their strengths and weaknesses. Some discussion is also provided on the numerical implementation aspects of these models, and typical findings from numerical analyses of geotechnical structures constructed on soft soils are discussed. The general elastic viscoplastic (EVP) models can roughly be divided into two categories: models based on the concept of overstress and models based on non-stationary flow surface theory. Although general in structure, both categories have their own strengths and shortcomings. This review indicates that EVP analysis has yet to be widely adopted by geotechnical engineers, apparently owing to the mathematical complexity of the constitutive formulations, unconvincing benefit in terms of the accuracy of performance prediction, the requirement for additional soil parameters and the difficulty of determining them, and the need for excessive computing resources and time. © 2013 Taylor & Francis.
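
The overstress idea can be illustrated with a one-dimensional Perzyna-type rate equation, in which viscoplastic strain accrues only while the stress exceeds a hardening yield stress. The power law and all parameters below are illustrative, not any specific model from the review.

```python
import numpy as np

def perzyna_creep(sigma, sigma_y0, t_end, dt=100.0,
                  gamma=1.0e-6, n=3.0, h=5.0e3):
    """1D Perzyna overstress viscoplasticity under constant stress.

    d(eps_vp)/dt = gamma * <(sigma - sigma_y) / sigma_y>^n,
    with linear hardening sigma_y = sigma_y0 + h * eps_vp.
    <.> are Macaulay brackets: no flow unless sigma exceeds sigma_y.
    All parameters are illustrative placeholders (stress in kPa).
    """
    eps_vp = 0.0
    history = []
    for _ in range(int(t_end / dt)):
        sigma_y = sigma_y0 + h * eps_vp
        overstress = max(sigma - sigma_y, 0.0) / sigma_y
        eps_vp += gamma * overstress ** n * dt   # explicit Euler step
        history.append(eps_vp)
    return np.array(history)

strain = perzyna_creep(sigma=120.0, sigma_y0=100.0, t_end=1.0e5)
print(f"viscoplastic strain after 1e5 s: {strain[-1]:.4e}")
```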

Relevance:

30.00%

Abstract:

Assessment of marine downscaling of global model simulations to the regional scale is a prerequisite for understanding ocean feedback to the atmosphere in regional climate downscaling. Major difficulties arise from the coarse grid resolution of global models, which cannot provide sufficiently accurate boundary values for a regional model. In this study, we first set up a stretched global model (MPIOM) focused on the North Sea by shifting the poles. Second, a regional model (HAMSOM) was run at higher resolution, with its open boundary values provided by the stretched global model. In general, the sea surface temperatures (SSTs) in the two experiments are similar. Major SST differences are found in coastal regions (the root mean square difference of SST reaches up to 2 °C). The higher sea surface salinity in coastal regions in the global model indicates a general limitation of this global model and its configuration (the surface layer thickness is 16 m). By comparison, the advantage of the absence of open lateral boundaries in the global model can be demonstrated, in particular for the transition region between the North Sea and the Baltic Sea. On long timescales, the North Atlantic Current (NAC) inflow through the northern boundary correlates well between the two model simulations (R ~ 0.9). After downscaling with HAMSOM, the NAC inflow through the northern boundary decreases by ~10%, but the circulation in the Skagerrak is stronger in HAMSOM. The circulation patterns of the two models are similar in the northern North Sea. The comparison suggests that the stretched global model system is a suitable tool for long-term free climate model simulations, with limitations only in coastal regions. For regional studies focusing on the coastal zone, a nested regional model can be a helpful alternative.
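
The two headline comparison metrics, the SST root mean square difference between the runs and the correlation of the boundary inflow time series, are straightforward to compute. The sketch below assumes model fields already interpolated to a common grid; the variable names are hypothetical.

```python
import numpy as np

def rmsd(field_a, field_b):
    """Root mean square difference of two co-located fields,
    ignoring land points marked as NaN."""
    return np.sqrt(np.nanmean((field_a - field_b) ** 2))

def inflow_correlation(series_a, series_b):
    """Pearson correlation of two transport time series."""
    return np.corrcoef(series_a, series_b)[0, 1]

# Hypothetical arrays: sst_mpiom / sst_hamsom are (time, y, x) SST
# fields on a common grid; nac_* are monthly inflow transports.
# print(rmsd(sst_mpiom, sst_hamsom))                # up to ~2 degC near coasts
# print(inflow_correlation(nac_mpiom, nac_hamsom))  # ~0.9 in this study
```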

Relevance:

30.00%

Abstract:

Aims - To investigate whether young people with Type 1 diabetes have an increased rate of depression and antidepressant use, and whether their risk varies by age group, time from diabetes diagnosis, calendar period of diagnosis or complications status. Methods - A cohort of incident cases of patients with Type 1 diabetes diagnosed before 35 years of age (n = 5548) was identified within the Clinical Practice Research Datalink and individually age- and sex-matched with up to two control subjects without diabetes (n = 10 657). Patients with depression were identified through general practice-recorded depression codes and antidepressant prescriptions. Cox regression models gave hazard ratios for depression in people with Type 1 diabetes compared with control subjects. Results - People with Type 1 diabetes were twice as likely to have a record of antidepressant use and general practice-diagnosed depression as their matched control subjects (hazard ratio 2.08, 95% CI 1.73–2.50, P < 0.001). These associations varied by time from diagnosis, with marked increases observed within the first 5 years of diagnosis (hazard ratio 2.14, 95% CI 1.51–3.03, P < 0.001), and by age at diabetes diagnosis, with excesses noted even in the 10- to 19-year age group (hazard ratio 1.45, 95% CI 1.06–1.98, P = 0.02). Conclusions - This population-based study shows that people with Type 1 diabetes have higher rates of general practice-recorded depression and antidepressant use. The excess is present within 5 years of diabetes diagnosis, suggesting that psychological input for patients is warranted in the early years of their condition.
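
A stratified Cox model of the kind described, with the matched sets as strata, might be fitted as sketched below using the lifelines package; the data frame and column names are hypothetical, not the study's actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data frame: one row per person, with follow-up time,
# a depression indicator, exposure status, and the matched-set id.
# df = pd.read_csv("t1dm_depression_cohort.csv")
# columns: time_years, depressed, type1_diabetes, matched_set

cph = CoxPHFitter()
# Stratifying on the matched set respects the individual age/sex
# matching, analogous to the matched design described in the study.
# cph.fit(df, duration_col="time_years", event_col="depressed",
#         strata=["matched_set"], formula="type1_diabetes")
# cph.print_summary()  # hazard ratio for Type 1 diabetes vs controls
```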