23 results for General circulation models
Abstract:
Context: Shared care models integrating family physician services with interdisciplinary palliative care specialist teams are critical to improving access to quality palliative home care and addressing multiple domains of end-of-life issues and needs. Objectives: To examine the impact of a shared care pilot program on the primary outcomes of symptom severity and emotional distress (patient and family separately) over time and, secondarily, on the concordance between patient preferences and actual place of death. Methods: An inception cohort of patients (n = 95) with advanced, progressive disease, expected to die within six months, was recruited from three rural family physician group practices (21 physicians) and followed prospectively until death or the end of the pilot. Symptoms, emotional distress (patient and family), and preferences for place of death were measured serially; changes in distress outcomes were assessed using t-tests and general linear models. Results: Symptoms trended toward improvement, with a significant reduction in anxiety from baseline to 14 days. Symptom and emotional distress were maintained below high severity (7-10), and the rate of home death was high compared with population norms. Conclusion: Future controlled studies are needed to examine outcomes of shared care models against comparison groups. Because shared care models build on family physician capacity, they are a promising basis for palliative home care programs that improve access to quality palliative home care and foster health system integration. © 2011 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
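A minimal sketch of the kind of serial-measurement analysis this abstract describes, here reduced to a paired t-test on baseline versus day-14 distress scores; the scores, scale and sample size below are invented for illustration and are not the study's data.

```python
# Paired t-test on hypothetical symptom-distress scores (0-10 scale)
# at baseline vs. day 14 -- illustrative data only, not study results.
import numpy as np
from scipy import stats

baseline = np.array([6.0, 7.5, 5.0, 8.0, 6.5, 7.0, 5.5, 6.0])
day14    = np.array([5.0, 6.0, 5.5, 6.5, 5.0, 6.5, 4.5, 5.0])

t, p = stats.ttest_rel(baseline, day14)   # paired test on serial measures
print(f"mean change = {np.mean(day14 - baseline):+.2f}, t = {t:.2f}, p = {p:.3f}")
```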
Abstract:
We analyse observations of the Type II-L supernova LSQ13cuw within the framework of currently accepted physical predictions of core-collapse supernova explosions. LSQ13cuw was discovered within a day of explosion, hitherto unprecedented for Type II-L supernovae. This motivated a comparative study of Type II-P and II-L supernovae with relatively well-constrained explosion epochs and rise times to maximum (optical) light. From our sample of twenty such events, we find evidence of a positive correlation between the duration of the rise and the peak brightness. On average, SNe II-L tend to have brighter peak magnitudes and longer rise times than SNe II-P, although this difference is clearest only at the extreme ends of the rise-time versus peak-brightness relation. Using two different analytical models, we performed a parameter study of the physical parameters that control the rise-time behaviour. In general, the models qualitatively reproduce aspects of the observed trends. We find that the brightness of the optical peak increases for larger progenitor radii and explosion energies, and decreases for larger masses. The rise time depends less strongly on mass and explosion energy than on the progenitor radius. We find no evidence that the progenitors of SNe II-L have significantly smaller radii than those of SNe II-P.
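As a hedged illustration of the rise-time versus peak-brightness comparison, the snippet below computes a correlation coefficient over a small sample; the values are placeholders, not the paper's measurements.

```python
# Correlation between rise time and peak absolute magnitude for a
# hypothetical sample (brighter peak = more negative magnitude, so a
# positive rise/brightness correlation shows up as r < 0 here).
import numpy as np
from scipy import stats

rise_days = np.array([7.0, 8.5, 10.0, 11.2, 12.5, 14.0, 15.5])
peak_mag  = np.array([-16.2, -16.5, -16.9, -17.1, -17.4, -17.8, -18.1])

r, p = stats.pearsonr(rise_days, peak_mag)
print(f"Pearson r = {r:.2f} (p = {p:.3f}): longer rises pair with brighter peaks")
```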
Abstract:
The ideal free distribution model, which relates the spatial distribution of mobile consumers to that of their resource, is shown to be a limiting case of a more general model that we develop using simple concepts of diffusion, and which we extend by incorporating simple models of social influences on predator spacing. First, a free distribution model based on patch-switching rules, with a power-law interference term, is derived; it represents instantaneous biased diffusion. A social bias term is then introduced to represent the effect of predator aggregation on predator fitness, separate from any effects that act through intake rate. The social bias term is expanded to express an optimum spacing for predators, and example solutions of the resulting biased diffusion models are shown. The model demonstrates how an empirical interference coefficient, derived from measurements of predator and prey densities, may include factors expressing the impact of social spacing behaviour on fitness. We conclude that empirical values of the log predator/log prey ratio may contain information about more than the relationship between consumer and resource densities. Unlike many previous models, the model shown here applies to conditions without continual input. © 1997 Academic Press Limited.
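A rough numerical sketch of the patch-switching idea, assuming the common power-law interference form W_i = Q_i / P_i^m for per-capita intake; the patch values, step size and interference coefficient are illustrative choices, not taken from the paper.

```python
# Consumers redistribute toward patches with higher per-capita intake
# W_i = Q_i / P_i^m; at equilibrium the ideal free distribution with
# interference, P_i proportional to Q_i^(1/m), is recovered.
import numpy as np

Q = np.array([10.0, 5.0, 2.5])      # resource input per patch (illustrative)
m = 0.8                              # power-law interference coefficient
P = np.full(3, 100.0)                # initial, even consumer distribution

for _ in range(5000):                # biased "diffusion" between patches
    W = Q / P**m                     # per-capita intake in each patch
    bias = W - W.mean()              # move consumers up the intake gradient
    P = np.clip(P + 0.5 * bias * P, 1e-6, None)
    P *= 300.0 / P.sum()             # total population is conserved

print("equilibrium P: ", np.round(P, 1))
print("IFD prediction:", np.round(300.0 * Q**(1/m) / np.sum(Q**(1/m)), 1))
```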
Abstract:
The prevalence of multicore processors is bound to drive most kinds of software development towards parallel programming. To limit the difficulty and overhead of parallel software design and maintenance, it is crucial that parallel programming models allow an easy-to-understand, concise and dense representation of parallelism. Parallel programming models such as Cilk++ and Intel TBBs attempt to offer a better, higher-level abstraction for parallel programming than threads and locking synchronization. It is not straightforward, however, to express all patterns of parallelism in these models. Pipelines are an important parallel construct, yet they are difficult to express in Cilk and TBBs in a straightforward way, requiring a verbose restructuring of the code. In this paper we demonstrate that pipeline parallelism can be easily and concisely expressed in a Cilk-like language, which we extend with input, output and input/output dependency types on procedure arguments, enforced at runtime by the scheduler. We evaluate our implementation on real applications and show that our Cilk-like scheduler, extended to track and enforce these dependencies, has performance comparable to Cilk++.
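The dependency-type idea can be sketched in a few lines: below is a toy Python scheduler (not the authors' Cilk-like runtime) in which spawned tasks declare which objects they read and write, ordering falls out of a last-writer table, and a three-stage pipeline is therefore written as a plain loop of spawns.

```python
# Toy emulation of in/out dependency types on task arguments.
from concurrent.futures import ThreadPoolExecutor, wait

class Scheduler:
    def __init__(self):
        self.pool = ThreadPoolExecutor()
        self.last_write = {}                 # object name -> future of last writer

    def spawn(self, fn, ins=(), outs=()):
        deps = [self.last_write[o] for o in ins if o in self.last_write]
        def run():
            wait(deps)                       # run only after producers finish
            fn()
        fut = self.pool.submit(run)
        for o in outs:                       # record this task as last writer
            self.last_write[o] = fut
        return fut

sched = Scheduler()
for i in range(4):                           # a 3-stage pipeline, stage-ordered
    sched.spawn(lambda i=i: print("read", i),      outs=("buf%d" % i,))
    sched.spawn(lambda i=i: print("transform", i), ins=("buf%d" % i,), outs=("buf%d" % i,))
    sched.spawn(lambda i=i: print("write", i),     ins=("buf%d" % i,))
sched.pool.shutdown(wait=True)
```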
Abstract:
Delivering sufficient dose to tumours while sparing surrounding tissue is one of the primary challenges of radiotherapy; in common practice this is typically achieved by using highly penetrating MV photon beams and spatially shaping the dose. However, there has recently been increased interest in using high-atomic-number contrast agents to enhance the dose deposited in tumours in conjunction with kV x-rays, which see a significant increase in absorption due to the heavy element's high photoelectric cross-section at such energies. Unfortunately, introducing such contrast agents significantly complicates the comparison of different source types for treatment efficacy, as the deposited dose now depends very strongly on the exact composition of the spectrum, making traditional metrics such as beam quality less valuable. To address this, a 'figure of merit' is proposed that enables the direct comparison of different source types for tumours at different depths inside a patient. This figure of merit is evaluated for a 15 MV LINAC source and two 150 kVp sources (both using a tungsten target, one with conventional aluminium filtration and the other with a more aggressive thorium filter) through analytical methods as well as numerical models, considering tissue treated with a realistic concentration and uptake ratio of gold nanoparticle contrast agents (10 mg ml(-1) concentration in the 'tumour' volume, 10:1 uptake ratio). Finally, a test case of a human neck phantom with a similar contrast agent is considered to compare the abstract figure to a more realistic treatment situation. Good agreement was found both between the different approaches to calculating the figure of merit, and between the figure of merit and the effectiveness in a more realistic patient scenario. Together, these observations suggest that contrast-enhanced kilovoltage radiation has the potential to be a useful therapeutic tool for a number of classes of tumour on dosimetric considerations alone, and they point to the need for further research in this area.
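A deliberately simplified sketch of what such a figure of merit could look like for a monoenergetic beam: tumour dose at depth (with contrast-enhanced absorption) relative to the entrance dose in normal tissue. The coefficients below are placeholders, not tabulated values, and the paper's actual definition integrates over full source spectra.

```python
# Toy depth-dependent figure of merit for contrast-enhanced radiotherapy.
import numpy as np

def figure_of_merit(mu_tissue, mu_en_tissue, mu_en_mix, depth_cm):
    """Dose(tumour at depth, with contrast) / Dose(tissue at surface)."""
    fluence_at_depth = np.exp(-mu_tissue * depth_cm)   # exponential attenuation
    dose_tumour = fluence_at_depth * mu_en_mix          # enhanced absorption
    dose_surface = 1.0 * mu_en_tissue                   # unattenuated entrance dose
    return dose_tumour / dose_surface

# Placeholder coefficients (cm^-1): kV photons attenuate faster but see a much
# larger energy-absorption coefficient in gold-loaded tissue than MV photons.
print("kV-like:", figure_of_merit(0.25, 0.035, 0.20, depth_cm=5.0))
print("MV-like:", figure_of_merit(0.05, 0.030, 0.033, depth_cm=5.0))
```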
Abstract:
The degradation of resorbable polymeric devices often takes months to years. Accelerated testing at elevated temperatures is an attractive but controversial technique. The purposes of this paper include: (a) to provide a summary of the mathematical models required to analyse accelerated degradation data and to indicate the pitfalls of using these models; (b) to improve the model previously developed by Han and Pan; (c) to provide a simple version of the model of Han and Pan with an analytical solution that is convenient to use; (d) to demonstrate the application of the improved model in two different poly(lactic acid) systems. It is shown that the simple analytical relations between molecular weight and degradation time widely used in the literature can lead to inadequate conclusions. In more general situations the rate equations are only part of a complete degradation model. Together with previous works in the literature, our study calls for care in using the accelerated testing technique.
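A minimal sketch of the Arrhenius extrapolation that underlies accelerated testing, with invented rate constants; as the abstract cautions, such a simple rate relation is only part of a complete degradation model and can mislead if the mechanism changes with temperature.

```python
# Fit ln k = ln A - Ea/(R T) at elevated temperatures, then extrapolate
# to 37 C. Rate constants are invented for illustration.
import numpy as np

R = 8.314                                     # J mol^-1 K^-1
T = np.array([343.0, 353.0, 363.0])           # accelerated temperatures (K)
k = np.array([0.010, 0.025, 0.058])           # fitted first-order rates (1/day)

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R                               # apparent activation energy
k37 = np.exp(intercept + slope / 310.0)       # extrapolated rate at 37 C

print(f"Ea ~ {Ea/1000:.0f} kJ/mol, extrapolated k(37 C) ~ {k37:.5f} /day")
print(f"Mn halves in ~ {np.log(2)/k37:.0f} days (if ln Mn = ln Mn0 - k t holds)")
```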
Abstract:
Polypropylene (PP), a semi-crystalline material, is typically solid-phase thermoformed at temperatures associated with crystalline melting, generally in the 150 to 160 °C range. In this very narrow thermoforming window the mechanical properties of the material decline rapidly with increasing temperature, and these large changes in properties make polypropylene one of the more difficult materials to process by thermoforming. Measurement of the deformation behaviour of a material under processing conditions is particularly important for accurate numerical modelling of thermoforming processes. This paper presents the findings of a study into the physical behaviour of industrial thermoforming grades of polypropylene. Practical tests were performed using custom-built materials testing machines and thermoforming equipment at Queen's University Belfast. Numerical simulations of these processes were constructed to replicate thermoforming conditions using industry-standard finite element analysis software, namely ABAQUS, together with custom-built user material model subroutines. Several variant constitutive models, spanning phenomenological, rheological and blended formulations, were used to represent the behaviour of the polypropylene materials during processing. The paper discusses approaches to modelling industrial plug-assisted thermoforming operations using finite element analysis techniques and the range of material models constructed and investigated, directly comparing practical results to numerical predictions. It concludes by discussing the lessons learned from using finite element methods to simulate the plug-assisted thermoforming of polypropylene, which presents complex contact, thermal, friction and material modelling challenges; it makes recommendations on the relative importance of these inputs with regard to correlating with experimentally gathered data, and on the approaches to be taken to obtain simulation predictions of improved accuracy.
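As one concrete example of a phenomenological constitutive ingredient of the kind such simulations use, the snippet below evaluates a one-term incompressible Ogden nominal stress in uniaxial extension; the parameters are placeholders, not fitted values for any PP grade, and the paper's models also include rheological and blended forms.

```python
# One-term incompressible Ogden model, uniaxial nominal stress:
# P(lam) = mu * (lam**(alpha-1) - lam**(-alpha/2 - 1)).
import numpy as np

def ogden_uniaxial(lam, mu, alpha):
    """Nominal stress (MPa) at stretch ratio lam (placeholder parameters)."""
    return mu * (lam**(alpha - 1.0) - lam**(-alpha / 2.0 - 1.0))

for lam in np.linspace(1.0, 4.0, 7):          # stretches typical of forming
    print(f"lambda = {lam:.1f}  P = {ogden_uniaxial(lam, mu=0.35, alpha=5.0):.3f} MPa")
```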
Abstract:
Glucagon-like peptide-1(7-36)amide (tGLP-1) is an important insulin-releasing hormone of the enteroinsular axis, secreted by endocrine L-cells of the small intestine following nutrient ingestion. The present study evaluated tGLP-1 in the intestines of normal and diabetic animal models and estimated the proportion present in glycated form. Total immunoreactive tGLP-1 levels in the intestines of hyperglycaemic hydrocortisone-treated rats, streptozotocin-treated mice and ob/ob mice were similar to those of age-matched controls. Affinity chromatographic separation of glycated and non-glycated proteins in intestinal extracts, followed by radioimmunoassay using a fully cross-reacting antiserum, demonstrated the presence of glycated tGLP-1 in the intestinal extracts of all control animals (approximately 19% of total tGLP-1 content). Chemically induced and spontaneous animal models of diabetes were found to possess significantly greater levels of glycated tGLP-1 than controls, corresponding to 24-71% of the total content. These observations suggest that glycated tGLP-1 may be of physiological significance, given that such N-terminal modification confers resistance to DPP IV inactivation and degradation, extending the hormone's very short half-life.
Abstract:
An alternative-models framework was used to test three confirmatory factor analytic models for the Short Leyton Obsessional Inventory-Children's Version (Short LOI-CV) in a general population sample of 517 young adolescent twins (11-16 years). A one-factor model, as implicit in current classification systems of Obsessive-Compulsive Disorder (OCD); a two-factor obsessions and compulsions model; and a multidimensional model corresponding to the three proposed subscales of the Short LOI-CV (labelled Obsessions/Incompleteness, Numbers/Luck and Cleanliness) were considered. The three-factor model was the only one to provide an adequate explanation of the data. Twin analyses suggested significant quantitative sex differences in heritability for both the Obsessions/Incompleteness and Numbers/Luck dimensions, which were significantly heritable in males only (heritabilities of 60% and 65%, respectively). The correlation between the additive genetic effects for these two dimensions in males was 0.95, suggesting that they largely share the same genetic risk factors.
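For intuition about the twin logic (the paper itself fits full structural-equation twin models), Falconer's approximations recover rough variance components from MZ and DZ twin correlations; the correlations below are invented for illustration.

```python
# Falconer's formulas for an ACE decomposition from twin correlations.
def falconer(r_mz, r_dz):
    h2 = 2.0 * (r_mz - r_dz)     # additive genetic share (heritability)
    c2 = 2.0 * r_dz - r_mz       # shared-environment share
    e2 = 1.0 - r_mz              # unique environment + measurement error
    return h2, c2, e2

h2, c2, e2 = falconer(r_mz=0.60, r_dz=0.30)   # e.g. male twin pairs
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")
```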
Abstract:
In recent work [A. Xuereb et al., Phys. Rev. Lett. 105, 013602 (2010)], we calculated the radiation field and the optical forces acting on a moving object inside a general one-dimensional configuration of immobile optical elements. In this article we analyse the forces acting on a semi-transparent mirror in the 'membrane-in-the-middle' configuration and compare the results obtained from solving the scattering model to those from the coupled-cavities model that is often used in cavity optomechanics. We highlight the departure of the coupled-cavities model from the more exact scattering theory when the reflectivity of the moving element drops below about 50%.
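A compact sketch of the 1D transfer-matrix (scattering) formalism such comparisons rest on, using the standard thin-scatterer matrix [[1+iζ, iζ], [-iζ, 1-iζ]] parametrized by a polarizability ζ; the wavenumber, spacing scan and ζ value are illustrative choices.

```python
# Transfer-matrix sketch: two identical scatterers form a cavity whose
# transmission peaks at resonance. Matrices compose by multiplication,
# with free propagation diag(e^{ikd}, e^{-ikd}) between elements.
import numpy as np

def scatterer(zeta):
    return np.array([[1 + 1j*zeta, 1j*zeta], [-1j*zeta, 1 - 1j*zeta]])

def propagation(k, d):
    return np.array([[np.exp(1j*k*d), 0], [0, np.exp(-1j*k*d)]])

k = 2 * np.pi              # wavenumber (wavelength = 1 in these units)
zeta = 5.0                 # strongly reflective: |r|^2 = zeta^2/(1+zeta^2)
d = np.linspace(0.0, 0.5, 501)
T = []
for di in d:
    M = scatterer(zeta) @ propagation(k, di) @ scatterer(zeta)
    T.append(abs(1.0 / M[1, 1])**2)          # t = det M / M11, det M = 1
i = int(np.argmax(T))
print(f"resonance near d = {d[i]:.3f}, peak |t|^2 = {T[i]:.3f}")
```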
Abstract:
We present results for a suite of 14 three-dimensional, high-resolution hydrodynamical simulations of delayed-detonation models of Type Ia supernova (SN Ia) explosions. This model suite comprises the first set of three-dimensional SN Ia simulations with detailed isotopic yield information. As such, it may serve as a database for Chandrasekhar-mass delayed-detonation model nucleosynthetic yields and for deriving synthetic observables such as spectra and light curves. We employ a physically motivated, stochastic model based on turbulent velocity fluctuations and fuel density to calculate in situ the deflagration-to-detonation transition probabilities. To obtain different strengths of the deflagration phase, and thereby different degrees of pre-expansion, we have chosen a sequence of initial models with 1, 3, 5, 10, 20, 40, 100, 150, 200, 300 and 1600 (two different realizations) ignition kernels in a hydrostatic white dwarf with a central density of 2.9 × 10^9 g cm^-3, as well as one high central density (5.5 × 10^9 g cm^-3) and one low central density (1.0 × 10^9 g cm^-3) rendition of the 100 ignition kernel configuration. For each simulation, we determined detailed nucleosynthetic yields by postprocessing 10^6 tracer particles with a 384-nuclide reaction network. All delayed-detonation models result in explosions unbinding the white dwarf, producing a range of 56Ni masses from 0.32 to 1.11 M⊙. As a general trend, the models predict that the stable neutron-rich iron-group isotopes are not found at the lowest velocities, but rather at intermediate velocities (~3000-10 000 km s^-1) in a shell surrounding a 56Ni-rich core. The models further predict relatively low-velocity oxygen and carbon, with typical minimum velocities around 4000 and 10 000 km s^-1, respectively. © 2012 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
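A small companion sketch for turning a tabulated 56Ni yield into the time-dependent 56Ni/56Co mix relevant for light-curve modelling, via the Bateman solution with the standard half-lives of 6.075 d (56Ni) and 77.2 d (56Co); the 0.6 M⊙ initial mass is just one example value from the quoted 0.32-1.11 M⊙ range.

```python
# Bateman solution for the 56Ni -> 56Co -> 56Fe decay chain.
import numpy as np

t_half_ni, t_half_co = 6.075, 77.2                 # half-lives in days
l_ni, l_co = np.log(2) / t_half_ni, np.log(2) / t_half_co

def decay_chain(m_ni0, t):
    """Masses of 56Ni and 56Co (Msun) at time t (days) after explosion."""
    m_ni = m_ni0 * np.exp(-l_ni * t)
    m_co = m_ni0 * l_ni / (l_co - l_ni) * (np.exp(-l_ni * t) - np.exp(-l_co * t))
    return m_ni, m_co

for t in (0.0, 20.0, 60.0):
    ni, co = decay_chain(0.6, t)
    print(f"t = {t:5.1f} d: 56Ni = {ni:.3f} Msun, 56Co = {co:.3f} Msun")
```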