931 results for Biologically optimal dose combination
Abstract:
In this paper, the load profile and operational goals are used to find the optimal sizing of a combined PV-energy storage system for a future grid-connected residential building. As part of this approach, five operational goals are introduced and the annual cost for each operational goal is assessed. The optimal sizing of the combined PV-energy storage system is then determined using a direct search method. In addition, the sensitivity of the annual cost to different parameters is analysed.
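The direct search described in this abstract amounts to exhaustively evaluating candidate sizing pairs and keeping the cheapest. The sketch below illustrates this; the cost function is a hypothetical stand-in (a real annual cost would come from simulating the load profile under a chosen operational goal) and every numeric value is an illustrative assumption, not from the paper.

```python
def annual_cost(pv_kw, storage_kwh):
    """Hypothetical annual cost: amortised capital cost plus a penalty
    for energy imported from the grid. All coefficients are made up
    for illustration; a real model would simulate the load profile."""
    capital = 120.0 * pv_kw + 90.0 * storage_kwh
    grid_import = max(0.0, 5000.0 - 900.0 * pv_kw - 150.0 * storage_kwh)
    return capital + 0.25 * grid_import

def direct_search(pv_sizes, storage_sizes):
    """Direct (exhaustive grid) search: evaluate every candidate pair
    of sizes and return (cost, pv, storage) for the cheapest one."""
    best = None
    for pv in pv_sizes:
        for st in storage_sizes:
            cost = annual_cost(pv, st)
            if best is None or cost < best[0]:
                best = (cost, pv, st)
    return best

cost, pv, st = direct_search([1, 2, 3, 4, 5], [0, 2, 4, 6, 8])
print(f"optimal: {pv} kW PV, {st} kWh storage, cost {cost:.0f}")
```

A finer grid (or a pattern-search variant) refines the answer at the price of more cost-function evaluations, which is the usual trade-off with direct search methods.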
Abstract:
Two sources of uncertainty in the X-ray computed tomography imaging of polymer gel dosimeters are investigated in this paper. The first is a change in post-irradiation density, which is proportional to the computed tomography signal and is associated with a volume change. The second is reconstruction noise. A simple technique that increases the residual signal-to-noise ratio by almost two orders of magnitude is examined.
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to modelling uncertainty.) When only incomplete state information is available, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that would result if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question of the optimality of the CE principle. We show that CE is, in fact, not optimal in general.
We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty arising from output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
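The certainty equivalence strategy described above (apply the deterministic feedback law to the state estimate as if it were the true state, then respect the input constraint) can be sketched for a scalar single-input system. The gain, saturation bound and observer parameters below are illustrative assumptions, not values from the chapter.

```python
def certainty_equivalence_control(x_hat, k, u_max):
    """CE sketch: apply the unconstrained deterministic feedback law
    u = -k*x to the state estimate as if it were the true state, then
    saturate to respect the input constraint |u| <= u_max."""
    u = -k * x_hat                      # deterministic law on the estimate
    return max(-u_max, min(u_max, u))   # input constraint

def observer_step(x_hat, y, u, a=0.9, b=1.0, c=1.0, ell=0.5):
    """One Luenberger-style update of the state estimate for a scalar
    system x+ = a*x + b*u + w, y = c*x + v (illustrative parameters)."""
    return a * x_hat + b * u + ell * (y - c * x_hat)

# Small estimate: the constraint is inactive; large estimate: it saturates.
print(certainty_equivalence_control(0.25, 2.0, 1.0))  # -0.5
print(certainty_equivalence_control(2.0, 2.0, 1.0))   # -1.0
```

The point made in the abstract is that this simple policy, while not optimal for the constrained stochastic problem, can come close enough that the extra effort of the truly optimal feedback policy is rarely warranted.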
Abstract:
The transplantation of autologous bone graft as a treatment for large bone defects is limited by harvesting co-morbidity and limited availability. This drives the orthopaedic research community to develop bone graft substitutes. Routinely, supra-physiological doses of bone morphogenetic proteins (BMPs) are applied, perpetuating concerns over undesired side effects and the cost of BMPs. We therefore aimed to design a composite scaffold that maintains protein bioactivity and enhances growth factor retention at the implantation site. Critical-sized defects in sheep tibiae were treated with autograft and with two dosages of rhBMP-7, 3.5 mg and 1.75 mg, embedded in a slowly degradable medical-grade poly(ε-caprolactone) (PCL) scaffold with β-tricalcium phosphate microparticles (mPCL-TCP). Specimens were characterised by biomechanical testing, microcomputed tomography and histology. Bridging was observed within 3 months for the autograft and both rhBMP-7 treatments. No significant difference was observed between the low and high rhBMP-7 dosages or between any of the rhBMP-7 groups and autograft implantation. Scaffolds alone did not induce levels of bone formation comparable to the autograft and rhBMP-7 groups. In summary, the mPCL-TCP scaffold with the lower rhBMP-7 dose led to results equivalent to autograft transplantation or the high BMP dosage. Our data suggest a promising clinical future for BMP application in scaffold-based bone tissue engineering, lowering and optimising the amount of BMP required.
Abstract:
Dose-finding designs estimate the dose level of a drug based on observed adverse events. Relatedness of the adverse event to the drug has been generally ignored in all proposed design methodologies. These designs assume that the adverse events observed during a trial are definitely related to the drug, which can lead to flawed dose-level estimation. We incorporate adverse event relatedness into the so-called continual reassessment method. Adverse events that have ‘doubtful’ or ‘possible’ relationships to the drug are modelled using a two-parameter logistic model with an additive probability mass. Adverse events ‘probably’ or ‘definitely’ related to the drug are modelled using a cumulative logistic model. To search for the maximum tolerated dose, we use the maximum estimated toxicity probability of these two adverse event relatedness categories. We conduct a simulation study that illustrates the characteristics of the design under various scenarios. This article demonstrates that adverse event relatedness is important for improved dose estimation. It opens up further research pathways into continual reassessment design methodologies.
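The two relatedness categories described above can be sketched as two toxicity curves whose pointwise maximum drives the dose search. The model below is only a shape-level illustration: the logistic parameters, the additive mass, the target rate and the selection rule are all illustrative assumptions, not the paper's fitted model.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def tox_doubtful(dose, intercept=-3.0, slope=1.2, mass=0.05):
    """Two-parameter logistic model with an additive probability mass,
    for adverse events 'doubtfully' or 'possibly' related to the drug.
    All parameter values are illustrative."""
    return min(1.0, mass + logistic(intercept + slope * dose))

def tox_related(dose, intercept=-4.0, slope=1.5):
    """Logistic model for adverse events 'probably' or 'definitely'
    related to the drug (illustrative parameters)."""
    return logistic(intercept + slope * dose)

def mtd(dose_levels, target=0.30):
    """Select the admissible dose with the highest estimated toxicity:
    the maximum over the two relatedness categories must not exceed
    the target toxicity rate. Returns (dose, toxicity) or None."""
    best = None
    for d in dose_levels:
        p = max(tox_doubtful(d), tox_related(d))
        if p <= target and (best is None or p > best[1]):
            best = (d, p)
    return best

mtd_dose, mtd_tox = mtd([1, 2, 3, 4, 5])
```

In an actual continual reassessment method the curve parameters would be re-estimated after each cohort's observed adverse events, so the selected dose evolves as the trial accrues data.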
Abstract:
We address the problem of finite horizon optimal control of discrete-time linear systems with input constraints and uncertainty. The uncertainty for the problem analysed is related to incomplete state information (output feedback) and stochastic disturbances. We analyse the complexities associated with finding optimal solutions. We also consider two suboptimal strategies that could be employed for larger optimization horizons.
Abstract:
Background: Display technologies which allow peptides or proteins to be physically associated with the encoding DNA are central to procedures which involve screening of protein libraries in vitro for new or altered function. Here we describe a new system designed specifically for the display of libraries of diverse, functional proteins which utilises the DNA binding protein nuclear factor κB (NF-κB) p50 to establish a phenotype–genotype link between the displayed protein and the encoding gene. Results: A range of model fusion proteins to either the amino- or carboxy-terminus of NF-κB p50 have been constructed and shown to retain the picomolar affinity and DNA specificity of wild-type NF-κB p50. Through use of an optimal combination of binding buffer and DNA target sequence, the half-life of p50–DNA complexes could be increased to over 47 h, enabling the competitive selection of a variety of protein–plasmid complexes with enrichment factors of up to 6000-fold per round. The p50-based plasmid display system was used to enrich a maltose binding protein complex to homogeneity in only three rounds from a binary mixture with a starting ratio of 1:10⁸ and to enrich to near homogeneity a single functional protein from a phenotype–genotype linked Escherichia coli genomic library using in vitro functional selections. Conclusions: A new display technology is described which addresses the challenge of functional protein display. The results demonstrate that plasmid display is sufficiently sensitive to select a functional protein from large libraries and that it therefore represents a useful addition to the repertoire of display technologies.
Abstract:
Introduction The consistency of measuring small field output factors is greatly increased by reporting the measured dosimetric field size of each factor, as opposed to simply stating the nominal field size [1], and therefore requires the measurement of cross-axis profiles in a water tank. However, this makes output factor measurements time consuming. This project establishes at which field sizes the accuracy of output factors is not affected by the use of potentially inaccurate nominal field sizes, which we believe establishes a practical working definition of a ‘small’ field. The physical components of the radiation beam that contribute to the rapid change in output factor at small field sizes are examined in detail. The physical interaction that dominates the rapid dose reduction is quantified, leading to the establishment of a theoretical definition of a ‘small’ field. Methods Current recommendations suggest that radiation collimation systems and isocentre defining lasers should both be calibrated to permit a maximum positioning uncertainty of 1 mm [2]. The proposed practical definition for small field sizes is as follows: if the output factor changes by ±1.0 % given a change in either field size or detector position of up to ±1 mm, then the field should be considered small. Monte Carlo modelling was used to simulate output factors of a 6 MV photon beam for square fields with side lengths from 4.0 to 20.0 mm in 1.0 mm increments. The dose was scored to a 0.5 mm wide and 2.0 mm deep cylindrical volume of water within a cubic water phantom, at a depth of 5 cm and an SSD of 95 cm. The maximum difference due to a collimator error of ±1 mm was found by comparing the output factors of adjacent field sizes. The output factor simulations were repeated 1 mm off-axis to quantify the effect of detector misalignment. Further simulations separated the total output factor into collimator scatter factor and phantom scatter factor.
The collimator scatter factor was further separated into primary source occlusion effects and ‘traditional’ effects (a combination of flattening filter and jaw scatter, etc.). The phantom scatter was separated into photon scatter and electronic disequilibrium. Each of these factors was plotted as a function of field size in order to quantify how each affected the change in output factor at small field sizes. Results The use of our practical definition resulted in field sizes of 15 mm or less being characterised as ‘small’. The change in field size had a greater effect than that of detector misalignment. For field sizes of 12 mm or less, electronic disequilibrium was found to cause the largest change in dose to the central axis (d = 5 cm). Source occlusion also caused a large change in output factor for field sizes less than 8 mm. Discussion and conclusions The measurement of cross-axis profiles is only required for output factor measurements at field sizes of 15 mm or less (for a 6 MV beam on a Varian iX linear accelerator). This is expected to be dependent on linear accelerator spot size and photon energy. While some electronic disequilibrium was shown to occur at field sizes as large as 30 mm (the ‘traditional’ definition of a small field [3]), it has been shown that it does not cause a greater change than photon scatter until a field size of 12 mm, at which point it becomes by far the most dominant effect.
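The proposed practical definition (a field is ‘small’ if a ±1 mm change in field size shifts the output factor by more than ±1 %) can be sketched as a simple check over a table of output factors at 1 mm increments. The output factor values below are made up to show the shape of the test; they are not the paper's Monte Carlo data.

```python
def is_small_field(output_factors, tolerance=0.01):
    """Practical small-field test: flag a field size as 'small' if a
    ±1 mm change in side length changes the output factor by more than
    `tolerance` (relative). `output_factors` maps side length in mm
    (1 mm steps) to an output factor; edge sizes lack a neighbour on
    one side and are not classified."""
    sizes = sorted(output_factors)
    small = set()
    for prev, cur, nxt in zip(sizes, sizes[1:], sizes[2:]):
        of = output_factors[cur]
        change = max(abs(output_factors[prev] - of),
                     abs(output_factors[nxt] - of)) / of
        if change > tolerance:
            small.add(cur)
    return small

# Illustrative output factors (side length in mm -> output factor):
ofs = {10: 0.70, 11: 0.74, 12: 0.78, 13: 0.81, 14: 0.83,
       15: 0.845, 16: 0.852, 17: 0.858, 18: 0.863}
print(sorted(is_small_field(ofs)))  # → [11, 12, 13, 14, 15]
```

With measured or simulated output factors substituted in, the same check reproduces the kind of 15 mm cut-off reported in the abstract for a given beam and machine.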
Abstract:
Introduction Given the known challenges of obtaining accurate measurements of small radiation fields, and the increasing use of small field segments in IMRT beams, this study examined the possible effects of referencing inaccurate field output factors in the planning of IMRT treatments. Methods This study used the Brainlab iPlan treatment planning system to devise IMRT treatment plans for delivery using the Brainlab m3 microMLC (Brainlab, Feldkirchen, Germany). Four pairs of sample IMRT treatments were planned using volumes, beams and prescriptions that were based on a set of test plans described in AAPM TG 119’s recommendations for the commissioning of IMRT treatment planning systems [1]:
• C1, a set of three 4 cm volumes with different prescription doses, was modified to reduce the size of the PTV to 2 cm across and to include an OAR dose constraint for one of the other volumes.
• C2, a prostate treatment, was planned as described by the TG 119 report [1].
• C3, a head-and-neck treatment with a PTV larger than 10 cm across, was excluded from the study.
• C4, an 8 cm long C-shaped PTV surrounding a cylindrical OAR, was planned as described in the TG 119 report [1] and then replanned with the length of the PTV reduced to 4 cm.
Both plans in each pair used the same beam angles, collimator angles, dose reference points, prescriptions and constraints. However, one of each pair of plans had its beam modulation optimisation and dose calculation completed with reference to existing iPlan beam data and the other had its beam modulation optimisation and dose calculation completed with reference to revised beam data. The beam data revisions consisted of increasing the field output factor for a 0.6 × 0.6 cm² field by 17 % and increasing the field output factor for a 1.2 × 1.2 cm² field by 3 %.
Results The use of different beam data resulted in different optimisation results with different microMLC apertures and segment weightings between the two plans for each treatment, which led to large differences (up to 30 % with an average of 5 %) between reference point doses in each pair of plans. These point dose differences are more indicative of the modulation of the plans than of any clinically relevant changes to the overall PTV or OAR doses. By contrast, the differences in the maximum, minimum and mean doses to the PTVs and OARs were smaller (less than 1 % for all beams in three out of four pairs of treatment plans) but are more clinically important. Of the four test cases, only the shortened (4 cm) version of TG 119’s C4 plan showed substantial differences between the overall doses calculated in the volumes of interest using the different sets of beam data, and thereby suggested that treatment doses could be affected by changes to small field output factors. An analysis of the complexity of this pair of plans, using Crowe et al.’s TADA code [2], indicated that iPlan’s optimiser had produced IMRT segments composed of larger numbers of small microMLC leaf separations than in the other three test cases. Conclusion: The use of altered small field output factors can result in substantially altered doses when large numbers of small leaf apertures are used to modulate the beams, even when treating relatively large volumes.
Abstract:
The reporting and auditing of patient dose is an important component of radiotherapy quality assurance. The manual extraction of dose-volume metrics is time consuming and undesirable when auditing the dosimetric quality of a large cohort of patient plans. A dose assessment application was written to overcome this, allowing the calculation of various dose-volume metrics for large numbers of plans exported from treatment planning systems. This application expanded on the DICOM-handling functionality of the MCDTK software suite. The software extracts dose values in the volume of interest by using a ray casting point-in-polygon algorithm, where the polygons have been defined by the contours in the RTSTRUCT file...
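The ray casting point-in-polygon test mentioned above is a standard algorithm: cast a horizontal ray from the query point and count how many polygon edges it crosses; an odd count means the point lies inside the contour. A minimal sketch (the contour here is a made-up square, not data read from an RTSTRUCT file):

```python
def point_in_polygon(x, y, polygon):
    """Ray casting (even-odd) test. `polygon` is a list of (x, y)
    vertices in order, such as the contour points of one structure
    slice. A horizontal ray is cast to the right of (x, y); each edge
    that straddles the ray's y-coordinate and crosses to the right of
    the point toggles the inside/outside state."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                        # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                             # crossing is to the right
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, square))   # → True
print(point_in_polygon(15, 5, square))  # → False
```

Run per dose-grid point against each contour of the structure, this test selects the dose values inside the volume of interest, from which dose-volume metrics can then be accumulated.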
Abstract:
Objective Recently, Taylor et al. reported that use of the BrainLAB m3 microMLC for stereotactic radiosurgery results in a decreased out-of-field dose in the direction of leaf motion compared to the out-of-field dose measured in the direction orthogonal to leaf motion [1]. It was recommended that, where possible, patients should be treated with their superior–inferior axes aligned with the microMLC's leaf-motion direction, to minimise out-of-field doses [1]. This study aimed, therefore, to examine the causes of this asymmetry in out-of-field dose and, in particular, to establish that a similar recommendation need not be made for radiotherapy treatments delivered by linear accelerators without external micro-collimation systems. Methods Monte Carlo simulations were used to study out-of-field dose from different linear accelerators (the Varian Clinacs 21iX and 600C and the Elekta Precise) with and without internal MLCs and external microMLCs [2]. Results Simulation results for the Varian Clinac 600C linear accelerator with the BrainLAB m3 microMLC confirm Taylor et al.'s [1] published experimental data. The out-of-field dose in the leaf-motion direction is deposited by lower-energy (more obliquely scattered) photons than the out-of-field dose in the orthogonal direction. Linear accelerators without microMLCs produce no asymmetry in out-of-field dose. Conclusions The asymmetry in out-of-field dose previously measured by Taylor et al. [1] results from the shielding characteristics of the BrainLAB m3 microMLC device and is not produced by the linear accelerator to which it is attached.
Abstract:
Cancers of the brain and central nervous system account for 1.6% of new cancers and 1.8% of cancer deaths globally. The highest rates of all developed nations are observed in Australia and New Zealand. There are known complexities associated with dose measurement of very small radiation fields. Here, 3D dosimetric verification of treatments for small intracranial tumours using gel dosimetry was investigated.
Abstract:
Jackson (2005) developed a hybrid model of personality and learning, known as the learning styles profiler (LSP), which was designed to span the biological, socio-cognitive, and experiential research foci of personality and learning research. The hybrid model argues that functional and dysfunctional learning outcomes can best be understood in terms of how cognitions and experiences control, discipline, and re-express the biologically based scale of sensation-seeking. In two studies with part-time workers undertaking tertiary education (N = 137 and 58), established models of approach and avoidance from each of the three different research foci were compared with Jackson's hybrid model in their prediction of leadership, work, and university outcomes using self-report and supervisor ratings. Results showed that the hybrid model was generally optimal and, as hypothesized, that goal orientation mediated the effect of sensation-seeking on outcomes (work performance, university performance, leader behaviours, and counterproductive work behaviour). Our studies suggest that the hybrid model has considerable promise as a predictor of work and educational outcomes as well as dysfunctional outcomes.
Abstract:
This study explored the interaction between physical and psychosocial factors in the workplace on neck pain and disability in female computer users. A self-report survey was used to collect data on physical risk factors (monitor location, duration of time spent using the keyboard and mouse) and psychosocial domains (as assessed by the Job Content Questionnaire). The neck disability index was the outcome measure. Interactions among the physical and psychosocial factors were examined in analysis of covariance. High supervisor support, decision authority and skill discretion protect against the negative impact of (1) time spent on computer-based tasks, (2) non-optimal placement of the computer monitor, and (3) long duration of mouse use. Office workers with greater neck pain experience a combination of high physical and low psychosocial stressors at work. Prevention and intervention strategies that target both sets of risk factors are likely to be more successful than single-intervention programmes. Statement of Relevance: The results of this study demonstrate that the interaction of physical and psychosocial factors in the workplace has a stronger association with neck pain and disability than the presence of either factor alone. This finding has important implications for strategies aimed at the prevention of musculoskeletal problems in office workers.