6 results for Branch-and-bound algorithm

in DigitalCommons@The Texas Medical Center


Relevance:

100.00%

Abstract:

A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources.

The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic.

The nursing distribution system was a linear programming model using a branch-and-bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for allocation of staff.

The demand constraints required meeting the total acuity points needed for each unit and having a minimum number of RNs on each unit. The supply constraints were: (1) the total availability of each type of staff and the value of that staff member, where value was determined relative to that staff type's ability to perform the job functions of an RN (e.g., value for eight hours: RN = 8 points, LVN = 6 points); and (2) the number of personnel available for floating between units.

The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints.

Further investigation of the model documented correct adjustment of assignments in response to changes in staff values, and cost minimization through the addition of a dollar coefficient to the objective function.
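
The staffing model described in this abstract is a small integer linear program of the kind typically handled by a branch-and-bound solver. The sketch below, written with the open-source PuLP library, illustrates only the general structure (a penalty-weighted objective, acuity and minimum-RN demand constraints, and staff-availability supply constraints); the unit names, staff values, and coefficients are illustrative assumptions, not the dissertation's data.

    # Minimal sketch of the type of integer program described above, using PuLP.
    # All numbers below are illustrative placeholders.
    import pulp

    units = ["ICU", "MedSurg"]
    staff_types = {"RN": 8, "LVN": 6}           # value in acuity points per 8-hour shift
    available = {"RN": 10, "LVN": 6}            # total staff of each type on hand
    acuity_demand = {"ICU": 60, "MedSurg": 48}  # required acuity points per unit
    min_rn = {"ICU": 3, "MedSurg": 2}           # minimum RNs required per unit
    penalty = {("RN", "ICU"): 1.0, ("RN", "MedSurg"): 1.2,
               ("LVN", "ICU"): 1.5, ("LVN", "MedSurg"): 1.1}

    prob = pulp.LpProblem("nurse_allocation", pulp.LpMinimize)

    # Integer decision variables: staff of each type assigned to each unit.
    x = {(s, u): pulp.LpVariable(f"x_{s}_{u}", lowBound=0, cat="Integer")
         for s in staff_types for u in units}

    # Objective: minimize penalty-weighted total personnel assigned.
    prob += pulp.lpSum(penalty[s, u] * x[s, u] for s in staff_types for u in units)

    # Demand constraints: meet each unit's acuity points and minimum RN coverage.
    for u in units:
        prob += pulp.lpSum(staff_types[s] * x[s, u] for s in staff_types) >= acuity_demand[u]
        prob += x["RN", u] >= min_rn[u]

    # Supply constraints: do not exceed available staff of each type.
    for s in staff_types:
        prob += pulp.lpSum(x[s, u] for u in units) <= available[s]

    prob.solve()  # PuLP's default CBC solver branches and bounds to an integer optimum
    for (s, u), var in x.items():
        print(s, u, int(var.value()))

PuLP's bundled CBC solver obtains the integer solution by branch and bound, mirroring the technique named in the abstract.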

Relevance:

100.00%

Abstract:

A detailed microdosimetric characterization of the M. D. Anderson 42 MeV (p,Be) fast neutron beam was performed using a 1/2 inch diameter Rossi proportional counter. These measurements were performed at 5, 15, and 30 cm depths on the central axis, 3 cm inside, and 3 cm outside the field edge for 10 × 10 and 20 × 20 cm field sizes. Spectra were also measured at 5 and 15 cm depth on the central axis for a 6 × 6 cm field size. Continuous slowing down approximation calculations were performed to model the nuclear processes that occur in the fast neutron beam. Irradiation of CR-39 plastic nuclear track detectors was performed using a tandem electrostatic accelerator with protons of 10, 6, and 3 MeV and alpha particles of 15, 10, and 7 MeV incident energy on target, at angles of incidence from 0 to 85 degrees. The critical angle, as well as the track etch rate and normal-incidence diameter versus linear energy transfer (LET), were obtained from these measurements. The bulk etch rate was also calculated from these measurements. The dose response of the material was studied, and the angular distribution of charged particles created by the fast neutron beam was measured with CR-39. The efficiency of CR-39 was calculated relative to that of the Rossi chamber, and an algorithm was devised for derivation of LET spectra from the major and minor axis dimensions of the observed tracks. The CR-39 was irradiated in the same positions as the Rossi chamber, and the derived spectra were compared directly.
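
As a rough illustration of the track-geometry step mentioned at the end of the abstract, the sketch below shows how an LET value might be recovered from an etched-track measurement in the simple normal-incidence case. The dissertation's algorithm uses both the major and minor axis dimensions of oblique tracks together with measured calibrations; the etch-rate relation and calibration numbers here are generic placeholders.

    # Minimal sketch: LET from a normal-incidence etched track in a detector such
    # as CR-39. Calibration values are assumed, not the dissertation's data.
    import numpy as np

    def etch_rate_ratio(diameter_um, bulk_etch_um):
        """V = v_T/v_B from track diameter d and bulk etch depth h, using the
        standard normal-incidence relation d = 2h*sqrt((V-1)/(V+1))."""
        d, h = diameter_um, bulk_etch_um
        return (4 * h**2 + d**2) / (4 * h**2 - d**2)

    # Placeholder calibration of etch-rate ratio V versus LET, e.g. from
    # accelerator runs with known-energy protons and alphas.
    cal_V = np.array([1.1, 1.4, 2.0, 3.5, 6.0])      # assumed values
    cal_LET = np.array([10., 25., 60., 150., 400.])  # keV/um, assumed values

    def let_from_track(diameter_um, bulk_etch_um):
        """Interpolate LET from the calibration curve of V versus LET."""
        V = etch_rate_ratio(diameter_um, bulk_etch_um)
        return np.interp(V, cal_V, cal_LET)

    print(let_from_track(diameter_um=4.0, bulk_etch_um=5.0))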

Relevance:

100.00%

Abstract:

It is well accepted that tumorigenesis is a multi-step process involving aberrant functioning of genes regulating cell proliferation, differentiation, apoptosis, genome stability, angiogenesis and motility. To obtain a full understanding of tumorigenesis, it is necessary to collect information on all aspects of cell activity. Recent advances in high-throughput technologies allow biologists to generate massive amounts of data, far more than could have been imagined decades ago. These advances have made it possible to launch comprehensive projects such as The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC), which systematically characterize the molecular fingerprints of cancer cells using gene expression, methylation, copy number, microRNA and SNP microarrays, as well as next-generation sequencing assays interrogating somatic mutations, insertions, deletions, translocations and structural rearrangements. Given the massive amount of data, a major challenge is to integrate information from multiple sources and formulate testable hypotheses.

This thesis focuses on developing methodologies for integrative analyses of genomic assays profiled on the same set of samples. We have developed several novel methods for integrative biomarker identification and cancer classification. We introduce a regression-based approach to identify biomarkers predictive of therapy response or survival by integrating multiple assays, including gene expression, methylation and copy number data, through penalized regression. To identify key cancer-specific genes accounting for multiple mechanisms of regulation, we have developed the integIRTy software, which provides robust and reliable inferences about gene alteration by automatically adjusting for sample heterogeneity as well as technical artifacts using Item Response Theory. To cope with the increasing need for accurate cancer diagnosis and individualized therapy, we have developed a robust and powerful algorithm called SIBER to systematically identify bimodally expressed genes using next-generation RNAseq data. We have shown that prediction models built from these bimodal genes have the same accuracy as models built from all genes. Further, prediction models with dichotomized gene expression measurements based on their bimodal shapes still perform well. The effectiveness of outcome prediction using discretized signals paves the way for more accurate and interpretable cancer classification by integrating signals from multiple sources.
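
As a rough sketch of the penalized-regression idea for integrating assays, the snippet below concatenates expression, methylation, and copy-number features and fits an L1-penalized model with scikit-learn. The data are random placeholders, and the thesis's actual model, tuning, and feature handling may differ.

    # Minimal sketch: penalized regression over concatenated assay types to flag
    # candidate biomarkers of a continuous response. Data are synthetic.
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_samples = 80
    expr = rng.normal(size=(n_samples, 200))  # gene expression features
    meth = rng.normal(size=(n_samples, 150))  # methylation features
    cnv = rng.normal(size=(n_samples, 100))   # copy-number features

    X = np.hstack([expr, meth, cnv])
    y = expr[:, 0] - 0.5 * meth[:, 3] + rng.normal(scale=0.5, size=n_samples)

    X_std = StandardScaler().fit_transform(X)
    model = LassoCV(cv=5).fit(X_std, y)   # L1 penalty shrinks most coefficients to zero

    selected = np.flatnonzero(model.coef_)  # indices of candidate biomarkers
    print(f"{selected.size} features retained out of {X.shape[1]}")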

Relevance:

100.00%

Abstract:

The MDAH pencil-beam algorithm developed by Hogstrom et al. (1981) has been widely used in clinics for electron beam dose calculations in radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The source of the latter inaccuracy is believed to be primarily due to assumptions made in the pencil beam's modeling of the complex phantom or patient geometry.

A pencil-beam redefinition model was developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport and redefines the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made. From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site-specific treatment planning problems.
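
For orientation, the sketch below illustrates the generic Gaussian pencil-beam superposition idea that algorithms of this family build on: the dose at a point is a sum over pencil kernels whose lateral spread grows with depth, weighted by a central-axis depth-dose term. The spread and depth-dose functions are illustrative stand-ins rather than the MDAH algorithm's measured beam data, and neither inhomogeneity corrections nor the redefinition step are modeled.

    # Minimal sketch of Gaussian pencil-beam superposition for an electron field.
    # All beam parameters below are assumed, illustrative values.
    import numpy as np

    def sigma_mm(z_mm):
        # Assumed lateral spread growing with depth (Fermi-Eyges-like behaviour).
        return 1.0 + 0.08 * z_mm

    def central_axis_dose(z_mm, r80_mm=40.0):
        # Assumed electron depth-dose shape: plateau with a smooth distal falloff.
        return 1.0 / (1.0 + np.exp((z_mm - r80_mm) / 3.0))

    def dose_profile(x_mm, z_mm, field_half_width_mm=50.0, n_pencils=201):
        """Superpose Gaussian pencil beams spanning the field at depth z."""
        xp = np.linspace(-field_half_width_mm, field_half_width_mm, n_pencils)
        s = sigma_mm(z_mm)
        kernels = np.exp(-(x_mm[:, None] - xp[None, :])**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
        return central_axis_dose(z_mm) * kernels.sum(axis=1) * (xp[1] - xp[0])

    x = np.linspace(-80, 80, 161)
    profile = dose_profile(x, z_mm=30.0)
    print(profile.max(), profile[0])  # near-unity inside the field, small outside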

Relevance:

100.00%

Abstract:

β-adrenergic receptor-mediated activation of adenylate cyclase exhibits an agonist-specific separation between the dose/response curve (characterized by the EC50) and the dose/binding curve (characterized by the Kd). Cyclase activity can be near-maximal when receptor occupancy is quite low (EC50 << Kd). This separation between the binding and response curves can be explained by the assumption that the rate of cyclase activation is proportional to the concentration of agonist-bound receptors, since the receptor is mobile and can activate more than one cyclase (the Collision Coupling Model of Tolkovsky and Levitzki). Here it is established that agonist binding frequency plays an additional role in adenylate cyclase activation in S49 murine lymphoma cells. Using epinephrine (EC50 = 10 nM, Kd = 2 µM), the rate of cyclase activation decreased by 80% when a small receptor occupancy (1.5%) was restricted, by addition of the antagonist propranolol, to a correspondingly small fraction (1.5%) of the receptors rather than being distributed proportionally among the cell's entire population of receptors. Thus adenylate cyclase activity is not proportional to receptor occupancy in all circumstances. Collisions between receptor and cyclase pairs apparently occur a number of times in rapid sequence (an encounter); the high binding frequency of epinephrine ensures that discontiguous regions of the cell surface experience some period of agonist-bound receptor activity per small unit of time, minimizing "wasted" collisions between activated cyclase and bound receptor within an encounter. A contribution of agonist binding frequency to activation is thus possible when: (1) the mean lifetime of the agonist-receptor complex is shorter than the mean encounter time, and (2) the absolute efficiency (intrinsic ability to promote cyclase activation per collision) of the agonist-receptor complex is high. These conclusions are supported by experiments using agonists of different efficiencies and binding frequencies. The results are formalized in the Encounter Coupling Model of adenylate cyclase activation, which takes into explicit account the agonist binding frequency, agonist affinity for the β-adrenergic receptor, agonist efficiency, the encounter frequency, and the encounter time between receptor and cyclase.
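
A small numerical sketch can make the EC50/Kd separation concrete: if each agonist-bound receptor can activate many cyclases, the activated fraction measured after a fixed time saturates at agonist concentrations far below Kd. The rate constant and time below are arbitrary placeholders, and the sketch reproduces only the collision-coupling behaviour, not the binding-frequency (encounter-coupling) effect the dissertation characterizes.

    # Minimal sketch of collision-coupling kinetics: response after a fixed time
    # as a function of agonist concentration, compared with receptor occupancy.
    import numpy as np

    Kd = 2.0e-6    # agonist dissociation constant, M (epinephrine value from the abstract)
    k_act = 50.0   # assumed first-order activation rate at full occupancy, 1/min
    t = 2.0        # assumed activation time, min

    agonist = np.logspace(-9, -4, 200)               # agonist concentration, M
    occupancy = agonist / (agonist + Kd)             # fractional receptor occupancy
    response = 1.0 - np.exp(-k_act * occupancy * t)  # fraction of cyclase activated at time t

    ec50 = agonist[np.argmin(np.abs(response - 0.5 * response.max()))]
    print(f"apparent EC50 ~ {ec50:.1e} M versus Kd = {Kd:.1e} M")

With these placeholder values the apparent EC50 falls in the low-nanomolar range while Kd stays in the micromolar range, reproducing the kind of separation quoted for epinephrine in the abstract.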

Relevance:

100.00%

Abstract:

p53 mutations are the most commonly observed genetic alterations in human cancers to date. A majority of these point mutations cluster in four evolutionarily conserved domains spanning amino acids 100-300. This region of p53 has been called its central conserved, or conformational, domain. This domain of p53 is also targeted by the SV40 T antigen. Mutation, as well as interaction with SV40 T antigen, results in inactivation of p53. We hypothesized that mutations and SV40 T antigen disrupt p53 function by interfering with the molecular interactions of the central conserved domain. Using a chimeric protein consisting of the central conserved domain of wild-type p53 (amino acids 115-295) and a protein A affinity tail, we isolated several cellular proteins that interact specifically with this domain of p53. These proteins range in size from 30K to 90K Mr. We also employed the p53 fusion protein to demonstrate that the central conserved domain of p53 possesses sequence-specific DNA-binding activity. Interestingly, the cellular proteins binding to the central conserved domain of p53 enhance the sequence-specific DNA-binding activity of full-length p53. Partial purification of the individual proteins binding to the conformational domain of p53 using a sodium chloride step gradient enabled further characterization of two proteins: (1) a 42K Mr protein that eluted at 0.5 M NaCl and bound DNA nonspecifically, and (2) a 35K Mr protein eluting in the 1.0 M NaCl fraction, capable of enhancing the sequence-specific DNA-binding activity of p53. To determine the physiologic relevance of the molecular interactions of the conformational domain of p53, we examined the biochemical processes underlying the TNF-α mediated growth suppression of the NSCLC cell line H460. While growth suppression was accompanied by enhanced sequence-specific p53-DNA binding activity in TNF-α treated H460 nuclei, there was no increase in p53 protein levels. Furthermore, p35 was upregulated in TNF-α treated H460 cells, suggesting that the enhanced p53-DNA binding seen in these cells may be mediated by p35. Our studies define two novel interactions involving the central conserved domain of p53 that appear to be functionally relevant: (1) sequence-specific DNA-binding, and (2) interaction with other cellular proteins.