Abstract:
AIMS: To investigate the potential dosimetric and clinical benefits predicted by using four-dimensional computed tomography (4DCT) compared with 3DCT in the planning of radical radiotherapy for non-small cell lung cancer.
MATERIALS AND METHODS:
Twenty patients were planned using free-breathing 4DCT and then retrospectively delineated on three-dimensional helical scan sets (3DCT). Beam arrangement and total dose (55 Gy in 20 fractions) were matched for the 3D and 4D plans. Plans were compared for differences in planning target volume (PTV) geometry and normal tissue complication probability (NTCP) for organs at risk using dose volume histograms. Tumour control probability and NTCP were modelled using the Lyman-Kutcher-Burman (LKB) model. This was compared with a predictive clinical algorithm (Maastro), which is based on patient characteristics, including age, performance status, smoking history, lung function, tumour staging and concomitant chemotherapy, to predict survival and toxicity outcomes. Potential therapeutic gains were investigated by applying isotoxic dose escalation to both plans using constraints for mean lung dose (18 Gy), oesophageal maximum (70 Gy) and spinal cord maximum (48 Gy).
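The LKB model referenced above combines a generalized equivalent uniform dose (gEUD) reduction of the dose-volume histogram with a probit dose-response curve. A minimal sketch, with an illustrative two-bin DVH and parameter values that are typical of the literature but not taken from this study:

```python
import math

def geud(doses, volumes, n):
    """Generalized EUD from a DVH: (sum_i v_i * d_i**(1/n))**n.
    n = 1 reduces to the mean dose (large volume effect, e.g. lung)."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def lkb_ntcp(doses, volumes, n, m, td50):
    """LKB NTCP: standard normal CDF of t = (gEUD - TD50) / (m * TD50)."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical two-bin lung DVH: half the volume at 10 Gy, half at 20 Gy
p = lkb_ntcp([10.0, 20.0], [0.5, 0.5], n=1.0, m=0.37, td50=24.5)
print(round(p, 3))
```

Here `n` controls the volume effect, `m` sets the slope of the dose-response curve and TD50 is the uniform dose giving a 50% complication probability; the DVH and parameter values above are hypothetical.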
RESULTS:
4DCT-based plans had lower PTV volumes, a lower dose to organs at risk and lower predicted NTCP rates on LKB modelling (P < 0.006). The clinical algorithm showed no difference in predicted 2-year survival and dyspnoea rates between the groups, but did predict lower oesophageal toxicity with 4DCT plans (P = 0.001). There was no correlation between LKB modelling and the clinical algorithm for lung toxicity or survival. Dose escalation was possible in 15/20 cases, with a mean dose increase by a factor of 1.19 (10.45 Gy) using 4DCT compared with 3DCT plans.
CONCLUSIONS:
4DCT can theoretically improve the therapeutic ratio and dose escalation based on dosimetric parameters and mathematical modelling. However, when individual patient characteristics are incorporated, this gain may be less evident in terms of survival and dyspnoea rates. 4DCT allows potential for isotoxic dose escalation, which may lead to improved local control and better overall survival.
Abstract:
Quantum annealing is a promising tool for solving optimization problems, similar in some ways to the traditional (classical) simulated annealing of Kirkpatrick et al. Simulated annealing takes advantage of thermal fluctuations in order to explore the optimization landscape of the problem at hand, whereas quantum annealing employs quantum fluctuations. Intriguingly, quantum annealing has been proved to be more effective than its classical counterpart in many applications. We illustrate the theory and the practical implementation of both classical and quantum annealing - highlighting the crucial differences between these two methods - by means of results recently obtained in experiments, in simple toy models, and in more challenging combinatorial optimization problems (namely, the random Ising model and the travelling salesman problem). The techniques used to implement quantum and classical annealing are either deterministic evolutions, for the simplest models, or Monte Carlo approaches, for harder optimization tasks. We discuss the pros and cons of these approaches and their possible connections to the landscape of the problem addressed.
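As a concrete illustration of the classical half of this comparison, here is a minimal Metropolis simulated-annealing loop for a random-bond 1D Ising chain (a toy version of the random Ising model mentioned above; the cooling schedule and proposal counts are arbitrary illustrative choices, not those of the paper):

```python
import math
import random

random.seed(0)

# Random-bond 1D Ising chain: E(s) = -sum_i J_i * s_i * s_{i+1}
N = 30
J = [random.choice([-1, 1]) for _ in range(N - 1)]

def energy(s):
    return -sum(J[i] * s[i] * s[i + 1] for i in range(N - 1))

s = [random.choice([-1, 1]) for _ in range(N)]
e = energy(s)
T = 2.0
while T > 0.01:                 # geometric cooling schedule
    for _ in range(200):        # 200 single-flip proposals per temperature
        i = random.randrange(N)
        s[i] = -s[i]            # propose a single-spin flip
        e_new = energy(s)
        # accept downhill moves always, uphill with Boltzmann probability
        if e_new <= e or random.random() < math.exp(-(e_new - e) / T):
            e = e_new
        else:
            s[i] = -s[i]        # reject: undo the flip
    T *= 0.9

# an open chain can satisfy every bond, so the true minimum is -(N - 1)
print(e)
```

Quantum annealing replaces the thermal acceptance rule with quantum fluctuations (e.g. a transverse field reduced over time), typically simulated with path-integral Monte Carlo for problems of this kind.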
Abstract:
In a companion paper, Seitenzahl et al. have presented a set of three-dimensional delayed detonation models for thermonuclear explosions of near-Chandrasekhar-mass white dwarfs (WDs). Here, we present multidimensional radiative transfer simulations that provide synthetic light curves and spectra for those models. The model sequence explores both changes in the strength of the deflagration phase (which is controlled by the ignition configuration in our models) and the WD central density. In agreement with previous studies, we find that the strength of the deflagration significantly affects the explosion and the observables. Variations in the central density also have an influence on both brightness and colour, but overall it is a secondary parameter in our set of models. In many respects, the models yield a good match to the observed properties of normal Type Ia supernovae (SNe Ia): peak brightness, rise/decline time-scales and synthetic spectra are all in reasonable agreement. There are, however, several differences. In particular, the models are systematically too red around maximum light, manifest spectral line velocities that are a little too high and yield I-band light curves that do not match observations. Although some of these discrepancies may simply relate to approximations made in the modelling, some pose real challenges to the models. If viewed as a complete sequence, our models do not reproduce the observed light-curve width-luminosity relation (WLR) of SNe Ia: all our models show rather similar B-band decline rates, irrespective of peak brightness. This suggests that simple variations in the strength of the deflagration phase in Chandrasekhar-mass deflagration-to-detonation models do not readily explain the observed diversity of normal SNe Ia. This may imply that some other parameter within the Chandrasekhar-mass paradigm is key to the WLR, or that a substantial fraction of normal SNe Ia arise from an alternative explosion scenario.
Integrating Multiple Point Statistics with Aerial Geophysical Data to assist Groundwater Flow Models
Abstract:
The process of accounting for heterogeneity has made significant advances in statistical research, primarily in the framework of stochastic analysis and the development of multiple-point statistics (MPS). Among MPS techniques, the direct sampling (DS) method is tested to determine its ability to delineate heterogeneity from aerial magnetics data in a regional sandstone aquifer intruded by low-permeability volcanic dykes in Northern Ireland, UK. The use of two two-dimensional bivariate training images aids in creating spatial probability distributions of heterogeneities of hydrogeological interest, despite relatively ‘noisy’ magnetics data (i.e. including hydrogeologically irrelevant urban noise and regional geologic effects). These distributions are incorporated into a hierarchical system in which previously published density function and upscaling methods are applied to derive regional distributions of the equivalent hydraulic conductivity tensor K. Several K models, as determined by several stochastic realisations of MPS dyke locations, are computed within groundwater flow models and evaluated by comparing modelled heads with field observations. Results show a significant improvement in model calibration when compared with a simplistic homogeneous and isotropic aquifer model that does not account for the dyke occurrence evidenced by airborne magnetic data. The best model is obtained when normal and reverse polarity dykes are computed separately within MPS simulations and when a probability threshold of 0.7 is applied. The presented stochastic approach also provides an improvement when compared with a previously published deterministic anisotropic model based on the unprocessed (i.e. noisy) airborne magnetics. This demonstrates the potential of coupling MPS to airborne geophysical data for regional groundwater modelling.
Abstract:
Background
Organ dysfunction consequent to infection (‘severe sepsis’) is the leading cause of admission to an intensive care unit (ICU). In both animal models and early clinical studies, the calcium sensitizer levosimendan has been demonstrated to have potentially beneficial effects on organ function. The aims of the Levosimendan for the Prevention of Acute oRgan Dysfunction in Sepsis (LeoPARDS) trial are to identify whether a 24-hour infusion of levosimendan will improve organ dysfunction in adults who have septic shock and to establish the safety profile of levosimendan in this group of patients.
Methods/Design
This is a multicenter, randomized, double-blind, parallel group, placebo-controlled trial. Adults fulfilling the criteria for systemic inflammatory response syndrome due to infection, and requiring vasopressor therapy, will be eligible for inclusion in the trial. Within 24 hours of meeting these inclusion criteria, patients will be randomized in a 1:1 ratio, stratified by ICU, to receive either levosimendan (0.05 to 0.2 μg.kg-1.min-1) or placebo for 24 hours in addition to standard care. The primary outcome measure is the mean Sequential Organ Failure Assessment (SOFA) score while in the ICU. Secondary outcomes include: central venous oxygen saturations and cardiac output; incidence and severity of renal failure using the Acute Kidney Injury Network criteria; duration of renal replacement therapy; serum bilirubin; time to liberation from mechanical ventilation; 28-day, hospital, and 3- and 6-month survival; ICU and hospital length of stay; and days free from catecholamine therapy. Blood and urine samples will be collected on the day of inclusion, at 24 hours, and on days 4 and 6 post-inclusion for investigation of the mechanisms by which levosimendan might improve organ function. Eighty patients will have additional blood samples taken to measure levels of levosimendan and its active metabolites OR-1896 and OR-1855. A total of 516 patients will be recruited from approximately 25 ICUs in the United Kingdom.
Discussion
This trial will test the efficacy of levosimendan to reduce acute organ dysfunction in adult patients who have septic shock and evaluate its biological mechanisms of action.
Abstract:
The use of handheld near infrared (NIR) instrumentation, as a tool for rapid analysis, has the potential to be used widely in the animal feed sector. A comparison was made between handheld NIR and benchtop instruments for the proximate analysis of poultry feed, using off-the-shelf calibration models and including statistical analysis. Additionally, melamine-adulterated soya bean products were used to develop qualitative and quantitative calibration models from the NIRS spectral data, with excellent calibration models and prediction statistics obtained. For the quantitative approach, the coefficients of determination (R2) were found to be 0.94-0.99, while the corresponding values for the root mean square error of calibration and prediction were 0.081-0.215% and 0.095-0.288%, respectively. In addition, cross-validation was used to further validate the models, with the root mean square error of cross-validation found to be 0.101-0.212%. Furthermore, by adopting a qualitative approach with the spectral data and applying Principal Component Analysis, it was possible to discriminate between adulterated and pure samples.
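The calibration statistics quoted above (R2 and the root mean square errors of calibration, prediction and cross-validation) are simple functions of reference and predicted values. A minimal sketch with hypothetical melamine concentrations, not data from the study:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error, as used for RMSEC/RMSEP/RMSECV."""
    a, p = np.asarray(actual), np.asarray(predicted)
    return float(np.sqrt(np.mean((a - p) ** 2)))

def r_squared(actual, predicted):
    """Coefficient of determination R2 = 1 - SS_res / SS_tot."""
    a, p = np.asarray(actual), np.asarray(predicted)
    ss_res = np.sum((a - p) ** 2)
    ss_tot = np.sum((a - a.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical melamine levels (%) vs. model predictions
actual = [0.0, 0.5, 1.0, 2.0, 5.0]
pred = [0.1, 0.4, 1.1, 1.9, 5.2]
print(round(rmse(actual, pred), 3), round(r_squared(actual, pred), 3))
```

Whether a reported error is RMSEC, RMSEP or RMSECV depends only on which samples are compared: the calibration set, an independent prediction set, or held-out folds during cross-validation.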
Abstract:
BACKGROUND: Epidemiological and laboratory studies suggest that β-blockers may reduce cancer progression in various cancer sites. The aim of this study was to conduct the first epidemiological investigation of the effect of post-diagnostic β-blocker usage on colorectal cancer-specific mortality in a large population-based colorectal cancer patient cohort.
PATIENTS AND METHODS: A nested case-control analysis was conducted within a cohort of 4794 colorectal cancer patients diagnosed between 1998 and 2007. Patients were identified from the UK Clinical Practice Research Datalink and confirmed using cancer registry data. Patients with a colorectal cancer-specific death (data from the Office of National Statistics death registration system) were matched to five controls. Conditional logistic regression was applied to calculate odds ratios (OR) and 95% confidence intervals (95% CIs) according to β-blocker usage (data from GP-prescribing records).
RESULTS: Post-diagnostic β-blocker use was identified in 21.4% of 1559 colorectal cancer-specific deaths and 23.7% of their 7531 matched controls, with little evidence of an association (OR = 0.89; 95% CI 0.78-1.02). Similar associations were found when analysing drug frequency, β-blocker type or specific drugs such as propranolol. There was some evidence of a weak reduction in all-cause mortality in β-blocker users (adjusted OR = 0.88; 95% CI 0.77-1.00; P = 0.04) which was in part due to the marked effect of atenolol on cardiovascular mortality (adjusted OR = 0.62; 95% CI 0.40-0.97; P = 0.04).
CONCLUSIONS: In this novel, large UK population-based cohort of colorectal cancer patients, there was no evidence of an association between post-diagnostic β-blocker use and colorectal cancer-specific mortality.
CLINICAL TRIALS NUMBER: NCT00888797.
Abstract:
PURPOSE: To investigate whether statins used after colorectal cancer diagnosis reduce the risk of colorectal cancer-specific mortality in a cohort of patients with colorectal cancer.
PATIENTS AND METHODS: A cohort of 7,657 patients with newly diagnosed stage I to III colorectal cancer were identified from 1998 to 2009 from the National Cancer Data Repository (comprising English cancer registry data). This cohort was linked to the United Kingdom Clinical Practice Research Datalink, which provided prescription records, and to mortality data from the Office of National Statistics (up to 2012) to identify 1,647 colorectal cancer-specific deaths. Time-dependent Cox regression models were used to calculate hazard ratios (HR) for cancer-specific mortality and 95% CIs by postdiagnostic statin use and to adjust these HRs for potential confounders.
RESULTS: Overall, statin use after a diagnosis of colorectal cancer was associated with reduced colorectal cancer-specific mortality (fully adjusted HR, 0.71; 95% CI, 0.61 to 0.84). A dose-response association was apparent; for example, a more marked reduction was apparent in colorectal cancer patients using statins for more than 1 year (adjusted HR, 0.64; 95% CI, 0.53 to 0.79). A reduction in all-cause mortality was also apparent in statin users after colorectal cancer diagnosis (fully adjusted HR, 0.75; 95% CI, 0.66 to 0.84).
CONCLUSION: In this large population-based cohort, statin use after diagnosis of colorectal cancer was associated with improved survival.
Abstract:
A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be cast as a model selection problem in which a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations, using the inherent orthogonal properties of the least squares methods. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
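The forward-selection stage described above can be illustrated with a minimal greedy search that, at each step, adds the candidate regressor giving the largest drop in residual sum of squares (equivalently, the largest error reduction ratio). This is a plain least-squares sketch on synthetic data, not the paper's orthogonal two-stage algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten candidate regressors; the target depends only on columns 2 and 7.
X = rng.standard_normal((100, 10))
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + 0.1 * rng.standard_normal(100)

def forward_select(X, y, k):
    """Greedy forward selection: at each step add the candidate regressor
    that most reduces the residual sum of squares."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = float(np.sum((y - X[:, cols] @ coef) ** 2))
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

print(sorted(forward_select(X, y, 2)))
```

With this synthetic signal the two true regressors (columns 2 and 7) are recovered. The paper's orthogonal formulation avoids re-solving the full least-squares problem for every candidate, and its backward stage revisits earlier choices to escape the local minima this greedy pass can fall into.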
Abstract:
BACKGROUND: The wingless-type MMTV integration site (Wnt) signaling pathways are a group of signal transduction pathways. In the canonical Wnt pathway, Wnt ligands bind to low-density lipoprotein receptor-related protein 5 or 6 (LRP5 or LRP6), resulting in phosphorylation and activation of the receptor. We hypothesize that the canonical Wnt pathway plays a role in the retinal lesions of age-related macular degeneration (AMD), a leading cause of irreversible central visual loss in the elderly.
METHODS: We examined LRP6 phosphorylation and the Wnt signaling cascade in human retinal sections, and plasma kallistatin, an endogenous inhibitor of the Wnt pathway, in AMD patients and non-AMD subjects. We also used the Ccl2(-/-)/Cx3cr1(-/-)/rd8 and Ccl2(-/-)/Cx3cr1(gfp/gfp) mouse models with AMD-like retinal degeneration to further explore the involvement of Wnt signaling activation in the retinal lesions of those models and to preclinically evaluate the role of Wnt signaling suppression as a potential therapeutic option for AMD.
RESULTS: We found higher levels of LRP6 (a key Wnt signaling receptor) protein phosphorylation and transcripts of Wnt pathway-targeted genes, as well as higher beta-catenin protein, in AMD macula compared with controls. Kallistatin was decreased in the plasma of AMD patients. Retinal non-phosphorylated-β-catenin and phosphorylated-LRP6 were higher in Ccl2(-/-)/Cx3cr1(-/-)/rd8 mice than in wild type. Intravitreal administration of an anti-LRP6 antibody slowed the progression of retinal lesions in Ccl2(-/-)/Cx3cr1(-/-)/rd8 and Ccl2(-/-)/Cx3cr1(gfp/gfp) mice. Electroretinography of treated eyes exhibited larger amplitudes compared with controls in both mouse models. A2E, a retinoid byproduct associated with AMD, was lower in the treated eyes of Ccl2(-/-)/Cx3cr1(-/-)/rd8 mice. Anti-LRP6 also suppressed the expression of Tnf-α and Icam-1 in Ccl2(-/-)/Cx3cr1(-/-)/rd8 retinas.
CONCLUSIONS: Wnt signaling may be disturbed in AMD patients, which could contribute to the retinal inflammation and increased A2E levels found in AMD. Aberrant activation of canonical Wnt signaling might also contribute to the focal retinal degenerative lesions of mouse models with Ccl2 and Cx3cr1 deficiency, and intravitreal administration of anti-LRP6 antibody could be beneficial by deactivating the canonical Wnt pathway.
Abstract:
BACKGROUND: Lapatinib plus capecitabine emerged as an efficacious therapy in metastatic breast cancer (mBC). We aimed to identify germline single-nucleotide polymorphisms (SNPs) in genes involved in capecitabine catabolism and human epidermal receptor signaling that were associated with clinical outcome to assist in selecting patients likely to benefit from this combination.
PATIENTS AND METHODS: DNA was extracted from 240 of 399 patients enrolled in the EGF100151 clinical trial (NCT00078572; clinicaltrials.gov), and SNPs were successfully evaluated in 234 patients. The associations between SNPs and clinical outcome were analyzed using Fisher's exact test, Kaplan-Meier curves, log-rank tests, and likelihood ratio tests within logistic or Cox regression models, as appropriate.
RESULTS: There were significant interactions between CCND1 A870G and clinical outcome. Patients carrying the A-allele were more likely to benefit from lapatinib plus capecitabine versus capecitabine when compared with patients harboring G/G (P = 0.022, 0.024 and 0.04, respectively). In patients with the A-allele, the response rate (RR) was significantly higher with lapatinib plus capecitabine (35%) compared with capecitabine (11%; P = 0.001) but not between treatments in patients with G/G (RR = 24% and 32%, respectively; P = 0.85). Time to tumor progression (TTP) was longer in patients with the A-allele treated with lapatinib plus capecitabine compared with capecitabine (median TTP = 7.9 and 3.4 months; P < 0.001), but not in patients with G/G (median TTP = 6.1 and 6.6 months; P = 0.92).
CONCLUSION: Our findings suggest that CCND1 A870G may be useful in predicting clinical outcome in HER2-positive mBC patients treated with lapatinib plus capecitabine.
Abstract:
OBJECTIVE/BACKGROUND: Many associations between abdominal aortic aneurysm (AAA) and genetic polymorphisms have been reported. It is unclear which are genuine and which may be caused by type 1 errors, biases, and flexible study design. The objectives of the study were to identify associations supported by current evidence and to investigate the effect of study design on reporting associations.
METHODS: Data sources were MEDLINE, Embase, and Web of Science. Reports were dual-reviewed for relevance and inclusion against predefined criteria (studies of genetic polymorphisms and AAA risk). Study characteristics and data were extracted using an agreed tool and reports assessed for quality. Heterogeneity was assessed using I², and fixed- and random-effects meta-analyses were conducted for variants that were reported at least twice, if any study had reported an association. Strength of evidence was assessed using a standard guideline.
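The fixed-effect pooling and I² heterogeneity statistic used in such meta-analyses follow directly from inverse-variance weights. A minimal sketch; the input estimates below are hypothetical log odds ratios, not data from this review:

```python
import math

def fixed_effect_meta(estimates, variances):
    """Inverse-variance fixed-effect pooling with Cochran's Q and I²."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * ti for wi, ti in zip(w, estimates)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    q = sum(wi * (ti - pooled) ** 2 for wi, ti in zip(w, estimates))
    df = len(estimates) - 1
    # I²: share of variability beyond chance (0 when Q <= df)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, se, i2

# Three hypothetical log odds ratios with their variances
pooled, se, i2 = fixed_effect_meta([0.5, -0.1, 0.4], [0.02, 0.02, 0.02])
print(round(pooled, 3), round(i2, 1))
```

A random-effects analysis additionally inflates each variance by a between-study component estimated from Q before re-weighting, which is why the two models diverge exactly when I² is high.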
RESULTS: Searches identified 467 unique articles, of which 97 were included. Of 97 studies, 63 reported at least one association. Of 92 studies that conducted multiple tests, only 27% corrected their analyses. In total, 263 genes were investigated, and associations were reported in polymorphisms in 87 genes. Associations in CDKN2BAS, SORT1, LRP1, IL6R, MMP3, AGTR1, ACE, and APOA1 were supported by meta-analyses.
CONCLUSION: Uncorrected multiple testing and flexible study design (particularly testing many inheritance models and subgroups, and failure to check for Hardy-Weinberg equilibrium) contributed to apparently false associations being reported. Heterogeneity, possibly due to the case mix, geographical, temporal, and environmental variation between different studies, was evident. Polymorphisms in nine genes had strong or moderate support on the basis of the literature at this time. Suggestions are made for improving AAA genetics study design and conduct.
Abstract:
This thesis describes a framework, based on the multi-layer paradigm, for analysing, modelling, designing and optimizing communication systems. It explores a new perspective on the physical layer that emerges from the relationships between information theory, estimation theory, probabilistic methods, communication theory and coding. This framework leads to design methods for the next generation of high-throughput communication systems. In addition, the thesis explores several access-layer techniques, based on the delay-throughput trade-off, for the design of delay-tolerant wireless networks. Fundamental results on the interplay between information theory and estimation theory lead to the proposal of an alternative paradigm for the analysis, design and optimization of communication systems. Building on studies of the relationship between mutual information and MMSE, the approach described in the thesis overcomes, in a novel way, the difficulties inherent in optimizing reliable information transmission rates in communication systems, and enables the exploration of optimal power allocation and optimal precoding structures for different channel models: wired, wireless and optical. The thesis also addresses the problem of delay, in an attempt to answer questions raised by the enormous demand for high throughput in communication systems. This is done by proposing new models for systems with network coding at the layers above the physical layer. In particular, it addresses the use of network coding for time-varying, delay-sensitive channels. This is demonstrated through the proposal of a new model and adaptive scheme, whose algorithms were applied to wireless systems with complex fading, of which satellite communication systems are an example.
The thesis further addresses the use of network coding in demanding handover scenarios. This is done by proposing new IEEE 802.11 Wi-Fi MAC transmission models, which are compared with network-coded transmission and shown to enable seamless handover. Through analysis and simulation-backed proposals, the thesis thus argues that the design of communication systems should consider transmission and coding strategies that are not only close to channel capacity but also delay tolerant, and that such strategies must be designed with the channel characteristics and the physical layer in mind.
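The mutual information/MMSE relationship underpinning the physical-layer results above is, for the scalar Gaussian channel, the classical I-MMSE formula of Guo, Shamai and Verdú (stated here as background, with mutual information in nats and γ denoting the signal-to-noise ratio):

```latex
\frac{\mathrm{d}}{\mathrm{d}\gamma}\, I\bigl(X;\sqrt{\gamma}\,X+N\bigr)
  \;=\; \tfrac{1}{2}\,\mathrm{mmse}(\gamma),
```

where N is standard Gaussian noise and mmse(γ) is the minimum mean-square error of estimating X from the channel output. Because the derivative of the rate is an estimation-theoretic quantity, power allocation and precoder optimization problems can be rewritten in terms of MMSE, which is the lever the thesis exploits.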
Abstract:
This paper describes the various geofencing components and existing models in terms of their information security control attribute profiles. The profiles dictate the security attributes that should accompany each geofencing model used for Wi-Fi network security control in an organization, thus minimizing the likelihood of malfunctioning security controls. Although it is up to each organization to investigate the best way of implementing information security for itself, this paper reviews models that have been used in the past and presents those commonly used to implement information security controls in organizations. Our findings highlight the strengths and weaknesses of the various models and present what our experiment and prototype consider a robust geofencing security model for securing Wi-Fi networks.