998 results for phasor data concentrator (PDC)
Abstract:
Background: Findings from the phase 3 FLEX study showed that the addition of cetuximab to cisplatin and vinorelbine significantly improved overall survival, compared with cisplatin and vinorelbine alone, in the first-line treatment of EGFR-expressing, advanced non-small-cell lung cancer (NSCLC). We investigated whether candidate biomarkers were predictive for the efficacy of chemotherapy plus cetuximab in this setting. Methods: Genomic DNA extracted from formalin-fixed paraffin-embedded (FFPE) tumour tissue of patients enrolled in the FLEX study was screened for KRAS codon 12 and 13 and EGFR kinase domain mutations with PCR-based assays. In FFPE tissue sections, EGFR copy number was assessed by dual-colour fluorescence in-situ hybridisation and PTEN expression by immunohistochemistry. Treatment outcome was investigated according to biomarker status in all available samples from patients in the intention-to-treat population. The primary endpoint in the FLEX study was overall survival. The FLEX study, which is ongoing but not recruiting participants, is registered with ClinicalTrials.gov, number NCT00148798. Findings: KRAS mutations were detected in 75 of 395 (19%) tumours and activating EGFR mutations in 64 of 436 (15%). EGFR copy number was scored as increased in 102 of 279 (37%) tumours and PTEN expression as negative in 107 of 303 (35%). Comparisons of treatment outcome between the two groups (chemotherapy plus cetuximab vs chemotherapy alone) according to biomarker status provided no indication that these biomarkers were of predictive value. 
Activating EGFR mutations were identified as indicators of good prognosis, with patients in both treatment groups whose tumours carried such mutations having improved survival compared with those whose tumours did not (chemotherapy plus cetuximab: median 17·5 months [95% CI 11·7-23·4] vs 8·5 months [7·1-10·8], hazard ratio [HR] 0·52 [0·32-0·84], p=0·0063; chemotherapy alone: 23·8 months [15·2-not reached] vs 10·0 months [8·7-11·0], HR 0·35 [0·21-0·59], p<0·0001). Expression of PTEN seemed to be a potential indicator of good prognosis, with patients whose tumours expressed PTEN having improved survival compared with those whose tumours did not, although this finding was not significant (chemotherapy plus cetuximab: median 11·4 months [8·6-13·6] vs 6·8 months [5·9-12·7], HR 0·80 [0·55-1·16], p=0·24; chemotherapy alone: 11·0 months [9·2-12·6] vs 9·3 months [7·6-11·9], HR 0·77 [0·54-1·10], p=0·16). Interpretation: The efficacy of chemotherapy plus cetuximab in the first-line treatment of advanced NSCLC seems to be independent of each of the biomarkers assessed. Funding: Merck KGaA. © 2011 Elsevier Ltd.
Abstract:
Within the QUT Business School (QUTBS), researchers across economics, finance and accounting depend on data-driven research. They analyse historic and global financial data across a range of instruments to understand the relationships and effects between them as they respond to news and events in their region. Scholars and Higher Degree Research (HDR) students in turn seek out universities that offer these particular datasets to further their research. This involves downloading and manipulating large datasets, often with a focus on depth of detail, frequency and long-tail historical data. Because stock exchange data has potential commercial value, licences for access tend to be very expensive. This poster reports the following findings:
• The library has a part to play in freeing researchers from the burden of negotiating subscriptions, fundraising and managing the legal requirements around licensing and access.
• The role of the library is to communicate the nature and potential of these complex resources across the university, to disciplines as diverse as Mathematics, Health, Information Systems and Creative Industries.
• The initiative has demonstrated clear, concrete support for research by QUT Library and built relationships into faculty. It has made data available to all researchers and attracted new HDR students. The aim is to reach the threshold of research outputs needed to submit into FOR Code 1502 (Banking, Finance and Investment) for ERA 2015.
• It is difficult to identify which subset of a dataset will be obtained, given somewhat vague price tiers.
• The integrity of the data is variable, as it is limited by the way it is collected; this occasionally raises issues for researchers (Cook, Campbell, & Kelly, 2012).
• Improved library understanding of the content of our products and the nature of finance research is a necessary part of the service.
Abstract:
The FLEX study demonstrated that the addition of cetuximab to chemotherapy significantly improved overall survival in the first-line treatment of patients with advanced non-small cell lung cancer (NSCLC). In the FLEX intention-to-treat (ITT) population, we investigated the prognostic significance of particular baseline characteristics. Individual patient data from the treatment arms of the ITT population of the FLEX study were combined. Univariable and multivariable Cox regression models were used to investigate variables with potential prognostic value. The ITT population comprised 1125 patients. In the univariable analysis, longer median survival times were apparent for females compared with males (12.7 vs 9.3 months); patients with an Eastern Cooperative Oncology Group performance status (ECOG PS) of 0 compared with 1 compared with 2 (13.5 vs 10.6 vs 5.9 months); never smokers compared with former smokers compared with current smokers (14.6 vs 11.1 vs 9.0 months); Asians compared with Caucasians (19.5 vs 9.6 months); patients with adenocarcinoma compared with squamous cell carcinoma (12.4 vs 9.3 months); and those with metastases to one site compared with two sites compared with three or more sites (12.4 vs 9.8 vs 6.4 months). Age (<65 vs ≥65 years), tumor stage (IIIB with pleural effusion vs IV) and percentage of tumor cells expressing EGFR (<40% vs ≥40%) were not identified as possible prognostic factors in relation to survival time. In multivariable analysis, a stepwise selection procedure identified age (<65 vs ≥65 years), gender, ECOG PS, smoking status, region, tumor histology, and number of organs involved as independent factors of prognostic value. In summary, in patients with advanced NSCLC enrolled in the FLEX study, and consistent with previous analyses, particular patient and disease characteristics at baseline were shown to be independent factors of prognostic value. The FLEX study is registered with ClinicalTrials.gov, number NCT00148798.
© 2012 Elsevier Ireland Ltd.
Abstract:
Background: Findings from the phase 3 First-Line ErbituX in lung cancer (FLEX) study showed that the addition of cetuximab to first-line chemotherapy significantly improved overall survival compared with chemotherapy alone (hazard ratio [HR] 0·871, 95% CI 0·762-0·996; p=0·044) in patients with advanced non-small-cell lung cancer (NSCLC). To define patients benefiting most from cetuximab, we studied the association of tumour EGFR expression level with clinical outcome in FLEX study patients. Methods: We used prospectively collected tumour EGFR expression data to generate an immunohistochemistry score for FLEX study patients on a continuous scale of 0-300. We used response data to select an outcome-based discriminatory threshold immunohistochemistry score for EGFR expression of 200. Treatment outcome was analysed in patients with low (immunohistochemistry score <200) and high (≥200) tumour EGFR expression. The primary endpoint in the FLEX study was overall survival. We analysed patients from the FLEX intention-to-treat (ITT) population. The FLEX study is registered with ClinicalTrials.gov, number NCT00148798. Findings: Tumour EGFR immunohistochemistry data were available for 1121 of 1125 (99·6%) patients from the FLEX study ITT population. High EGFR expression was scored for 345 (31%) evaluable patients and low for 776 (69%) patients. For patients in the high EGFR expression group, overall survival was longer in the chemotherapy plus cetuximab group than in the chemotherapy alone group (median 12·0 months [95% CI 10·2-15·2] vs 9·6 months [7·6-10·6]; HR 0·73, 0·58-0·93; p=0·011), with no meaningful increase in side-effects. We recorded no corresponding survival benefit for patients in the low EGFR expression group (median 9·8 months [8·9-12·2] vs 10·3 months [9·2-11·5]; HR 0·99, 0·84-1·16; p=0·88). 
A treatment interaction test assessing the difference in the HRs for overall survival between the EGFR expression groups suggested a predictive value for EGFR expression (p=0·044). Interpretation: High EGFR expression is a tumour biomarker that can predict survival benefit from the addition of cetuximab to first-line chemotherapy in patients with advanced NSCLC. Assessment of EGFR expression could offer a personalised treatment approach in this setting. Funding: Merck KGaA. © 2012 Elsevier Ltd.
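The 0-300 immunohistochemistry scale described above is consistent with a standard H-score, in which the percentage of tumour cells staining at each intensity level (1+, 2+, 3+) is weighted by intensity and summed. The abstract does not spell out the FLEX scoring rules, so the following is a hedged sketch of that general scheme together with the reported outcome-based threshold of 200:

```python
def ihc_hscore(pct_weak, pct_moderate, pct_strong):
    """H-score on a 0-300 scale: intensity-weighted sum of the
    percentages of cells staining at each intensity level (1+, 2+, 3+)."""
    if not 0 <= pct_weak + pct_moderate + pct_strong <= 100:
        raise ValueError("staining percentages must sum to at most 100")
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

def egfr_expression_group(score, threshold=200):
    """Classify a tumour as high or low EGFR expression using the
    outcome-based discriminatory threshold of 200 from the abstract."""
    return "high" if score >= threshold else "low"

# Hypothetical tumour: 10% weak, 20% moderate, 60% strong staining
score = ihc_hscore(10, 20, 60)        # 1*10 + 2*20 + 3*60 = 230
group = egfr_expression_group(score)  # "high"
```

Under this scheme a tumour with 100% strong staining scores the maximum of 300, and any score of 200 or more falls in the high-expression group analysed above.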
Abstract:
Background The expansion of cell colonies is driven by a delicate balance of several mechanisms including cell motility, cell-to-cell adhesion and cell proliferation. New approaches that can be used to independently identify and quantify the role of each mechanism will help us understand how each mechanism contributes to the expansion process. Standard mathematical modelling approaches to describe such cell colony expansion typically neglect cell-to-cell adhesion, despite the fact that cell-to-cell adhesion is thought to play an important role. Results We use a combined experimental and mathematical modelling approach to determine the cell diffusivity, D, cell-to-cell adhesion strength, q, and cell proliferation rate, λ, in an expanding colony of MM127 melanoma cells. Using a circular barrier assay, we extract several types of experimental data and use a mathematical model to independently estimate D, q and λ. In our first set of experiments, we suppress cell proliferation and analyse three different types of data to estimate D and q. We find that standard types of data, such as the area enclosed by the leading edge of the expanding colony and more detailed cell density profiles throughout the expanding colony, do not provide sufficient information to uniquely identify D and q. We find that additional data relating to the degree of cell-to-cell clustering is required to provide independent estimates of q, and in turn D. In our second set of experiments, where proliferation is not suppressed, we use data describing temporal changes in cell density to determine the cell proliferation rate. In summary, we find that our experiments are best described using the range D = 161-243 µm² hour⁻¹, q = 0.3-0.5 (low to moderate strength) and λ = 0.0305-0.0398 hour⁻¹, and with these parameters we can accurately predict the temporal variations in the spatial extent and cell density profile throughout the expanding melanoma cell colony.
Conclusions Our systematic approach to identify the cell diffusivity, cell-to-cell adhesion strength and cell proliferation rate highlights the importance of integrating multiple types of data to accurately quantify the factors influencing the spatial expansion of melanoma cell colonies.
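The interplay of the three estimated parameters can be illustrated with a continuum caricature of the expanding colony. The study itself uses a discrete model with an explicit adhesion mechanism; as a simplifying assumption, the sketch below treats adhesion as reducing the effective diffusivity to D(1 - q) in a one-dimensional Fisher-KPP-style equation, with parameter values taken from the ranges reported in the abstract:

```python
import numpy as np

# Parameter values from the ranges reported in the abstract (um, hours)
D, q, lam = 200.0, 0.4, 0.035   # diffusivity, adhesion strength, proliferation rate

# Simplifying assumption (not the study's discrete model): adhesion
# reduces the effective diffusivity of the population.
D_eff = D * (1.0 - q)

L, dx, dt, T = 2000.0, 10.0, 0.1, 24.0   # half-width, grid step, time step, duration
x = np.arange(-L, L + dx, dx)
u = (np.abs(x) < 500).astype(float)       # initial colony: confluent disc, radius 500 um

# Explicit finite-difference integration of u_t = D_eff u_xx + lam u(1 - u);
# dt satisfies the stability bound dt < dx^2 / (2 D_eff).
for _ in range(int(T / dt)):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0                 # crude no-flux boundaries
    u = u + dt * (D_eff * lap + lam * u * (1.0 - u))
```

After 24 simulated hours the density front has advanced beyond the initial 500 µm radius while remaining bounded by the carrying capacity, which is the qualitative behaviour the barrier assay measures.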
Abstract:
The use of hedonic models to estimate the effects of various factors on house prices is well established. This paper examines a number of international hedonic house price models that seek to quantify the effect of infrastructure charges on new house prices. This work is an important factor in the housing affordability debate, with many governments in high-growth areas operating user-pays infrastructure charging policies in tandem with housing affordability objectives, with no empirical evidence on the impact of one on the other. This research finds there is little consistency between existing models and the data sets utilised. Model specification appears dependent upon data availability rather than sound theoretical grounding, which may lead to a lack of external validity.
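In a hedonic specification of the kind surveyed, the (log) sale price is regressed on dwelling attributes plus the infrastructure charge levied on the lot, so the charge's coefficient estimates its pass-through into new house prices. A minimal ordinary-least-squares sketch on synthetic data (every variable name, coefficient and figure below is illustrative, not drawn from any model the paper reviews):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic attributes: floor area (m^2), bedrooms, infrastructure charge ($'000)
area = rng.uniform(100, 300, n)
beds = rng.integers(2, 6, n)
charge = rng.uniform(10, 40, n)

# Illustrative data-generating process: charges partly capitalised into price
log_price = (11.0 + 0.004 * area + 0.05 * beds + 0.006 * charge
             + rng.normal(0, 0.05, n))

# Hedonic OLS: log(price) ~ const + area + beds + charge
X = np.column_stack([np.ones(n), area, beds, charge])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
# beta[3] estimates the proportional price effect of a $1,000 charge increase
```

The external-validity concern raised above corresponds, in this framing, to which columns enter `X`: if the attribute set is dictated by whatever data happens to be available, the estimated charge coefficient may not transfer across markets.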
Abstract:
Spatial organisation of proteins according to their function plays an important role in the specificity of their molecular interactions. Emerging proteomics methods seek to assign proteins to sub-cellular locations by partial separation of organelles and computational analysis of protein abundance distributions among partially separated fractions. Such methods permit simultaneous analysis of unpurified organelles and promise proteome-wide localisation in scenarios wherein perturbation may prompt dynamic re-distribution. A possible shortcoming is the difficulty of resolving organelles that display similar behaviour during a protocol designed to provide only partial enrichment. We employ the Localisation of Organelle Proteins by Isotope Tagging (LOPIT) organelle proteomics platform to demonstrate that combining information from distinct separations of the same material can improve organelle resolution and assignment of proteins to sub-cellular locations. Two previously published experiments, whose distinct gradients are alone unable to fully resolve six known protein-organelle groupings, are subjected to a rigorous analysis to assess protein-organelle association via a contemporary pattern recognition algorithm. Upon straightforward combination of single-gradient data, we observe significant improvement in protein-organelle association via both a non-linear support vector machine algorithm and partial least-squares discriminant analysis. The outcome yields suggestions for further improvements to present organelle proteomics platforms, and a robust analytical methodology via which to associate proteins with sub-cellular organelles.
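The core idea of combining separations is that each protein's feature vector becomes the concatenation of its abundance profiles from both gradients, so organelles that co-migrate in one gradient can still be resolved by the other. The study uses a support vector machine and PLS-DA; as a dependency-free stand-in, the sketch below uses a nearest-centroid classifier on synthetic profiles (all shapes and numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def profiles(center, n, frac):
    """Synthetic per-protein abundance distributions across `frac` gradient
    fractions, peaking near `center` (a crude stand-in for one separation)."""
    x = np.arange(frac)
    p = np.exp(-0.5 * ((x - center) / 1.5) ** 2) + rng.normal(0, 0.05, (n, frac))
    return p / p.sum(axis=1, keepdims=True)

# Two organelles that co-migrate in gradient 1 but separate in gradient 2
g1_a, g1_b = profiles(3.0, 50, 8), profiles(3.3, 50, 8)   # gradient 1: overlapping
g2_a, g2_b = profiles(2.0, 50, 8), profiles(6.0, 50, 8)   # gradient 2: resolved

# Combine: concatenate each protein's profiles from both separations
X = np.vstack([np.hstack([g1_a, g2_a]), np.hstack([g1_b, g2_b])])
y = np.array([0] * 50 + [1] * 50)

# Nearest-centroid assignment trained on marker proteins (first 20 per class)
train = np.r_[0:20, 50:70]
cents = np.stack([X[train][y[train] == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((X[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
accuracy = float((pred == y).mean())
```

Restricting `X` to only the gradient-1 columns would collapse the two centroids together, which is the single-gradient resolution failure the combined analysis avoids.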
Abstract:
This thesis describes the development of a robust and novel prototype to address the data quality problems that relate to the dimension of outlier data. It thoroughly investigates the associated problems with regard to detecting, assessing and determining the severity of the problem of outlier data, and proposes granule-mining-based alternative techniques to significantly improve the effectiveness of mining and assessing outlier data.
Abstract:
Acoustic sensing is a promising approach to scaling faunal biodiversity monitoring. Scaling the analysis of audio collected by acoustic sensors is a big data problem. Standard approaches for dealing with big acoustic data include automated recognition and crowd-based analysis. Automatic methods are fast at processing but hard to rigorously design, whilst manual methods are accurate but slow at processing. In particular, manual methods of acoustic data analysis are constrained by a 1:1 time relationship between the data and its analysts. This constraint is the inherent need to listen to the audio data. This paper demonstrates how the efficiency of crowdsourced sound analysis can be increased by an order of magnitude through the visual inspection of audio visualized as spectrograms. Experimental data suggest that an analysis speedup of 12× is obtainable for suitable types of acoustic analysis, given that only spectrograms are shown.
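The spectrogram image that an analyst inspects is the magnitude of a short-time Fourier transform: the recording is cut into overlapping windowed frames and each frame is mapped to a column of frequency magnitudes. A minimal numpy sketch (window and hop sizes are illustrative choices, not taken from the paper):

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.
    Returns a (frequency, time) array; rendered as an image, this is
    what an analyst visually scans instead of listening to the audio."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# Example: a 1 kHz tone sampled at 8 kHz for one second
fs = 8000
t = np.arange(fs) / fs
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))

# The brightest row should correspond to the 1 kHz tone
peak_bin = int(spec.mean(axis=1).argmax())
peak_hz = peak_bin * fs / 256
```

Because the whole recording collapses into one image that can be scanned in seconds, visual inspection breaks the 1:1 listening-time constraint described above.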
Abstract:
Between 2001 and 2005, the US airline industry faced financial turmoil while the European airline industry entered a period of substantive deregulation. Consequently, this opened up opportunities for low-cost carriers to become more competitive in the market. To assess airline performance and identify the sources of efficiency in the immediate aftermath of these events, we employ a bootstrap data envelopment analysis truncated regression approach. The results suggest that at the time the mainstream airlines needed to significantly reorganize and rescale their operations to remain competitive. In the second-stage analysis, the results indicate that private ownership, status as a low-cost carrier, and improvements in weight load contributed to better organizational efficiency.
Abstract:
Digital human modeling (DHM) systems have undergone significant development in recent years. They have steadily grown in importance in the fields of ergonomic workplace design, product development, product usability, ergonomic research, ergonomic education, audiovisual marketing and the entertainment industry. They help to design ergonomic products as well as healthy and safe socio-technical work systems. In the domain of scientific DHM systems, no industry-specific standard interfaces are defined which could facilitate the exchange of 3D solid body data, anthropometric data or motion data. The focus of this article is to provide an overview of requirements for reliable data exchange between different DHM systems in order to identify suitable file formats. Examples from the literature are discussed in detail. Methods: As a first step, a literature review is conducted on existing studies and file formats for exchanging data between different DHM systems. The file formats found can be structured into different categories: static 3D solid body data exchange, anthropometric data exchange, motion data exchange and comprehensive data exchange. Each file format is discussed and its advantages as well as disadvantages for the DHM context are pointed out. Case studies are furthermore presented, which show first approaches to exchanging data between DHM systems. Lessons learnt are briefly summarized. Results: A selection of suitable file formats for data exchange between DHM systems is determined from the literature review.
Abstract:
Exposure control or case-control methodologies are common techniques for estimating crash risks; however, they require either observational data on control cases or exogenous exposure data, such as vehicle-kilometres travelled. This study proposes an alternative methodology for estimating the crash risk of road user groups, whilst controlling for exposure under a variety of roadway, traffic and environmental factors, by using readily available police-reported crash data. In particular, the proposed method employs a combination of a log-linear model and the quasi-induced exposure technique to identify significant interactions among a range of roadway, environmental and traffic conditions and estimate the associated crash risks. The proposed methodology is illustrated using a set of police-reported crash data from January 2004 to June 2009 on roadways in Queensland, Australia. Exposure-controlled crash risks of motorcyclists involved in multi-vehicle crashes at intersections were estimated under various combinations of variables such as posted speed limit, intersection control type, intersection configuration, and lighting condition. Results show that the crash risk of motorcycles at three-legged intersections is high if the posted speed limits along the approaches are greater than 60 km/h. The crash risk at three-legged intersections is also high when they are unsignalized. Dark lighting conditions appear to increase the crash risk of motorcycles at signalized intersections, but the problem of night-time conspicuity of motorcyclists at intersections is lessened on approaches with lower speed limits. This study demonstrates that this combined methodology is a promising tool for gaining new insights into the crash risks of road user groups, and is transferable to other road users.
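Quasi-induced exposure treats not-at-fault drivers in two-vehicle crashes as a proxy sample of who is on the road, so a group's relative crash risk is its share of at-fault involvements divided by its share of not-at-fault involvements. A minimal sketch of that ratio, using hypothetical counts rather than the Queensland data:

```python
def relative_risk(at_fault, not_at_fault):
    """Quasi-induced exposure relative crash involvement ratio:
    (group share of at-fault drivers) / (group share of not-at-fault drivers).
    A ratio above 1 indicates over-involvement relative to exposure."""
    total_af = sum(at_fault.values())
    total_naf = sum(not_at_fault.values())
    return {g: (at_fault[g] / total_af) / (not_at_fault[g] / total_naf)
            for g in at_fault}

# Hypothetical two-vehicle intersection crash counts (illustrative only)
at_fault = {"motorcycle": 120, "car": 880}
not_at_fault = {"motorcycle": 60, "car": 940}
rr = relative_risk(at_fault, not_at_fault)
# rr["motorcycle"] = (120/1000) / (60/1000) = 2.0
```

In the study's framework these counts would additionally be cross-classified by speed limit, control type, configuration and lighting, with a log-linear model fitted to the resulting contingency table to find significant interactions.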
Abstract:
As support grows for greater access to information and data held by governments, so does awareness of the need for appropriate policy, technical and legal frameworks to achieve the desired economic and societal outcomes. Since the late 2000s numerous international organizations, inter-governmental bodies and governments have issued open government data policies, which set out key principles underpinning access to, and the release and reuse of data. These policies reiterate the value of government data and establish the default position that it should be openly accessible to the public under transparent and non-discriminatory conditions, which are conducive to innovative reuse of the data. A key principle stated in open government data policies is that legal rights in government information must be exercised in a manner that is consistent with and supports the open accessibility and reusability of the data. In particular, where government information and data is protected by copyright, access should be provided under licensing terms which clearly permit its reuse and dissemination. This principle has been further developed in the policies issued by Australian Governments into a specific requirement that Government agencies are to apply the Creative Commons Attribution licence (CC BY) as the default licensing position when releasing government information and data. A wide-ranging survey of the practices of Australian Government agencies in managing their information and data, commissioned by the Office of the Australian Information Commissioner in 2012, provides valuable insights into progress towards the achievement of open government policy objectives and the adoption of open licensing practices. The survey results indicate that Australian Government agencies are embracing open access and a proactive disclosure culture and that open licensing under Creative Commons licences is increasingly prevalent. 
However, the finding that ‘[t]he default position of open access licensing is not clearly or robustly stated, nor properly reflected in the practice of Government agencies’ points to the need to further develop the policy framework and the principles governing information access and reuse, and to provide practical guidance tools on open licensing if the broadest range of government information and data is to be made available for innovative reuse.
Abstract:
This study considered the problem of predicting survival, based on three alternative models: a single Weibull, a mixture of Weibulls and a cure model. Instead of the common procedure of choosing a single “best” model, where “best” is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results of this study indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model was revealed as “best”, suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise on goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival. Keywords: Bayesian modelling; Bayesian model averaging; Cure model; Markov Chain Monte Carlo; Mixture model; Survival analysis; Weibull distribution
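When BIC is the goodness-of-fit measure, approximate posterior model probabilities follow from the standard relation w_k ∝ exp(-ΔBIC_k / 2), and a BMA prediction is the weight-averaged prediction across models. A minimal sketch under equal prior model probabilities, with hypothetical BIC values and per-model survival estimates standing in for the lymphoma results:

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values:
    w_k proportional to exp(-(BIC_k - min BIC) / 2), assuming equal
    prior probabilities across the candidate models."""
    best = min(bics)
    raw = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical BICs for the single Weibull, Weibull mixture and cure models
bics = [1012.4, 1009.1, 1010.0]
w = bma_weights(bics)   # the mixture (lowest BIC) receives the largest weight

# Model-averaged 5-year survival prediction (hypothetical per-model estimates)
s5 = [0.42, 0.47, 0.45]
s5_bma = sum(wk * sk for wk, sk in zip(w, s5))
```

With BIC differences this small, no single weight dominates and the averaged prediction sits between the individual models, which is exactly the small-sample regime in which the study argues BMA is preferable to picking one “best” model.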