987 results for resistant data


Relevance: 20.00%

Abstract:

In response to the need to leverage private finance and to the lack of competition in parts of the Australian public sector infrastructure market, especially the very large economic infrastructure sector procured using Public Private Partnerships, the Australian Federal Government has demonstrated its desire to attract new sources of in-bound foreign direct investment (FDI). This paper reports on progress towards an investigation into the determinants of multinational contractors' willingness to bid for Australian public sector major infrastructure projects. The research deploys Dunning's eclectic theory for the first time in the context of in-bound FDI by multinational contractors into Australia. Elsewhere, the authors have developed Dunning's principal hypothesis to suit the context of this research and to address a weakness in that hypothesis: it takes a nominal approach to the factors in Dunning's eclectic framework and fails to speak to their relative explanatory power. In this paper, a first-stage test of the authors' development of Dunning's hypothesis is presented by way of an initial review of secondary data on the selected sector (roads and bridges) in Australia (as the host location) and on four selected home countries (China, Japan, Spain, and the US). In doing so, the next stage of the research method, concerning sampling and case studies, is also further developed and described. In conclusion, the extent to which the initial review of secondary data suggests the relative importance of the factors in the eclectic framework is considered. More robust conclusions are expected from the planned future stages of the research, including primary data from the case studies and a global survey of the world's largest contractors, which is briefly previewed. Finally, beyond the theoretical contributions expected from the overall approach taken to developing and testing Dunning's framework, other expected contributions concerning research method and practical implications are noted.

Relevance: 20.00%

Abstract:

A rule-based approach is presented for classifying previously identified medical concepts in clinical free text into assertion categories. There are six assertion categories for the task: Present, Absent, Possible, Conditional, Hypothetical, and Not associated with the patient. The assertion classification algorithms are largely based on extending the popular NegEx and ConText algorithms. In addition, the clinical terminology SNOMED CT and other publicly available dictionaries were used to classify assertions that did not fit the NegEx/ConText model. The data for this task include discharge summaries from Partners HealthCare and from Beth Israel Deaconess Medical Center, as well as discharge summaries and progress notes from the University of Pittsburgh Medical Center. The set consists of 349 discharge reports, each with paired ground-truth concept and assertion files, for system development, and 477 reports for evaluation. The system's performance on the evaluation data set was 0.83 for recall, 0.83 for precision and 0.83 for F1-measure. Although the rule-based system shows promise, further improvements could be made by incorporating machine learning approaches.
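
As a rough illustration of the rule-based mechanism, the sketch below applies NegEx/ConText-style trigger phrases within a token window to assign an assertion category to a pre-identified concept. The trigger lists, window size and function names are illustrative assumptions, not the system's actual rules or lexicons.

```python
import re

# Illustrative trigger phrases; real NegEx/ConText lexicons are far larger.
TRIGGERS = {
    "Absent":       ["no evidence of", "denies", "without", "negative for"],
    "Possible":     ["possible", "suspicion of", "may represent"],
    "Conditional":  ["if she develops", "in case of"],
    "Hypothetical": ["should he develop", "return if"],
    "Not associated with the patient": ["family history of", "mother had"],
}
WINDOW = 6  # number of tokens a trigger's scope extends forward (assumed value)

def classify_assertion(sentence: str, concept_start: int) -> str:
    """Assign an assertion category to a concept beginning at character offset
    concept_start within a sentence, NegEx/ConText style: the first trigger
    whose forward scope covers the concept determines the category."""
    text = sentence.lower()
    concept_tok = len(text[:concept_start].split())
    for category, phrases in TRIGGERS.items():
        for phrase in phrases:
            for m in re.finditer(r"\b" + re.escape(phrase) + r"\b", text):
                trigger_tok = len(text[:m.start()].split())
                if 0 <= concept_tok - trigger_tok <= WINDOW:
                    return category
    return "Present"  # default category when no trigger is in scope

sentence = "The patient denies chest pain but reports a family history of diabetes."
print(classify_assertion(sentence, sentence.index("chest pain")))  # -> Absent
print(classify_assertion(sentence, sentence.index("diabetes")))    # -> Not associated with the patient
```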

Relevance: 20.00%

Abstract:

This poster covers projects funded by the Australian National Data Service (ANDS). The specific projects funded were: a) the Greenhouse Gas Emissions (N2O) Project with Prof. Peter Grace from QUT's Institute of Sustainable Resources; b) the Q150 Project for the management of multimedia data collected at festival events, with Prof. Phil Graham from QUT's Institute of Creative Industries; and c) biodiversity environmental sensing with Prof. Paul Roe from the QUT Microsoft eResearch Centre. For these projects, the Eclipse Rich Client Platform (Eclipse RCP) was chosen as an appropriate software development framework within which to develop the respective software. The poster presents a brief overview of the projects' requirements and of the project team's experiences in using Eclipse RCP, and reports on the advantages and disadvantages of using Eclipse and the team's perspective on Eclipse as an integrated tool for supporting future data management requirements.

Relevance: 20.00%

Abstract:

Prostate cancer (CaP) is the most commonly diagnosed cancer in males in Australia, North America, and Europe. If found early and locally confined, CaP can be treated with radical prostatectomy or radiation therapy; however, 25-40% of patients will relapse and progress to advanced disease. The most common therapy in these cases is androgen deprivation therapy (ADT), which suppresses androgen production by the testis. Loss of the testicular androgen supply causes cells of the prostate to undergo apoptosis. In some cases, however, the regression initially seen with ADT eventually gives way to the growth of a population of cancerous cells that no longer require testicular androgens. This phenotype is essentially fatal and is termed castrate-resistant prostate cancer (CRPC). In addition to eventual treatment failure, many undesirable side effects accompany ADT, including development of a metabolic syndrome, defined by the U.S. National Library of Medicine as "a combination of medical disorders that increase the risk of developing cardiovascular disease and diabetes." This project focuses on the effect of ADT-induced hyperinsulinemia, mimicked by treating androgen receptor (AR) positive CaP cells with insulin in a serum (hormone) deprived environment. While this side effect is not widely explored, this thesis demonstrates for the first time that insulin upregulates pathways important to CaP progression. Our group has previously shown that during CaP progression the enzymes necessary for de novo steroidogenesis are upregulated in the LNCaP xenograft model, that total steroid levels in tumours are increased compared with pre-castrate levels, and that de novo steroidogenesis from radio-labelled acetate occurs. Because of the dependence of CaP cells on the AR for survival, we and other groups believe that CaP cells carry out de novo steroidogenesis to survive in androgen-deprived conditions. Because (a) men on ADT often develop metabolic syndrome, (b) men with lifestyle-induced obesity and hyperinsulinemia have worse prognosis and faster disease progression, and (c) insulin causes steroidogenesis in other cell lines, the hypothesis that insulin may contribute to CaP progression through upregulation of steroidogenesis was explored. Insulin upregulates steroidogenesis enzymes at the mRNA level in three AR-positive cell lines and at the protein level in two cell lines. It has also been demonstrated that insulin increases mitochondrial (functional) levels of steroidogenic acute regulatory protein (StAR). Furthermore, insulin increases total steroid levels, and insulin-induced de novo steroid synthesis has been demonstrated at levels sufficient to activate the AR. The effect of insulin analogs on CaP steroidogenesis in LNCaP and VCaP cells was also investigated, because epidemiological studies suggest that some of the analogs developed may have more cancer-stimulatory effects than normal insulin. In this project, despite the signalling differences between glargine, X10, and insulin, these analogs did not appear to induce steroidogenesis any more potently than normal insulin. The effect of insulin on MCF7 breast cancer cells was also investigated, with results suggesting that breast cancer cells may be capable of de novo steroidogenesis and that the increase in estradiol production may be exacerbated by insulin.
Insulin has long been known to stimulate lipogenesis in the liver and adipocytes, and has been demonstrated to increase lipogenesis in breast cancer cells; the effect of insulin on lipogenesis, a hallmark of aggressive cancers, was therefore also investigated. In CaP progression, sterol regulatory element binding protein (SREBP) is dysregulated and upregulates fatty acid synthase (FASN), acetyl-CoA carboxylase (ACC), and other lipogenesis genes. SREBP is important for steroidogenesis and, in this project, has been shown to be upregulated by insulin in CaP cells. Fatty acid synthesis provides building blocks for membrane growth, substrates for fatty acid oxidation (the main energy source for CaP cells), building blocks for anti-apoptotic and pro-inflammatory molecules, and molecules that stimulate steroidogenesis. In this project it has been shown that insulin upregulates FASN and ACC, which synthesize fatty acids, as well as hormone-sensitive lipase (HSL), diazepam-binding inhibitor (DBI), and long-chain acyl-CoA synthetase 3 (ACSL3), which contribute to lipid activation of steroidogenesis. Insulin also increases total lipid levels and de novo lipogenesis, which can be suppressed by inhibition of the insulin receptor (INSR). The fatty acids synthesized after insulin treatment are those that have been associated with CaP; furthermore, microarray data suggest insulin may upregulate fatty acid biosynthesis, fatty acid metabolism and arachidonic acid metabolism pathways, which have been implicated in CaP growth and survival. Pharmacological agents used to treat patients with hyperinsulinemia or hyperlipidemia have attracted much interest with regard to CaP risk and treatment; however, the scientific rationale behind these clinical applications has not been examined. This thesis explores whether metformin or simvastatin would decrease lipogenesis, steroidogenesis, or both in CaP cells. Simvastatin is a 3-hydroxy-3-methylglutaryl-CoA reductase (HMGR) inhibitor, which blocks synthesis of cholesterol, the building block of steroids and androgens; it has also been postulated to downregulate SREBP in other metabolic disorders. In this thesis it has been shown that, in LNCaP cells, simvastatin decreased insulin-induced steroidogenesis and lipogenesis but increased these pathways in the absence of insulin. Conversely, metformin, which activates AMP-activated protein kinase (AMPK) to shut down lipogenesis, cholesterol synthesis, and protein synthesis, strongly suppresses both steroidogenesis and lipogenesis in the presence and absence of insulin. Lastly, because IGF2 has been demonstrated to increase steroidogenesis in other cell lines, and because the elucidation of any factor affecting steroidogenesis is important to understanding CaP, the effect of IGF2 on steroidogenesis in CaP cells was investigated. In patient samples, IGF2 mRNA and the protein levels of the receptors through which it may signal are upregulated as men progress to CRPC. It has also been demonstrated that IGF2 upregulates steroidogenic enzymes at both the mRNA and protein levels in LNCaP cells, increases intracellular and secreted steroid/androgen levels in LNCaP cells to levels sufficient to stimulate the AR, and upregulates de novo steroidogenesis in LNCaP and VCaP cells. Moreover, inhibition of the INSR and the insulin-like growth factor 1 receptor (IGF1R), through which IGF2 signals, suggests that this induction of steroidogenesis may occur predominantly through IGF1R.
In summary, this project has shown for the first time that insulin is likely to play a substantial role in cancer progression, through upregulation of the steroidogenesis and lipogenesis pathways at the mRNA and protein levels and in terms of steroid and lipid production, and demonstrates a novel role for IGF2 in CaP progression through stimulation of steroidogenesis. It has also been demonstrated that metformin and simvastatin may be useful in suppressing the insulin-driven induction of these pathways. This project affirms the pathways by which ADT-induced metabolic syndrome may exacerbate CaP progression and strongly suggests that monitoring and modulating the metabolic state of CaP patients could have a strong impact on their therapeutic outcomes.

Relevance: 20.00%

Abstract:

It is a big challenge to acquire correct user profiles for personalized text classification, since users may be uncertain when describing their interests. Traditional approaches to user profiling adopt machine learning (ML) to automatically discover classification knowledge from explicit user feedback describing personal interests. However, the accuracy of ML-based methods cannot be significantly improved in many cases because of the term independence assumption and the uncertainties associated with such feedback. This paper presents a novel relevance feedback approach for personalized text classification. It applies data mining to discover knowledge from relevant and non-relevant text, and constrains the specific knowledge with reasoning rules to eliminate conflicting information. We also developed a Dempster-Shafer (DS) approach as the means to utilise the specific knowledge to build high-quality data models for classification. Experimental results on Reuters Corpus Volume 1 and TREC topics show that the proposed technique achieves encouraging performance in comparison with state-of-the-art relevance feedback models.
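
As an illustration of the Dempster-Shafer evidence-combination step, the sketch below applies Dempster's rule of combination to two mass functions defined over a two-hypothesis frame (relevant vs. non-relevant). The mass values and the two evidence sources are illustrative assumptions, not the paper's actual model.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two Dempster-Shafer mass functions, keyed by frozensets over a
    common frame of discernment, using Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Frame of discernment: a document is relevant (R) or non-relevant (N).
R, N, RN = frozenset("R"), frozenset("N"), frozenset("RN")
m_positive = {R: 0.6, N: 0.1, RN: 0.3}  # evidence mined from relevant text (illustrative)
m_negative = {R: 0.2, N: 0.5, RN: 0.3}  # evidence mined from non-relevant text (illustrative)
print(dempster_combine(m_positive, m_negative))
```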

Relevance: 20.00%

Abstract:

This paper studies the missing covariate problem, which is often encountered in survival analysis. Three covariate imputation methods are employed in the study, and the effectiveness of each is evaluated within a hazard prediction framework. Data from a typical engineering asset are used in the case study, with covariate values in some time steps deliberately discarded to generate an incomplete covariate set. It is found that although mean imputation is the simplest of the three methods, the values it produces can differ considerably from the true values of the missing covariates. The study also shows that, in general, results obtained from the regression method are more accurate than those of mean imputation, but at the cost of higher computational expense. The Gaussian Mixture Model (GMM) method is found to be the most effective of the three in terms of both computational efficiency and prediction accuracy.
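
A minimal sketch of this kind of comparison on synthetic data, using mean imputation and a regression-style iterative imputer from scikit-learn. The data, missingness rate and error metric are illustrative assumptions, and the GMM-based imputation evaluated in the paper (conditioning a fitted mixture on the observed covariates) is not reproduced here.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
# Hypothetical covariate history (e.g. vibration and temperature readings over time).
X = rng.normal(size=(200, 2))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]   # correlated covariates
X_true = X.copy()
X[rng.random(X.shape) < 0.2] = np.nan     # deliberately discard ~20% of the values

mean_imp = SimpleImputer(strategy="mean").fit_transform(X)
reg_imp = IterativeImputer(random_state=0).fit_transform(X)  # regression-style imputation

mask = np.isnan(X)
print("mean imputation RMSE:", np.sqrt(np.mean((mean_imp[mask] - X_true[mask]) ** 2)))
print("regression imputation RMSE:", np.sqrt(np.mean((reg_imp[mask] - X_true[mask]) ** 2)))
```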

Relevance: 20.00%

Abstract:

Cities accumulate and distribute vast sets of digital information. Many decision-making and planning processes in councils, local governments and organisations are based on both real-time and historical data. Until recently, only a small, carefully selected subset of this information had been released to the public, usually for specific purposes (e.g. train timetables or planning applications published through websites, to name just a few). This situation is, however, changing rapidly. Regulatory frameworks such as freedom of information legislation in the US, the UK, the European Union and many other countries guarantee public access to data held by the state. One result of this legislation, and of changing attitudes towards open data, has been the widespread release of public information as part of recent Government 2.0 initiatives. This includes the creation of public data catalogues such as data.gov (U.S.), data.gov.uk (U.K.) and data.gov.au (Australia) at the federal level, and datasf.org (San Francisco) and data.london.gov.uk (London) at the municipal level. The release of this data has opened up the possibility of a wide range of future applications and services which are now the subject of intensified research efforts. Previous research endeavours have explored the creation of specialised tools to aid decision-making by urban citizens, councils and other stakeholders (Calabrese, Kloeckl & Ratti, 2008; Paulos, Honicky & Hooker, 2009). While these initiatives represent an important step towards open data, they too often result in mere collections of data repositories. Proprietary database formats and the lack of an open application programming interface (API) limit the full potential that could be achieved by allowing these data sets to be cross-queried. Our research, presented in this paper, looks beyond the mere release of data. It is concerned with three essential questions. First, how can data from different sources be integrated into a consistent framework and made accessible? Second, how can ordinary citizens be supported in easily composing data from different sources in order to address their specific problems? Third, what interfaces make it easy for citizens to interact with data in an urban environment, and how can data be accessed and collected?

Relevance: 20.00%

Abstract:

Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high resolution aerial images and LiDAR point clouds is presented. A framework for road information modeling is proposed for rural and urban scenarios respectively, and an integrated system is developed for road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low resolution images, and both can be further employed to facilitate road information generation in high resolution images. Histogram thresholding is then used to classify road details in high resolution images, with a color space transformation used for data preparation. After road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Pavement markings are then obtained from the filtered image using Otsu's thresholding method. The final road model is generated by superimposing the lane markings on the road surfaces, and the digital terrain model (DTM) produced from LiDAR data can also be combined to obtain a 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, high resolution aerial images and dense LiDAR data are combined to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR. Object-oriented image analysis methods are employed for feature classification and road detection in aerial images. In this process, an adaptive mean shift (MS) segmentation algorithm is first used to segment the original images into meaningful object-oriented clusters. The support vector machine (SVM) algorithm is then applied to the MS-segmented image to extract road objects. The road surface detected in LiDAR intensity images is taken as a mask to remove the effects of shadows and trees, and the normalized digital surface model (nDSM) obtained from LiDAR is used to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is evaluated using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland, while the urban road extraction algorithm is tested using datasets of Bundaberg that combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information has been carried out for both datasets. For the Gympie dataset, more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, with false alarm rates for road surfaces and lane markings below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the misdetection rate is below 10%.
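
The sketch below illustrates two of the rural-pipeline steps in isolation: histogram (Otsu) thresholding of a grayscale aerial tile to obtain a road-surface mask, and a small Gabor filter bank to enhance elongated pavement markings. It is a simplified stand-in assuming scikit-image; the hierarchical multi-resolution analysis, color space transformation, anisotropic Gaussian filtering and LiDAR fusion described above are omitted, and all parameters are illustrative.

```python
import numpy as np
from skimage import color, filters

def extract_road_and_markings(rgb_image: np.ndarray, gabor_frequency: float = 0.2):
    """Rough illustration: classify road-like pixels with Otsu thresholding,
    then enhance pavement markings with a bank of oriented Gabor filters."""
    gray = color.rgb2gray(rgb_image)

    # Histogram (Otsu) thresholding; which side of the threshold is "road"
    # depends on the imagery, assumed brighter than the background here.
    road_mask = gray > filters.threshold_otsu(gray)

    # Gabor filter bank over several orientations; take the maximum response
    # so markings are enhanced regardless of their direction.
    responses = [
        filters.gabor(gray, frequency=gabor_frequency, theta=theta)[0]
        for theta in np.linspace(0, np.pi, 6, endpoint=False)
    ]
    marking_response = np.max(responses, axis=0)
    markings = (marking_response > filters.threshold_otsu(marking_response)) & road_mask

    return road_mask, markings

# Example with a random image standing in for a high-resolution aerial tile.
demo = np.random.rand(128, 128, 3)
surface, lanes = extract_road_and_markings(demo)
print(surface.sum(), lanes.sum())
```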

Relevance: 20.00%

Abstract:

Typical reference year (TRY) weather data is often used to represent the long-term weather pattern for building simulation and design. Through the analysis of ten years of historical hourly weather data for seven major Australian capital cities, using the frequencies procedure of descriptive statistics (in SPSS), this paper investigates: (a) how closely the typical reference year (TRY) weather data represents the long-term weather pattern; and (b) the variations and common features that may exist between relatively hot and cold years. It is found that, for the given set of input data, the discrepancy between the TRY and multiple years is much smaller for dry bulb temperature, relative humidity and global solar irradiance than for the other weather elements. The overall distribution patterns of key weather elements are also generally similar between the hot and cold years, but with some shift and/or small distortion. There is little common tendency of change between the hot and the cold years across different weather variables and study locations.
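
A minimal sketch of the kind of frequency comparison described above, using synthetic hourly dry-bulb temperature series as stand-ins for real weather files (the paper itself uses SPSS's frequencies procedure rather than pandas; all values below are illustrative).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic stand-ins: one typical reference year (8760 h) and a ten-year record.
try_temps = pd.Series(rng.normal(20, 6, 8760), name="dry_bulb_temp")
multi_temps = pd.Series(rng.normal(20, 6.5, 8760 * 10), name="dry_bulb_temp")

bins = range(-10, 50, 5)  # 5 degC frequency bins, as in a frequencies procedure
try_freq = pd.cut(try_temps, bins).value_counts(normalize=True).sort_index()
multi_freq = pd.cut(multi_temps, bins).value_counts(normalize=True).sort_index()

# Discrepancy between the TRY and the long-term record, per temperature bin.
comparison = pd.DataFrame({"TRY": try_freq, "ten_year": multi_freq})
comparison["abs_diff"] = (comparison["TRY"] - comparison["ten_year"]).abs()
print(comparison.round(3))
print("total absolute discrepancy:", round(comparison["abs_diff"].sum(), 3))
```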

Relevance: 20.00%

Abstract:

Concerns regarding groundwater contamination with nitrate and the long-term sustainability of groundwater resources have prompted the development of a multi-layered three-dimensional (3D) geological model to characterise the aquifer geometry of the Wairau Plain, Marlborough District, New Zealand. The 3D geological model, which consists of eight litho-stratigraphic units, has subsequently been used to synthesise hydrogeological and hydrogeochemical data for different aquifers, in an approach that aims to demonstrate how integrating water chemistry data within the physical framework of a 3D geological model can help to better understand and conceptualise groundwater systems in complex geological settings. Multivariate statistical techniques (e.g. Principal Component Analysis and Hierarchical Cluster Analysis) were applied to the groundwater chemistry data to identify hydrochemical facies which are characteristic of distinct evolutionary pathways and a common hydrologic history of the groundwaters. Principal Component Analysis demonstrated that natural water-rock interactions, redox potential and human agricultural impact are the key controls on groundwater quality in the Wairau Plain. Hierarchical Cluster Analysis revealed distinct hydrochemical water quality groups in the Wairau Plain groundwater system. Visualisation of the results of the multivariate statistical analyses and of the distribution of groundwater nitrate concentrations in the context of aquifer lithology highlighted the link between groundwater chemistry and the lithology of the host aquifers. The methodology followed in this study can be applied in a variety of hydrogeological settings to synthesise geological, hydrogeological and hydrochemical data and present them in a format readily understood by a wide range of stakeholders, enabling more efficient communication of the results of scientific studies to the wider community.
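
A minimal sketch of the multivariate workflow (standardisation, PCA, then Ward hierarchical clustering into hydrochemical facies) on synthetic major-ion data; the analytes, sample counts and cluster number used in the actual study are not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical major-ion chemistry table: one row per groundwater sample (mg/L).
samples = pd.DataFrame(
    rng.lognormal(mean=1.0, sigma=0.5, size=(30, 6)),
    columns=["Ca", "Mg", "Na", "Cl", "HCO3", "NO3"],
)

# Standardise, then reduce to principal components that summarise co-varying ions.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(samples))

# Hierarchical (Ward) clustering of the PCA scores into hydrochemical facies.
tree = linkage(scores, method="ward")
samples["facies"] = fcluster(tree, t=3, criterion="maxclust")
print(samples.groupby("facies").mean().round(1))
```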

Relevance: 20.00%

Abstract:

During the course of several natural disasters in recent years, Twitter has been found to play an important role as an additional medium for many-to-many crisis communication. Emergency services are successfully using Twitter to inform the public about current developments, and are increasingly also attempting to source first-hand situational information from Twitter feeds (such as relevant hashtags). The further study of the uses of Twitter during natural disasters, however, relies on the development of flexible and reliable research infrastructure for tracking and analysing Twitter feeds at scale and in close to real time. This article outlines two approaches to the development of such infrastructure: one which builds on the readily available open source platform yourTwapperkeeper to provide a low-cost, simple and basic solution; and one which establishes a more powerful and flexible framework by drawing on highly scalable, state-of-the-art technology.
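
As a very rough architectural sketch only, the loop below polls a hypothetical search endpoint for a hashtag and appends matching tweets to a local line-delimited JSON archive for later analysis. The endpoint URL, response format and field names are assumptions; neither yourTwapperkeeper's nor the authors' actual implementation is shown.

```python
import json
import time
import requests

# Hypothetical endpoint: a stand-in for whichever Twitter/X API version or
# yourTwapperkeeper export the archive is actually built on.
SEARCH_URL = "https://api.example.com/tweets/search"

def track(hashtag: str, interval_s: int = 60, outfile: str = "archive.jsonl") -> None:
    """Minimal archiving loop: poll for new tweets matching a hashtag and
    append them, one JSON object per line, to a local archive file."""
    since_id = None
    while True:
        resp = requests.get(SEARCH_URL, params={"q": hashtag, "since_id": since_id}, timeout=30)
        resp.raise_for_status()
        tweets = resp.json().get("results", [])
        with open(outfile, "a", encoding="utf-8") as fh:
            for tweet in tweets:
                fh.write(json.dumps(tweet) + "\n")
        if tweets:
            since_id = max(t["id"] for t in tweets)  # only fetch newer tweets next time
        time.sleep(interval_s)

# track("#qldfloods")  # runs indefinitely; analysis scripts read archive.jsonl separately
```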

Relevance: 20.00%

Abstract:

This thesis provides a query model suitable for context-sensitive access to the wide range of distributed linked datasets available to scientists via the Internet. The model is designed around scientific research standards which require scientists to provide replicable methods in their publications. Although existing query models provide limited replicability, they do not contextualise the process whereby different scientists select dataset locations based on trust and physical location. In different contexts, scientists need to perform different data cleaning actions, independent of the overall query, and the model was designed to accommodate this. The query model was implemented as a prototype web application, and its features were verified through its use as the engine behind a major scientific data access site, Bio2RDF.org. The prototype showed that it was possible to obtain context-sensitive behaviour for each of the three mirrors of Bio2RDF.org using a single set of configuration settings. The prototype provided executable query provenance that could be attached to scientific publications to fulfil replicability requirements. The model was designed to make it simple to independently interpret and execute the query provenance documents using context-specific profiles, without modifying the original provenance documents. Experiments using the prototype as the data access tool in workflow management systems confirmed that the design of the model made it possible to replicate results in different contexts with minimal additions, and no deletions, to the query provenance documents.
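
A hypothetical sketch of the core idea of interpreting a query provenance document under a context-specific profile: each mirror can override dataset endpoints or disable cleaning rules while the original provenance document stays unmodified. The document structure, field names and example URI are illustrative assumptions rather than the prototype's actual format.

```python
import json

# Hypothetical replicable query provenance record.
provenance = {
    "query": "SELECT ?p ?o WHERE { <http://bio2rdf.org/drugbank:DB00945> ?p ?o }",
    "datasets": ["drugbank"],
    "endpoints": {"drugbank": "http://drugbank.bio2rdf.org/sparql"},
    "cleaning_rules": ["strip-version-suffix"],  # data-cleaning steps, named only
}

# A context profile for a different mirror, chosen for trust or locality.
mirror_profile = {
    "endpoint_overrides": {"drugbank": "http://mirror.example.org/drugbank/sparql"},
    "disabled_rules": [],
}

def resolve(provenance: dict, profile: dict) -> dict:
    """Interpret a provenance record under a context profile: endpoints may be
    overridden and rules disabled, but the original document is never modified."""
    plan = json.loads(json.dumps(provenance))  # deep copy; provenance stays untouched
    plan["endpoints"].update(profile.get("endpoint_overrides", {}))
    plan["cleaning_rules"] = [
        r for r in plan["cleaning_rules"] if r not in profile.get("disabled_rules", [])
    ]
    return plan

print(resolve(provenance, mirror_profile)["endpoints"])
```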

Relevance: 20.00%

Abstract:

The rapid growth in the number of users of social networks, and the amount of information that a social network requires about its users, make traditional matching systems insufficiently adept at matching users within social networks. This paper introduces the use of clustering to form communities of users and then uses these communities to generate matches. Forming communities within a social network reduces the number of users that the matching system needs to consider, and helps to overcome other problems from which social networks suffer, such as the absence of activity information about a new user. The proposed system has been evaluated on a dataset obtained from an online dating website. Empirical analysis shows that the accuracy of the matching process is increased when community information is used.
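
A minimal sketch of the cluster-then-match idea on synthetic profile vectors, assuming k-means for community formation and cosine similarity for within-community matching; the paper's actual clustering method, profile features and matching criteria are not specified here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical numeric profile features (e.g. age, interests encoded as scores).
rng = np.random.default_rng(1)
profiles = rng.random((500, 8))

# Step 1: form communities so each match query only searches a small subset of users.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=1).fit(profiles)

def recommend(user_idx: int, top_k: int = 5) -> np.ndarray:
    """Recommend the most similar users drawn from the query user's own community."""
    community = np.where(kmeans.labels_ == kmeans.labels_[user_idx])[0]
    community = community[community != user_idx]        # exclude the user themselves
    sims = cosine_similarity(profiles[[user_idx]], profiles[community])[0]
    return community[np.argsort(sims)[::-1][:top_k]]    # best matches first

print(recommend(0))
```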

Relevance: 20.00%

Abstract:

Recent increases in cycling have led to many media articles highlighting concerns about interactions between cyclists and pedestrians on footpaths and off-road paths. Under the Australian Road Rules, adults are not allowed to ride on footpaths unless accompanying a child 12 years of age or younger; however, this rule does not apply in Queensland. This paper reviews international studies examining the safety of footpath cycling for both cyclists and pedestrians, along with relevant Australian crash and injury data. The results of a survey of more than 2,500 Queensland adult cyclists are presented in terms of the frequency of footpath cycling, the characteristics of those cyclists, and the characteristics of self-reported footpath crashes. A third of the respondents reported riding on the footpath and, of those, about two-thirds did so reluctantly. Riding on the footpath was more common for utilitarian trips and among new riders, although the average distance ridden on footpaths was greater for experienced riders. About 5% of the distance ridden, and a similar percentage of self-reported crashes, occurred on footpaths. These data are discussed in terms of the Safe Systems principle of separating road users with vastly different levels of kinetic energy. The paper concludes that footpaths are important facilities for both inexperienced and experienced riders and for utilitarian riding, especially in locations that riders consider do not provide a safe system for cycling.

Relevance: 20.00%

Abstract:

In this paper, we apply a simulation-based approach to estimating the transmission rates of nosocomial pathogens. In particular, the objective is to infer the transmission rate between colonised health-care practitioners and uncolonised patients (and vice versa) solely from routinely collected incidence data. The method, using approximate Bayesian computation, is substantially less computationally intensive and easier to implement than the likelihood-based approaches we refer to here. We find that, by replacing the likelihood with a comparison of an efficient summary statistic between observed and simulated data, little is lost in the precision of the estimated transmission rates. Furthermore, we investigate the impact of incorporating uncertainty in previously fixed parameters on the precision of the estimated transmission rates.
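
A minimal sketch of approximate Bayesian computation by rejection sampling for a single transmission rate, using a toy incidence simulator and the mean weekly incidence as the summary statistic. The ward transmission model, priors, tolerance and summary statistics used in the paper differ; everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_weekly_incidence(beta: float, weeks: int = 52) -> np.ndarray:
    """Toy ward model (an assumption, not the paper's model): each week the number
    of newly colonised patients is Poisson with mean proportional to beta."""
    return rng.poisson(lam=beta * 10, size=weeks)

# "Observed" routinely collected incidence data, generated here with a known rate.
true_beta = 0.3
observed = simulate_weekly_incidence(true_beta)
obs_summary = observed.mean()  # efficient summary statistic: mean weekly incidence

# ABC rejection sampling: keep prior draws whose simulated summary is close to the data.
prior_draws = rng.uniform(0.0, 1.0, size=20000)
tolerance = 0.1
accepted = [
    beta for beta in prior_draws
    if abs(simulate_weekly_incidence(beta).mean() - obs_summary) < tolerance
]
print(f"posterior mean of beta ~= {np.mean(accepted):.3f} from {len(accepted)} accepted draws")
```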