985 results for "Alternative genetic decoding"
Abstract:
Both clinical practice and clinical research settings can require successive administrations of a memory test, particularly when following the trajectory of suspected memory decline in older adults. However, relatively few verbal episodic memory tests have alternative forms. We set out to create a broad-based memory test that allows for an essentially unlimited number of alternative forms. Four tasks for inclusion in such a test were developed. These tasks varied the requirement for recall as opposed to recognition, the need to form an association between unrelated words, and the need to discriminate the most recent list from earlier lists, all of which proved useful. A total of 115 participants completed the battery of tests, and their results were used to show that the test could differentiate between older and younger adults; a sub-sample of 73 participants completed alternative forms of the tests to determine test-retest reliability and the amount of learning-to-learn.
Abstract:
As a result of rapid urbanisation, population growth, changing lifestyles, pollution and the impacts of climate change, water provision has become a critical challenge for planners and policy-makers. In the wake of increasingly difficult water provision and drought, the notion that freshwater is a finite and vulnerable resource is increasingly being realised. Many city administrations around the world are struggling to provide water security for their residents to maintain lifestyles and economic growth. This paper reviews the global alternatives to current water sources, including desalination, water transfers, recycling, and integrated water management. A comparative study of alternative resources is undertaken and the results are discussed.
Abstract:
While spoken term detection (STD) systems based on word indices provide good accuracy, there are several practical applications where it is infeasible or too costly to employ an LVCSR engine. An STD system is presented, which is designed to incorporate a fast phonetic decoding front-end and be robust to decoding errors whilst still allowing for rapid search speeds. This goal is achieved through mono-phone open-loop decoding coupled with fast hierarchical phone lattice search. Results demonstrate that an STD system that is designed with the constraint of a fast and simple phonetic decoding front-end requires a compromise to be made between search speed and search accuracy.
Abstract:
In the field of the semantic grid, QoS-based Web service composition is an important problem. In a semantic- and service-rich environment like the semantic grid, context constraints on Web services are common, so composition must consider not only the QoS properties of Web services but also the inter-service dependencies and conflicts that these context constraints create. In this paper, we present a repair genetic algorithm, namely the minimal-conflict hill-climbing repair genetic algorithm, to address the Web service composition optimization problem in the presence of domain constraints and inter-service dependencies and conflicts. Experimental results demonstrate the scalability and effectiveness of the genetic algorithm.
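The repair step at the heart of such an algorithm can be sketched as follows. This is a minimal illustration with an invented toy instance (the service names, QoS values, conflict pairs and GA operators are all assumptions, not the paper's actual formulation): each candidate assigns one implementation per service, and a minimal-conflict hill-climbing repair reassigns the single service whose change removes the most conflict violations before the candidate re-enters the population.

```python
import random

# Hypothetical toy instance: each abstract service has candidate
# implementations with QoS scores; CONFLICTS lists pairs of
# (service, implementation) choices that may not co-occur.
QOS = {
    "s1": [0.6, 0.9, 0.4],
    "s2": [0.7, 0.5],
    "s3": [0.8, 0.3, 0.6],
}
CONFLICTS = [(("s1", 1), ("s2", 0)), (("s2", 1), ("s3", 0))]
SERVICES = sorted(QOS)

def violations(plan):
    """Number of conflict constraints the plan breaks."""
    return sum(1 for (a, ai), (b, bi) in CONFLICTS
               if plan[a] == ai and plan[b] == bi)

def fitness(plan):
    """Total QoS of the plan (higher is better)."""
    return sum(QOS[s][plan[s]] for s in SERVICES)

def repair(plan):
    """Minimal-conflict hill climbing: while constraints are violated,
    reassign the single service whose change removes the most violations."""
    while violations(plan) > 0:
        best = None
        for s in SERVICES:
            for i in range(len(QOS[s])):
                if i == plan[s]:
                    continue
                trial = dict(plan, **{s: i})
                v = violations(trial)
                if best is None or v < best[0]:
                    best = (v, trial)
        if best[0] >= violations(plan):   # no improving move: give up
            break
        plan = best[1]
    return plan

def run_ga(pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    rand = lambda: {s: rng.randrange(len(QOS[s])) for s in SERVICES}
    pop = [repair(rand()) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        kids = []
        for _ in range(pop_size):
            a, b = rng.sample(pop[: pop_size // 2], 2)
            child = {s: rng.choice((a[s], b[s])) for s in SERVICES}  # uniform crossover
            if rng.random() < 0.2:                                   # mutation
                s = rng.choice(SERVICES)
                child[s] = rng.randrange(len(QOS[s]))
            kids.append(repair(child))   # repair rather than penalise
        pop = kids
    return max(pop, key=fitness)

best = run_ga()
```

Because every individual is repaired before selection, fitness only ever compares feasible candidates, which is the key difference from penalty-based handling.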
Abstract:
Rapid advancements in the field of genetic science have engendered considerable debate, speculation, misinformation and legislative action worldwide. While programs such as the Human Genome Project bring the prospect of seemingly miraculous medical advancements within imminent reach, they also create the potential for significant invasions of traditional areas of privacy and human dignity by laying the foundation for new forms of discrimination in insurance, employment and immigration regulation. The insurance industry, which has, of course, traditionally been premised on discrimination as part of its underwriting process, is proving to be the front line of this regulatory battle, with extensive legislation, guidelines and debate marking its progress.
Abstract:
Typically a film producer expects the director and actors to 'do their job' within a scheduled timeframe. Rather than expecting the creative principals to just deliver, a production model can be tailored to help this creative team produce successful outcomes. This research paper contrasts alternative production models with a traditional (or standard) production and presents possibilities for producers to emphasise the collaborative potential for their production.
Abstract:
We analyse the puzzling behavior of the volatility of individual stock returns around the turn of the Millennium. There has been much academic interest in this topic, but no convincing explanation has arisen. Our goal is to pull together the many competing explanations currently proposed in the literature to determine which, if any, are capable of explaining the volatility trend. We find that many of the different explanations capture the same unusual trend around the Millennium. We find that many of the variables are very highly correlated, and it is thus difficult to disentangle their relative ability to explain the time-series behavior in volatility. It seems that all of the variables that track average volatility well do so mainly by capturing changes in the post-1994 period. These variables have no time-series explanatory power in the pre-1995 years, questioning the underlying idea that any of the explanations currently presented in the literature can track the trend in volatility over long periods.
Abstract:
This paper describes experiments conducted in order to simultaneously tune 15 joints of a humanoid robot. Two Genetic Algorithm (GA) based tuning methods were developed and compared against a hand-tuned solution. The system was tuned to minimise tracking error while at the same time achieving smooth joint motion. Joint smoothness is crucial for the accurate calculation of online ZMP estimation, a prerequisite for a closed-loop dynamically stable humanoid walking gait. Results in both simulation and on a real robot are presented, demonstrating the superior smoothness performance of the GA-based methods.
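The trade-off described here, tracking error versus joint smoothness, can be sketched with a deliberately simplified model. Everything below is assumed for illustration (a single first-order joint with one gain gene, invented cost weights and GA operators); the paper itself tunes 15 joints on a real humanoid:

```python
import random

def simulate(gain, target=1.0, steps=50, dt=0.02):
    """Toy first-order joint: position chases the target at a rate set by 'gain'."""
    pos, vel, traj = 0.0, 0.0, []
    for _ in range(steps):
        new_vel = gain * (target - pos)
        pos += new_vel * dt
        traj.append((pos, new_vel - vel))   # (position, change in velocity)
        vel = new_vel
    return traj

def cost(gain, target=1.0):
    traj = simulate(gain, target)
    tracking = sum((target - p) ** 2 for p, _ in traj)   # tracking error term
    roughness = sum(dv ** 2 for _, dv in traj)           # smoothness term
    return tracking + 10.0 * roughness                   # weighted sum to minimise

def ga(pop=30, gens=40, seed=0):
    rng = random.Random(seed)
    # seed the population with the search bounds plus random gains
    genomes = [0.1, 20.0] + [rng.uniform(0.1, 20.0) for _ in range(pop - 2)]
    for _ in range(gens):
        genomes.sort(key=cost)
        parents = genomes[: pop // 2]           # elitist truncation selection
        children = [max(0.1, rng.choice(parents) + rng.gauss(0, 1.0))
                    for _ in range(pop - len(parents))]   # Gaussian mutation
        genomes = parents + children
    return min(genomes, key=cost)

best_gain = ga()
```

A very high gain tracks quickly but jerks the joint; a very low gain is smooth but never reaches the target. The weighted cost pushes the GA toward gains that balance both, mirroring the paper's dual objective.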
Abstract:
Campylobacter jejuni, followed by Campylobacter coli, contributes substantially to the economic and public health burden attributed to food-borne infections in Australia. Genotypic characterisation of isolates has provided new insights into the epidemiology and pathogenesis of C. jejuni and C. coli. However, currently available methods are not conducive to the large-scale epidemiological investigations that are necessary to elucidate the global epidemiology of these common food-borne pathogens. This research aims to develop high-resolution C. jejuni and C. coli genotyping schemes that are convenient for high-throughput applications. Real-time PCR and High Resolution Melt (HRM) analysis are fundamental to the genotyping schemes developed in this study and enable rapid, cost-effective interrogation of a range of different polymorphic sites within the Campylobacter genome. While the sources and routes of transmission of campylobacters are unclear, handling and consumption of poultry meat is frequently associated with human campylobacteriosis in Australia. Therefore, chicken-derived C. jejuni and C. coli isolates were used to develop and verify the methods described in this study. The first aim of this study describes the application of MLST-SNP (Multi Locus Sequence Typing Single Nucleotide Polymorphism) + binary typing to 87 chicken C. jejuni isolates using real-time PCR analysis. These typing schemes were developed previously by our research group using isolates from campylobacteriosis patients. The present study showed that SNP and binary typing, alone or in combination, are effective at detecting epidemiological linkage between chicken-derived Campylobacter isolates and enable data comparisons with other MLST-based investigations. SNP + binary types obtained from chicken isolates in this study were compared with a previously SNP + binary and MLST typed set of human isolates.
Common genotypes between the two collections of isolates were identified, and ST-524 represented a clone that could be worth monitoring in the chicken meat industry. In contrast, ST-48, mainly associated with bovine hosts, was abundant in the human isolates. This genotype was, however, absent in the chicken isolates, indicating the role of non-poultry sources in causing human Campylobacter infections. This demonstrates the potential application of SNP + binary typing for epidemiological investigations and source tracing. While MLST SNPs and binary genes comprise the more stable backbone of the Campylobacter genome and are indicative of long-term epidemiological linkage of the isolates, the development of a High Resolution Melt (HRM) curve analysis method to interrogate the hypervariable Campylobacter flagellin-encoding gene (flaA) is described in Aim 2 of this study. The flaA gene product appears to be an important pathogenicity determinant of campylobacters and is therefore a popular target for genotyping, especially for short-term epidemiological studies such as outbreak investigations. HRM curve analysis-based flaA interrogation is a single-step, closed-tube method that provides portable data that can be easily shared and accessed. Critical to the development of flaA HRM was the use of flaA-specific primers that did not amplify the flaB gene. HRM curve analysis flaA interrogation was successful at discriminating the 47 sequence variants identified within the 87 C. jejuni and 15 C. coli isolates and correlated with the epidemiological background of the isolates. In the combinatorial format, the resolving power of flaA was additive to that of SNP + binary typing and CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) HRM, and fits the PHRANA (Progressive Hierarchical Resolving Assays using Nucleic Acids) approach for genotyping. The use of statistical methods to analyse the HRM data enhanced the sophistication of the method.
Therefore, flaA HRM is a rapid and cost-effective alternative to gel- or sequence-based flaA typing schemes. Aim 3 of this study describes the development of a novel bioinformatics-driven method to interrogate Campylobacter MLST gene fragments using HRM, called ‘SNP Nucleated Minim MLST’ or ‘Minim typing’. The method involves HRM interrogation of MLST fragments that encompass highly informative “Nucleating SNPs” to ensure high resolution. Selection of fragments potentially suited to HRM analysis was conducted in silico using i) the “Minimum SNPs” and ii) the new ‘HRMtype’ software packages. Species-specific sets of six “Nucleating SNPs” and six HRM fragments were identified for both C. jejuni and C. coli to ensure high typeability and resolution relevant to the MLST database. ‘Minim typing’ was tested empirically by typing 15 C. jejuni and five C. coli isolates. The association of clonal complexes (CCs) with each isolate by ‘Minim typing’ and by SNP + binary typing was used to compare the two MLST interrogation schemes. The CCs linked with each C. jejuni isolate were consistent for both methods. Thus, ‘Minim typing’ is an efficient and cost-effective method to interrogate MLST genes. However, it is not expected to be independent of, or to meet the resolution of, sequence-based MLST gene interrogation. ‘Minim typing’ in combination with flaA HRM is envisaged to comprise a highly resolving combinatorial typing scheme developed around the HRM platform, amenable to automation and multiplexing. The genotyping techniques described in this thesis involve the combinatorial interrogation of differentially evolving genetic markers on the unified real-time PCR and HRM platform. They provide high resolution and are simple, cost-effective and ideally suited to rapid, high-throughput genotyping of these common food-borne pathogens.
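On the software side, the genotype-calling step that such HRM schemes rely on can be illustrated very roughly: an unknown isolate's normalised melt curve is matched to the closest reference curve. The numbers below are synthetic stand-ins, not real melt data, and real analyses involve normalisation and the statistical methods the thesis mentions; this sketch only shows the nearest-reference idea:

```python
# Synthetic normalised melt curves (fraction of DNA still double-stranded
# at a few temperature points). Values are invented for illustration.
REFERENCE = {
    "type-1": [1.00, 0.95, 0.70, 0.30, 0.05],
    "type-2": [1.00, 0.90, 0.55, 0.20, 0.02],
}

def call_genotype(curve):
    """Assign the genotype whose reference curve is closest (mean squared
    difference) to the observed curve."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return min(REFERENCE, key=lambda g: mse(curve, REFERENCE[g]))
```

For example, a curve that nearly overlays the first reference is called as that type: `call_genotype([1.00, 0.94, 0.69, 0.29, 0.05])` returns `"type-1"`.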
Abstract:
Cloud computing is a new computing paradigm in which applications, data and IT services are provided over the Internet. Cloud computing has become a major medium for Software as a Service (SaaS) providers to host their SaaS, as it can provide the scalability a SaaS requires. The challenges in the composite SaaS placement process arise from several factors, including the large size of the Cloud network, the SaaS's competing resource requirements, the interactions between SaaS components, and the interactions between the SaaS and its data components. However, existing application placement methods for data centres are not concerned with the placement of a component's data. In addition, a Cloud network is much larger than the data centre networks discussed in existing studies. This paper proposes a penalty-based genetic algorithm (GA) for the composite SaaS placement problem in the Cloud. We believe this is the first attempt to address SaaS placement together with its data on a Cloud provider's servers. Experimental results demonstrate the feasibility and scalability of the GA.
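A penalty-based GA for a placement problem of this kind can be sketched as follows. The instance is invented (server capacities, component demands and traffic values are assumptions, and the real problem also covers data components and far larger networks); the point is the penalty term, which lets overloaded placements remain in the population at a steep fitness cost rather than being discarded outright:

```python
import random

# Invented toy instance for illustration.
CAPACITY = [10, 10, 8]                        # CPU units per server
DEMAND = [4, 6, 5, 3]                         # CPU units per SaaS component
TRAFFIC = {(0, 1): 5, (1, 2): 3, (2, 3): 4}   # inter-component traffic

def fitness(placement, penalty_weight=100):
    """Cost to minimise: communication cost plus a penalty for overload."""
    # traffic between components placed on different servers
    comm = sum(t for (a, b), t in TRAFFIC.items()
               if placement[a] != placement[b])
    load = [0] * len(CAPACITY)
    for comp, srv in enumerate(placement):
        load[srv] += DEMAND[comp]
    overflow = sum(max(0, l - c) for l, c in zip(load, CAPACITY))
    return comm + penalty_weight * overflow

def ga(pop=40, gens=60, seed=3):
    rng = random.Random(seed)
    n, m = len(DEMAND), len(CAPACITY)
    P = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        elite = P[: pop // 2]                  # elitist truncation selection
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:             # mutation: move one component
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        P = elite + children
    return min(P, key=fitness)

best = ga()
```

With the penalty weight well above any achievable communication cost, any feasible placement outranks every infeasible one, so the search settles on a placement with no overloaded server.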
Abstract:
Web service composition is an important problem in web service based systems. It is about how to build a new value-added web service using existing web services. A web service may have many implementations, all of which have the same functionality but may have different QoS values. Thus, a significant research problem in web service composition is how to select an implementation for each of the web services such that the composite web service gives the best overall performance. This is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes when an implementation is selected for one web service, a particular implementation for another web service must be selected. This is the so-called dependency constraint. Sometimes when an implementation for one web service is selected, a set of implementations for another web service must be excluded from the web service composition. This is the so-called conflict constraint. Thus, the optimal web service selection is a typical constrained combinatorial optimization problem from the computational point of view. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated. The evaluation results show that the hybrid genetic algorithm outperforms two other existing genetic algorithms when the number of web services and the number of constraints are large.
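The two constraint types can be made concrete with a small feasibility check. The services, implementation indices and constraints below are an invented example, not taken from the paper:

```python
# A selection maps each abstract service to the index of its chosen
# implementation. Both constraint tables below are hypothetical.
DEPENDENCY = {("A", 0): ("B", 1)}              # choosing A:0 forces B:1
CONFLICT = {("A", 1): {("C", 0), ("C", 2)}}    # choosing A:1 excludes C:0 and C:2

def feasible(selection):
    """True iff the selection satisfies every dependency and conflict."""
    for (s, i), (t, j) in DEPENDENCY.items():
        if selection.get(s) == i and selection.get(t) != j:
            return False
    for (s, i), excluded in CONFLICT.items():
        if selection.get(s) == i:
            for (t, j) in excluded:
                if selection.get(t) == j:
                    return False
    return True

assert feasible({"A": 0, "B": 1, "C": 0})        # dependency satisfied
assert not feasible({"A": 0, "B": 0, "C": 0})    # dependency broken
assert not feasible({"A": 1, "B": 0, "C": 2})    # conflict broken
```

A check like this is what makes the selection problem a constrained combinatorial optimization: the GA must optimise QoS over only the selections that pass it.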
Abstract:
Despite the general evolution and broadening of the scope of the concept of infrastructure in many other sectors, the energy sector has maintained the same narrow boundaries for over 80 years. Energy infrastructure is still generally restricted in meaning to the transmission and distribution networks of electricity and, to some extent, gas. This is especially true in the urban development context. This early 20th century system is struggling to meet community expectations that the industry itself created and fostered for many decades. The relentless growth in demand and changing political, economic and environmental challenges require a shift from the traditional ‘predict and provide’ approach to infrastructure which is no longer economically or environmentally viable. Market deregulation and a raft of demand and supply side management strategies have failed to curb society’s addiction to the commodity of electricity. None of these responses has addressed the fundamental problem. This chapter presents an argument for the need for a new paradigm. Going beyond peripheral energy efficiency measures and the substitution of fossil fuels with renewables, it outlines a new approach to the provision of energy services in the context of 21st century urban environments.