13 results for benchmark

at Brock University, Canada


Relevance: 10.00%

Publisher:

Abstract:

The construction of adenovirus vectors for cloning and foreign gene expression requires packaging cell lines that can complement missing viral functions caused by sequence deletions and/or replacement with foreign DNA sequences. In this study, packaging cell lines were designed to provide in trans the missing bovine adenovirus functions, so that recombinant viruses could be generated. Fetal bovine kidney and lung cells, acquired at the trimester term from a pregnant cow, were transfected with both digested wild-type BAV2 genomic DNA and pCMV-E1. The plasmid pCMV-E1 was specifically constructed to express E1 of BAV2 under the control of the cytomegalovirus (CMV) enhancer/promoter. Selection for "true" transformants by continuous passaging was unsuccessful in isolating immortalised cells, since the cells underwent crisis resulting in complete cell death. Moreover, selection for G418 resistance, using the same cells, also did not yield an immortalised cell line, and the same culture-collapse event was observed. The lack of success in establishing an immortalised cell line from fetal tissue prompted us to transfect a pre-established cell line. We began by transfecting MDBK (Madin-Darby bovine kidney) cells with pCMV-E1-neo, which contains the bacterial selectable marker neo gene. A series of MDBK-derived cell lines that constitutively express bovine adenoviral (BAV) early region 1 (E1) were then isolated. Cells selected for resistance to the drug G418 were isolated collectively for full characterisation to assess their suitability as packaging cell lines. Individual colonies were isolated by limiting dilution and further tested for E1 expression and efficiency of DNA uptake. Two cell lines, L-23 and L-24, out of 48 generated foci, tested positive for E1 expression by Northern blot analysis. DNA uptake studies, using both Lipofectamine and calcium phosphate methods, were performed to compare these cells, their parental MDBK cells, and the unrelated human 293 cells as a benchmark. The results revealed that the new MDBK-derived clones were no more efficient than MDBK cells in the transient expression of transfected DNA and that they were inferior to 293 cells when using lacZ as the reporter gene. In view of the inherently poor transfection efficiency of MDBK cells and their derivatives, a number of other bovine cells were investigated for their potential as packaging cells. The cell line CCL40 was chosen for its high efficiency in DNA uptake and subsequently transfected with the plasmid vector pCMV-E1-neo. By selection with the drug G418, two cell lines were isolated, ProCell 1 and ProCell 2. These cell lines were tested for E1 expression, permissivity to BAV2 and DNA uptake efficiency, revealing a DNA uptake efficiency of 37%, comparable to that of CCL40. Attempts to rescue BAV2 mutants carrying the lacZ gene in place of E1 or E3 were carried out by co-transfecting wild-type viral DNA with either the plasmid pdlE1E-Z (which contains BAV2 sequences from 0% to 40.4% with the lacZ gene in place of the E1 region from 1.1% to 8.25%) or the plasmid pdlE3-5-Z (which contains BAV2 sequences from 64.8% to 100% with the lacZ gene in place of the E3 region from 75.8% to 81.4%). These co-transfections did not result in the generation of a viral mutant. The lack of mutant generation was thought to be caused by the relative inefficiency of DNA uptake.
Consequently, cosBAV2, a cosmid vector carrying the BAV2 genome, was modified to carry the neo reporter gene in place of the E3 region from 75.8% to 81.4%. The use of a single cosmid vector carrying the whole genome would eliminate the need for homologous recombination in order to generate a viral vector. Unfortunately, the transfection of cosBAV2-neo also did not result in the generation of a viral mutant. This may have been caused by the size of the E3 deletion, where excess sequences that are essential to the virus's survival might have been deleted. As an extension to this study, the spontaneous E3 deletion, accidentally discovered in our viral stock, could be used as a site of foreign gene insertion.

Relevance: 10.00%

Publisher:

Abstract:

In 2007, Barry Bonds hit his 756th home run, breaking Hank Aaron's all-time record for most home runs in a Major League career. While it would be expected that such an accomplishment would induce unending praise and adulation for the new record-holder, Bonds did not receive the treatment typically reserved for a beloved baseball hero. The purpose of this thesis is to assess media representations of the 2007 home run chase in order to shed light upon the factors which led to the mixed representations which accompanied Bonds' assault on Aaron's record. Drawing from Roland Barthes' concept of myth, this thesis proposes that Bonds was portrayed in predominantly negative ways because he was seen as failing to embody the values of baseball's mythology. Using a qualitative content analysis of three major American newspapers, this thesis examines portrayals of Bonds and how he was shown both to represent and oppose elements from baseball's mythology, such as youth and a distant, agrarian past. Recognizing the ways in which baseball is associated with American life, the media representations of Bonds are also evaluated to discern whether he was portrayed as personifying a distinctly American set of values. The results indicate that, in media coverage of the 2007 home run chase, Bonds was depicted as a player of many contradictions. Most commonly, Bonds' athletic ability and career achievements were contrasted with unflattering descriptions of his character, including discussions of his alleged use of performance-enhancing substances. However, some coverage portrayed Bonds as embodying baseball myth. The findings contribute to an appreciation of the importance of historical context in examining media representations. This understanding is enhanced by an analysis of a selection of articles on Mark McGwire's record-breaking season in 1998, and careful consideration of, and comparison to, the context under which Bonds performed in 2007. Findings are also shown to support the contemporary existence of a strong American baseball mythology. That Bonds is both condemned for failing to uphold the mythology and praised for personifying it suggests that the values seen as inherent to baseball continue to act as an American cultural benchmark.

Relevance: 10.00%

Publisher:

Abstract:

The present thesis examines the determinants of the bankruptcy protection duration for Canadian firms. Using a sample of Canadian firms that filed for bankruptcy protection between the calendar years 1992 and 2009, we find that firm age, the industry-adjusted operating margin, the default spread, the industrial production growth rate and the interest rate are influential factors in determining the length of the protection period. Older firms tend to stay longer under protection from creditors. As older firms have more complicated structures and issues to settle, the risk of exiting protection soon (the hazard rate) is small. We also find that firms that perform better than their benchmark, as measured by the industry they belong to, tend to leave the bankruptcy protection state quickly. We conclude that the fate of relatively successful companies is determined faster. Moreover, we report that it takes less time to achieve a final resolution for firms under bankruptcy when the default spread is low or when the appetite for risk is high. Conversely, during periods of high default spreads and flight to quality, it takes longer to resolve the bankruptcy issue. This last finding may suggest that troubled firms should place themselves under protection when spreads are low. However, this ignores the endogeneity issue: a high default spread may cause, and incidentally reflect, higher bankruptcy rates in the economy. Indeed, we find that bankruptcy protection is longer during economic downturns. We explain this relation by the natural increase in the default rate among firms (and individuals) during economically troubled times. Default spreads are usually larger during these harsh periods as investors become more risk averse when their wealth shrinks. Using a log-logistic hazard model, we also find that firms that file under the Companies' Creditors Arrangement Act (CCAA) spend longer restructuring than firms that filed under the Bankruptcy and Insolvency Act (BIA). As the BIA is more statutory and less flexible, solutions can be reached faster by court orders.
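A minimal sketch of this kind of duration analysis, assuming the Python lifelines library; the covariate names and simulated data below are illustrative placeholders, not the thesis's sample or exact specification:

```python
# Sketch: log-logistic duration model for time spent under bankruptcy
# protection. Data and covariates are synthetic stand-ins.
import numpy as np
import pandas as pd
from lifelines import LogLogisticAFTFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "firm_age": rng.uniform(1, 50, n),          # years since founding
    "default_spread": rng.uniform(0.5, 3.5, n), # corporate bond spread (%)
    "ccaa": rng.integers(0, 2, n),              # 1 = filed under CCAA, 0 = BIA
})
# Toy durations: older firms and CCAA filings take longer to resolve
df["duration"] = rng.exponential(6 + 0.2 * df["firm_age"] + 4 * df["ccaa"])
df["resolved"] = 1  # 0 would mark censored (still-under-protection) cases

aft = LogLogisticAFTFitter()
aft.fit(df, duration_col="duration", event_col="resolved")
aft.print_summary()  # positive coefficients lengthen the expected duration
```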

Relevance: 10.00%

Publisher:

Abstract:

The main focus of this thesis is to evaluate and compare the Hyperball learning algorithm (HBL) to other learning algorithms. In this work HBL is compared to feed-forward artificial neural networks using back-propagation learning, K-nearest neighbour and ID3 algorithms. In order to evaluate the similarity of these algorithms, we carried out three experiments using nine benchmark data sets from the UCI machine learning repository. The first experiment compares HBL to the other algorithms when the sample size of the dataset changes. The second experiment compares HBL to the other algorithms when the dimensionality of the data changes. The last experiment compares HBL to the other algorithms according to the level of agreement with the data target values. Our observations generally showed that, taking classification accuracy as the measure, HBL performs as well as most ANN variants. Additionally, we also deduced that HBL's classification accuracy outperforms ID3's and K-nearest neighbour's for the selected data sets.
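A hedged sketch of this comparison protocol using scikit-learn stand-ins: HBL itself is not publicly available, so an MLP (back-propagation ANN), k-nearest neighbour and an entropy-based decision tree (ID3-like) approximate the lineup on one UCI-style benchmark set:

```python
# Sketch: cross-validated accuracy comparison of classifiers on a
# UCI-style benchmark, mirroring the experimental setup described above.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

models = {
    "backprop ANN":  MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000),
    "k-NN":          KNeighborsClassifier(n_neighbors=5),
    "ID3-like tree": DecisionTreeClassifier(criterion="entropy"),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold CV accuracy
    print(f"{name:14s} mean accuracy = {scores.mean():.3f}")
```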

Relevance: 10.00%

Publisher:

Abstract:

Complex networks can arise naturally and spontaneously from all things that act as a part of a larger system. From the patterns of socialization between people to the way biological systems organize themselves, complex networks are ubiquitous, but are currently poorly understood. A number of algorithms, designed by humans, have been proposed to describe the organizational behaviour of real-world networks. Consequently, breakthroughs in genetics, medicine, epidemiology, neuroscience, telecommunications and the social sciences have recently resulted. The algorithms, called graph models, represent significant human effort. Deriving accurate graph models is non-trivial, time-intensive, challenging and may only yield useful results for very specific phenomena. An automated approach can greatly reduce the human effort required and if effective, provide a valuable tool for understanding the large decentralized systems of interrelated things around us. To the best of the author's knowledge this thesis proposes the first method for the automatic inference of graph models for complex networks with varied properties, with and without community structure. Furthermore, to the best of the author's knowledge it is the first application of genetic programming for the automatic inference of graph models. The system and methodology was tested against benchmark data, and was shown to be capable of reproducing close approximations to well-known algorithms designed by humans. Furthermore, when used to infer a model for real biological data the resulting model was more representative than models currently used in the literature.
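A minimal sketch of the property-matching idea behind such automatic inference: score a candidate generative model by how closely the graphs it produces match a target network. The candidate model and the specific fitness terms below are illustrative choices, not the thesis's GP-evolved models (assumes networkx):

```python
# Sketch: fitness evaluation for graph-model inference -- compare simple
# structural properties of a candidate model's output to a target network.
import networkx as nx

def fitness(candidate_graph, target_graph):
    """Lower is better: distance between simple network properties."""
    c, t = candidate_graph, target_graph
    return (abs(nx.average_clustering(c) - nx.average_clustering(t))
            + abs(nx.density(c) - nx.density(t))
            + abs(nx.degree_assortativity_coefficient(c)
                  - nx.degree_assortativity_coefficient(t)))

target = nx.watts_strogatz_graph(200, 6, 0.1)   # stand-in "real" network
candidate = nx.barabasi_albert_graph(200, 3)    # one candidate graph model
print(f"fitness = {fitness(candidate, target):.4f}")
```

In the automated setting, an evolutionary search would propose many candidate models and keep those whose fitness against the target network improves.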

Relevance: 10.00%

Publisher:

Abstract:

Emerging markets have received wide attention from investors around the globe because of their return potential and risk diversification. This research examines the selection and timing performance of Canadian mutual funds which invest in fixed-income and equity securities in emerging markets. We use (un)conditional two- and five-factor benchmark models that accommodate the dynamics of returns in emerging markets. We also adopt the cross-sectional bootstrap methodology to distinguish between ‘skill’ and ‘luck’ for individual funds. All the tests are conducted using a comprehensive data set of emerging-market bond and equity funds over the period 1989-2011. The risk-adjusted measures of performance are estimated using the least squares method with the Newey-West adjustment for standard errors, which is robust to conditional heteroskedasticity and autocorrelation. The performance statistics of the emerging funds before (after) management-related costs are insignificantly positive (significantly negative). They are sensitive to the chosen benchmark model, and conditional information improves selection performance. The timing statistics are largely insignificant throughout the sample period and are not sensitive to the benchmark model. Evidence of timing and selection ability is obtained for a small number of funds, and this is not sensitive to the fee structure. We also find evidence that a majority of individual funds provide zero (very few provide positive) abnormal returns before fees and significantly negative returns after fees. At the negative end of the tail of the performance distribution, our resampling tests fail to reject the role of bad luck in the poor performance of funds, and we conclude that most of them are merely ‘unlucky’.
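A minimal sketch of the kind of benchmark regression described, assuming statsmodels; the factor names and simulated returns are hypothetical placeholders, not the thesis's data or factor construction:

```python
# Sketch: estimating a fund's selection skill (alpha) against a factor
# benchmark with Newey-West (HAC) standard errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 240  # months
factors = pd.DataFrame({
    "em_equity": rng.normal(0.008, 0.05, n),  # emerging equity factor (toy)
    "em_bond":   rng.normal(0.004, 0.02, n),  # emerging bond factor (toy)
})
fund_excess = (0.001 + 0.9 * factors["em_equity"]
               + 0.2 * factors["em_bond"] + rng.normal(0, 0.01, n))

X = sm.add_constant(factors)
model = sm.OLS(fund_excess, X).fit(cov_type="HAC", cov_kwds={"maxlags": 6})
print(model.summary())  # `const` is alpha; HAC errors are robust to
                        # heteroskedasticity and autocorrelation
```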

Relevance: 10.00%

Publisher:

Abstract:

Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them. It is also common to see many different types of problems reduced to ordered-gene-style problems, since many popular heuristics and metaheuristics exist for them. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, and particularly on the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
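A minimal sketch of the ordered-gene (permutation) representation such GAs operate on, with order crossover (OX) as one classic operator; this is a generic illustration, not the Recentering-Restarting GA itself:

```python
# Sketch: permutation chromosomes and order crossover (OX), the kind of
# building block used for TSP, bin packing and similar ordered problems.
import random

def order_crossover(p1, p2):
    """OX: copy a slice from parent 1, fill remaining genes in parent 2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]                            # inherited slice
    fill = [g for g in p2 if g not in child[a:b]]   # p2's order, minus slice
    for i in list(range(0, a)) + list(range(b, n)):
        child[i] = fill.pop(0)
    return child

tour1 = list(range(8)); random.shuffle(tour1)       # e.g. TSP city orderings
tour2 = list(range(8)); random.shuffle(tour2)
print(order_crossover(tour1, tour2))                # valid permutation offspring
```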

Relevance: 10.00%

Publisher:

Abstract:

The purpose of this research was to examine the ways in which individuals with mental illness create a life of purpose, satisfaction and meaning. The data supported the identification of four common themes: (1) the power of leisure in activation, (2) the power of leisure in resiliency, (3) the power of leisure in identity and (4) the power of leisure in reducing struggle. Through an exploration of the experience of having a mental illness, this project supports the view that leisure provides therapeutic benefits that transcend negative life events. In addition, this project highlights the individual nature of recovery as a process of self-discovery. Through the creation of a visual model, this project provides a benchmark for how a small group of individuals have experienced living well with mental illness. As such, this work brings new thought to the growing body of mental health and leisure studies literature.

Relevance: 10.00%

Publisher:

Abstract:

Experimental Extended X-ray Absorption Fine Structure (EXAFS) spectra carry information about the chemical structure of metal-protein complexes. However, predicting the structure of such complexes from EXAFS spectra is not a simple task. Currently, methods such as Monte Carlo optimization or simulated annealing are used in EXAFS structure refinement. These methods have proven somewhat successful in structure refinement but have not been successful in finding the global minimum. Multiple population-based algorithms, including a genetic algorithm, a restarting genetic algorithm, differential evolution, and particle swarm optimization, are studied for their effectiveness in EXAFS structure refinement. The oxygen-evolving complex in the S1 state is used as a benchmark for comparing the algorithms. These algorithms were successful in finding new atomic structures that produced improved calculated EXAFS spectra over atomic structures previously found.
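A minimal sketch of population-based EXAFS refinement using SciPy's differential evolution; the forward model here is a toy stand-in for a real EXAFS calculator, and the shell-distance parameterization is an illustrative assumption:

```python
# Sketch: refine structural parameters by minimizing the misfit between
# a calculated and an "experimental" EXAFS spectrum with differential
# evolution. The simulate() function is a toy, not a real EXAFS code.
import numpy as np
from scipy.optimize import differential_evolution

k = np.linspace(2, 12, 200)                    # photoelectron wavenumber grid
true_r = np.array([1.8, 2.7])                  # "unknown" shell distances

def simulate(radii):                           # toy chi(k): sum of shell waves
    return sum(np.sin(2 * k * r) / r**2 for r in radii)

experimental = simulate(true_r) + np.random.default_rng(1).normal(0, 0.01, k.size)

def residual(radii):                           # fit quality of a candidate
    return np.sum((simulate(radii) - experimental) ** 2)

result = differential_evolution(residual, bounds=[(1.0, 3.5)] * 2, seed=1)
print(result.x)                                # refined shell distances
```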

Relevance: 10.00%

Publisher:

Abstract:

Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there exist cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time. This type of optimization is referred to as dynamic, multi-modal optimization. Algorithms which exploit multiple optima in a search space are identified as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, thus suggesting a lack of representation of problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and accuracy of the niching algorithms. The algorithm comparison results demonstrate which algorithms are best suited to a variety of dynamic environments. This comparison also examines each of the algorithms in terms of its niching behaviours and analyzes the range of, and trade-off between, scalability and accuracy when tuning each algorithm's respective parameters. These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.
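A minimal sketch of a moving-peaks-style dynamic, multi-modal benchmark of the sort such comparisons rely on; the peak counts, ranges and drift magnitudes are illustrative choices, not the thesis's benchmark suite:

```python
# Sketch: a tiny dynamic multi-modal landscape -- several cone-shaped
# peaks whose positions and heights drift at each environment change.
import numpy as np

rng = np.random.default_rng(42)

class MovingPeaks:
    def __init__(self, n_peaks=5, dim=2):
        self.pos = rng.uniform(0, 100, (n_peaks, dim))
        self.height = rng.uniform(30, 70, n_peaks)

    def __call__(self, x):
        """Fitness at point x: value of the nearest (dominating) cone."""
        dists = np.linalg.norm(self.pos - x, axis=1)
        return np.max(self.height - dists)

    def change(self, shift=1.0):
        """One environment change: drift peak positions and heights."""
        self.pos += rng.normal(0, shift, self.pos.shape)
        self.height += rng.normal(0, 1.0, self.height.shape)

f = MovingPeaks()
x = np.array([50.0, 50.0])
print(f(x)); f.change(); print(f(x))   # same point, new landscape
```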

Relevance: 10.00%

Publisher:

Abstract:

A mutation in a gene, a simple change in our DNA, can produce an undesirable phenotype known as a genetic disease or disorder. These small changes, which happen frequently, can have extreme results. Understanding and identifying these changes, and associating the mutated genes with genetic diseases, can play an important role in our health by enabling better diagnostic and therapeutic strategies for these genetic diseases. As a result of years of experiments, there is a vast amount of data regarding the human genome and different genetic diseases, but it still needs to be processed properly to extract useful information. This work is an effort to analyze some useful datasets and to apply different techniques to associate genes with genetic diseases. Two genetic diseases were studied here: Parkinson’s disease and breast cancer. Using genetic programming, we analyzed the complex network around known disease genes of the aforementioned diseases and, based on that, generated a ranking of genes by their relevance to these diseases. To generate these rankings, centrality measures of all nodes in the complex network surrounding the known disease genes of the given genetic disease were calculated. Using genetic programming, all the nodes were assigned scores based on the similarity of their centrality measures to those of the known disease genes. The results obtained showed that this method is successful at finding these patterns in centrality measures, and that the highly ranked genes are worthy candidate disease genes for further study. Using standard benchmark tests, we tested our approach against ENDEAVOUR and CIPHER, two well-known disease gene ranking frameworks, and obtained comparable results.
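A minimal sketch of the centrality-profile idea, with a plain distance score standing in for the GP-evolved scoring function; the network and "disease genes" below are stand-ins (assumes networkx):

```python
# Sketch: rank candidate genes by how closely their centrality profiles
# resemble those of known disease genes in an interaction network.
import networkx as nx
import numpy as np

g = nx.karate_club_graph()                     # stand-in interaction network
known_disease = [0, 33]                        # stand-in known disease genes

measures = [nx.degree_centrality(g), nx.betweenness_centrality(g),
            nx.closeness_centrality(g)]
profile = {n: np.array([m[n] for m in measures]) for n in g}

target = np.mean([profile[n] for n in known_disease], axis=0)

def score(n):
    """Higher is better: similarity to the known-disease-gene profile."""
    return -np.linalg.norm(profile[n] - target)

ranking = sorted((n for n in g if n not in known_disease),
                 key=score, reverse=True)
print(ranking[:5])                             # top candidate genes
```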

Relevance: 10.00%

Publisher:

Abstract:

The curse of dimensionality is a major problem in the fields of machine learning, data mining and knowledge discovery. Exhaustive search for the optimal subset of relevant features in a high-dimensional dataset is NP-hard. Sub-optimal, population-based stochastic algorithms such as GP and GA are good choices for searching through large search spaces, and are usually more feasible than exhaustive and deterministic search algorithms. On the other hand, population-based stochastic algorithms often suffer from premature convergence on mediocre sub-optimal solutions. The Age Layered Population Structure (ALPS) is a novel metaheuristic for overcoming the problem of premature convergence in evolutionary algorithms and for improving search in the fitness landscape. The ALPS paradigm uses an age measure to control breeding and competition between individuals in the population. This thesis uses a modification of the ALPS GP strategy called Feature Selection ALPS (FSALPS) for feature subset selection and classification in varied supervised learning tasks. FSALPS uses a novel frequency-count system to rank features in the GP population based on evolved feature frequencies. The ranked features are translated into probabilities, which are used to control evolutionary processes such as terminal-symbol selection for the construction of GP trees/sub-trees. The FSALPS metaheuristic continuously refines the feature subset selection process while simultaneously evolving efficient classifiers through a non-converging evolutionary process that favours selection of features with high discrimination of class labels. We investigated and compared the performance of canonical GP, ALPS and FSALPS on high-dimensional benchmark classification datasets, including a hyperspectral image. Using Tukey's HSD ANOVA test at a 95% confidence interval, ALPS and FSALPS dominated canonical GP in evolving smaller but efficient trees with fewer bloated expressions. FSALPS significantly outperformed canonical GP, ALPS and some feature selection strategies reported in the related literature on dimensionality reduction.
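A minimal sketch of the frequency-count idea: count feature-terminal occurrences across the GP population, convert the counts to probabilities, and bias terminal selection accordingly. The toy population and helper below are hypothetical illustrations, not the FSALPS implementation:

```python
# Sketch: FSALPS-style feature-frequency ranking driving biased
# terminal-symbol selection for new GP trees/sub-trees.
from collections import Counter
import random

population = [                                  # flattened GP trees (toy)
    ["x3", "add", "x1", "x3"],
    ["x3", "mul", "x0"],
    ["x1", "x3", "sub", "x1"],
]
features = ["x0", "x1", "x2", "x3"]

counts = Counter(t for tree in population for t in tree if t in features)
total = sum(counts[f] for f in features) or 1
probs = [counts[f] / total for f in features]   # frequencies -> probabilities

def pick_terminal():
    """Biased terminal choice: frequent features are selected more often."""
    return random.choices(features, weights=probs, k=1)[0]

print(dict(zip(features, probs)), pick_terminal())
```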
