958 results for A* search algorithm


Relevance: 20.00%

Abstract:

This study examines the efficiency of search engine advertising strategies employed by firms. The research setting is the online retailing industry, which is characterized by extensive use of Web technologies and high competition for market share and profitability. For Internet retailers, search engines increasingly serve as an information gateway for many decision-making tasks. In particular, search engine advertising (SEA) has opened a new marketing channel for retailers to attract new customers and improve their performance. In addition to natural (organic) search marketing strategies, search engine advertisers compete for top advertisement slots provided by search brokers such as Google and Yahoo! through keyword auctions. The rationale is that greater visibility on a search engine during a keyword search will capture customers' interest in a business and its product or service offerings. Search engines mediate a large share of online activity today. Compared with the slow growth of traditional marketing channels, online search volumes continue to grow at a steady rate. According to the Search Engine Marketing Professional Organization, spending on search engine marketing by North American firms in 2008 was estimated at $13.5 billion. Despite the significant role SEA plays in Web retailing, scholarly research on the topic is limited. Prior studies in SEA have focused on search engine auction mechanism design. In contrast, research on the business value of SEA has been limited by the lack of empirical data on search advertising practices. Recent advances in search and retail technologies have created data-rich environments that enable new research opportunities at the interface of marketing and information technology. This research uses extensive data from Web retailing and Google-based search advertising and evaluates Web retailers' use of resources, search advertising techniques, and other relevant factors that contribute to business performance across different metrics. The methods used include Data Envelopment Analysis (DEA), data mining, and multivariate statistics. This research contributes to the empirical literature by analyzing several Web retail firms in different industry sectors and product categories. One of the key findings is that the dynamics of sponsored search advertising vary between multi-channel and Web-only retailers. While the key performance metrics for multi-channel retailers include measures such as online sales, conversion rate (CR), click-through rate (CTR), and impressions, the key performance metrics for Web-only retailers focus on organic and sponsored ad ranks. These results provide a useful contribution to our organizational-level understanding of search engine advertising strategies for both multi-channel and Web-only retailers. They also contribute to current knowledge of technology-driven marketing strategies and give managers a better understanding of sponsored search advertising and its impact on various performance metrics in Web retailing.
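
To make the DEA step concrete, here is a minimal sketch of an input-oriented CCR efficiency model solved as a linear program. The tiny dataset, the input/output labels, and the function name are illustrative assumptions, not the thesis data.

```python
# Minimal input-oriented CCR DEA model (illustrative data, not the
# thesis dataset): each retailer's efficiency is the smallest factor
# theta by which its inputs could be scaled while a convex combination
# of all retailers still matches its outputs.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0],     # input 1, e.g., ad spend, per retailer
              [1.0, 2.0, 1.5]])    # input 2, e.g., impressions bought
Y = np.array([[10.0, 12.0, 9.0]])  # output, e.g., online sales

def ccr_efficiency(o):
    """CCR efficiency score of retailer o (hypothetical helper)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]          # minimize theta
    A_in = np.c_[-X[:, o], X]            # sum_j lam_j X[i,j] <= theta X[i,o]
    A_out = np.c_[np.zeros(s), -Y]       # sum_j lam_j Y[r,j] >= Y[r,o]
    A = np.r_[A_in, A_out]
    b = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(X.shape[1]):
    print(f"retailer {o}: efficiency = {ccr_efficiency(o):.3f}")
```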

Relevance: 20.00%

Abstract:

The RoboCup Rescue Simulation System (RCRSS) is a dynamic multi-agent system simulating a large-scale urban disaster scenario. Teams of rescue agents are charged with minimizing civilian casualties and infrastructure damage while competing against limitations on time, communication, and awareness. This thesis provides the first known attempt at applying Genetic Programming (GP) to the development of behaviours necessary to perform well in the RCRSS. Specifically, this thesis studies the suitability of GP to evolve the operational behaviours required of each type of rescue agent in the RCRSS. The system developed is evaluated in terms of how consistently it converges to expected solutions, as well as by comparison to previous competition results. The results indicate that GP is capable of converging to some forms of expected behaviour, but that additional evolution of strategizing behaviours must be performed in order to become competitive. An enhancement to the standard GP algorithm is proposed and shown to simplify the initial search space, allowing evolution to occur much more quickly. In addition, two forms of population are employed and compared in terms of their apparent effects on the evolution of control structures for intelligent rescue agents; both are sketched below. The first is a single population in which each individual comprises three distinct trees for the respective control of the three types of agents; the second is a set of three co-evolving subpopulations, one for each type of agent. Multiple populations of cooperating individuals appear to achieve higher proficiencies in training, but testing on unseen instances raises the issue of overfitting.
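
The two population schemes compared can be illustrated with a minimal sketch; the tree representation and the agent behaviours here are placeholder assumptions, not the thesis implementation.

```python
# Sketch of the two schemes: (1) one population whose individuals each
# carry three trees, vs. (2) three co-evolving subpopulations, one per
# rescue-agent type. Trees are stubbed as nested lists.
import random

AGENT_TYPES = ["ambulance", "fire_brigade", "police"]

def random_tree(depth=3):
    """Placeholder GP expression tree (hypothetical primitives)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["nearest_target", "idle", "move_random"])
    return [random.choice(["if_blocked", "seq"]),
            random_tree(depth - 1), random_tree(depth - 1)]

# Scheme 1: each individual controls all three agent types.
single_pop = [{t: random_tree() for t in AGENT_TYPES} for _ in range(50)]

# Scheme 2: one subpopulation per agent type; an evaluation team is
# assembled by drawing one tree from each subpopulation.
subpops = {t: [random_tree() for _ in range(50)] for t in AGENT_TYPES}
team = {t: random.choice(subpops[t]) for t in AGENT_TYPES}
print(team)
```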

Relevance: 20.00%

Abstract:

Given the significant growth of the Internet in recent years, marketers have been striving for new techniques and strategies to prosper in the online world. Statistically, search engines have been the most dominant channels of Internet marketing in recent years. However, the mechanics of advertising in such a marketplace have created a challenging environment for marketers trying to position their ads among their competitors. This study uses a unique cross-sectional dataset of the top 500 Internet retailers in North America and hierarchical multiple regression analysis to empirically investigate the effect of keyword competition on the relationship between ad position and its determinants in the sponsored search market. To this end, the study draws on the literature in consumer search behavior, keyword auction mechanism design, and search advertising performance as its theoretical foundation. This study is the first of its kind to examine sponsored search market characteristics in a cross-sectional setting where the level of keyword competition is explicitly captured in terms of the number of Internet retailers competing for similar keywords. Internet retailing provides an appropriate setting for this study given the high-stakes battle for market share and the intense competition for keywords in the sponsored search marketplace. The findings indicate that bid values and ad relevancy metrics, as well as their interaction, affect the position of ads on search engine result pages (SERPs). These results confirm some of the findings from previous studies that examined sponsored search advertising performance at the keyword level. Furthermore, the study finds that the position of ads for Web-only retailers depends on both bid values and ad relevancy metrics, whereas multi-channel retailers rely more heavily on their bid values. This difference between Web-only and multi-channel retailers is also observed in the moderating effect of keyword competition on the relationships between ad position and its key determinants. Specifically, this study finds that keyword competition has significant moderating effects only for multi-channel retailers.
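
As a hedged sketch of the analysis style described (hierarchical regression with a moderation term), the following uses synthetic data; the variable names are hypothetical stand-ins for the study's measures.

```python
# Blockwise (hierarchical) OLS with a moderator: step 1 enters the main
# effects, step 2 adds keyword competition and its interactions; the
# R-squared change between steps is the usual moderation test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ad_position": rng.normal(5, 2, 500),   # hypothetical variables
    "bid": rng.normal(1.0, 0.3, 500),
    "relevancy": rng.normal(0.5, 0.1, 500),
    "competition": rng.integers(1, 50, 500),
})

m1 = smf.ols("ad_position ~ bid + relevancy", data=df).fit()
m2 = smf.ols("ad_position ~ bid * competition + relevancy * competition",
             data=df).fit()
print(f"R2 step 1: {m1.rsquared:.3f}, R2 step 2: {m2.rsquared:.3f}")
```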

Relevance: 20.00%

Abstract:

Self-dual doubly even linear binary error-correcting codes, often referred to as Type II codes, are closely related to many combinatorial structures such as 5-designs. Extremal codes are codes that have the largest possible minimum distance for a given length and dimension. The existence of an extremal (72, 36, 16) Type II code is still open. Previous results show that the automorphism group of a putative code C with the aforementioned properties has order 5 or order dividing 24. In this work, we present a method and the results of an exhaustive search showing that such a code C cannot admit an automorphism group isomorphic to Z6. In addition, we present a so-far-unpublished construction of the extended Golay code due to P. Becker. We generalize the notion and provide an example of another Type II code that can be obtained in this fashion. Consequently, we relate Becker's construction to the construction of binary Type II codes from codes over GF(2^r) via the Gray map.
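
For context, the extremality referred to is the Mallows-Sloane bound on the minimum distance d of a Type II code of length n; a (72, 36, 16) code would meet it with equality:

```latex
d \le 4\left\lfloor \frac{n}{24} \right\rfloor + 4,
\qquad\text{so for } n = 72:\quad d \le 4 \cdot 3 + 4 = 16.
```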

Relevance: 20.00%

Abstract:

Understanding the machinery of gene regulation that controls gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several suggested objectives in terms of the motifs they find, test different multi-objective scoring schemes and probabilistic models of the background sequences, and report our results on a synthetic dataset and several biological benchmarking suites. We conclude with a comparison of our algorithm against some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
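
The core comparison in any multi-objective GA selection step is Pareto dominance, which a short illustrative snippet (not the thesis code) makes concrete:

```python
# Pareto dominance for maximized score vectors: a dominates b if it is
# at least as good in every objective and strictly better in one.
def dominates(a, b):
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

# e.g., hypothetical motif scores (information content, rarity under
# the background model):
print(dominates((1.8, 0.9), (1.5, 0.9)))  # True
print(dominates((1.8, 0.7), (1.5, 0.9)))  # False: incomparable
```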

Relevance: 20.00%

Abstract:

Responding to a series of articles in the sport management literature calling for more diversity in areas of interest and methods, this study warns against the danger of excessively fragmenting this field of research. The works of Kuhn (1962) and Pfeffer (1993) are taken as the basis of an argument connecting convergence with scientific strength. However, in light of the large number of counterarguments directed at this line of reasoning, a new model of convergence is proposed, one that focuses on clusters of research contributions with similar areas of interest, methods, and concepts. The existence of these clusters is determined with the help of a bibliometric analysis of publications in three sport management journals. This examination finds justified reasons for concern about the level of convergence in the field, pointing to a reduced ability to create large clusters of contributions in similar areas of interest.

Relevance: 20.00%

Abstract:

Please consult the paper edition of this thesis, available on the 5th Floor of the Library at Call Number: Z 9999 P65 Y68 1995.

Relevance: 20.00%

Abstract:

The primary objective of this research project was to identify prostate cancer (PCa)-specific biomarkers from urine. This was done using a multi-faceted approach that targeted (1) the genome (DNA); (2) the transcriptome (mRNA and miRNA); and (3) the proteome. Toward this end, urine samples were collected from ten healthy individuals, eight men with PCa, and twelve men with enlarged, non-cancerous prostates (benign prostatic hyperplasia, BPH). Urine samples were also collected from the same PCa and BPH patients as part of a two-year follow-up. Initially, urinary nucleic acids and proteins were assessed both qualitatively and quantitatively for characteristics either unique to or common among the groups. Subsequently, macromolecules were pooled within each group and assessed for either protein composition via LC-MS/MS or microRNA (miRNA) expression by microarray. A number of potential candidates, including miRNAs, were identified as being deregulated in either pooled PCa or pooled BPH with respect to the healthy control group. Candidate biomarkers were then assessed among individual samples to validate their utility in diagnosing PCa and/or differentiating PCa from BPH. A number of potential targets, including deregulation of miR-1825 and miR-484 and of the mRNAs for Fibronectin and Tumor Protein 53 Inducible Nuclear Protein 2 (TP53INP2), appeared to be indicative of PCa. Furthermore, deregulation of miR-498 appeared to be indicative of BPH. The sensitivities and specificities associated with using deregulation of many of these targets to predict PCa or BPH were also determined. This research project has identified a number of potential targets, detectable in urine, which merit further investigation toward the accurate identification of PCa and its discrimination from BPH. The significance of this work is amplified by the non-invasive nature of the sample source, urine, from which these candidates were derived. Many cancer biomarker discovery studies have tended to focus primarily on blood (plasma or serum) and/or tissue samples; this is one of the first PCa biomarker studies to focus exclusively on urine as a sample source.
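
For reference, the sensitivity and specificity reported for each candidate marker are the standard diagnostic rates, defined over true/false positives and negatives:

```latex
\text{Sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{Specificity} = \frac{TN}{TN + FP}.
```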

Relevance: 20.00%

Abstract:

Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important and frequently encountered problem is to limit diffusive processes spreading over a network, for example, mitigating the spread of pandemic disease or computer viruses. A number of problem formulations have been proposed based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity (see the objective below). Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large-scale instances impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks containing thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing vertices sequentially changes the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments consider a range of common complex network models with varying numbers of vertices, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
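
The pairwise-connectivity objective referred to above counts the vertex pairs that remain connected after the chosen set S is removed, summed over the connected components C_i of the residual graph:

```latex
f(G \setminus S) \;=\; \sum_{C_i \subseteq G \setminus S} \binom{|C_i|}{2}.
```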

Relevance: 20.00%

Abstract:

DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and other small genomes; however, assembling large and complex genomes, especially the human genome, from Next-Generation Sequencing (NGS) data has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage, and tools that are not specifically built for human genomes. Moreover, many algorithms do not even scale to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding, and contig orientation, each of which can be seen as a distinct research area. This thesis specifically focuses on creating contigs from the short reads and combining them with outputs from other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08], and Meraculous [CHS+11], are selected for comparative purposes. The obtained results show that this thesis' work produces results comparable to other assemblers, and that combining our contigs with outputs from other tools produces the best results, outperforming all other investigated assemblers.
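
For intuition, contig creation in de Bruijn-graph assemblers such as Velvet and SOAPdenovo amounts to extending unambiguous paths through a k-mer graph; the toy walk below is an illustrative sketch, not the thesis pipeline.

```python
# Toy de Bruijn contig builder: nodes are (k-1)-mers, edges come from
# k-mers in the reads; contigs are maximal unambiguous paths.
from collections import defaultdict

def contigs(reads, k=4):
    succ = defaultdict(set)
    indeg = defaultdict(int)
    for r in reads:
        for i in range(len(r) - k + 1):
            a, b = r[i:i + k - 1], r[i + 1:i + k]
            if b not in succ[a]:
                succ[a].add(b)
                indeg[b] += 1
    out = []
    for start in list(succ):
        if len(succ[start]) == 1 and indeg[start] != 1:  # path start
            contig, node = start, next(iter(succ[start]))
            while len(succ.get(node, ())) == 1 and indeg[node] == 1:
                contig += node[-1]
                node = next(iter(succ[node]))
            out.append(contig + node[-1])
    return out

print(contigs(["ATGGCGT", "GGCGTGC", "GTGCAAT"]))  # ['ATGGCGTGCAAT']
```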

Relevance: 20.00%

Abstract:

Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them, and many other types of problems are commonly reduced to ordered gene form so that these popular heuristics and metaheuristics can be applied. Multiple ordered gene problems are studied here, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics and with two distinct representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
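
The ordered-gene (permutation) representation underlying these problems can be illustrated with a generic order crossover (OX); this is standard GA machinery shown for illustration, not the thesis code.

```python
# Order crossover: copy a random slice from parent 1, then fill the
# remaining positions with the missing genes in parent 2's order, so
# the child is always a valid permutation (e.g., a TSP tour).
import random

def order_crossover(p1, p2):
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

tour1, tour2 = [0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 2, 4]
print(order_crossover(tour1, tour2))
```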

Relevance: 20.00%

Abstract:

Elementary teachers are expected to prepare students to work efficiently with others, solve complex problems, and self-regulate their own learning. Considering the importance of a solid educational foundation in the early years, students would benefit if elementary teachers engaged in scholarly teaching. The purpose of this study was to investigate Boyer's (1990) four dimensions of scholarship (application, integration, teaching, and discovery) to better understand whether there is scholarly teaching in elementary education. Four professional teaching documents were analyzed using a hermeneutic orientation. A deductive analysis suggests that we do have scholarly teaching in elementary education, with strong evidence that elementary teachers are scholars of application and integration. An inductive analysis of latent and manifest content suggests that underlying humanistic values run deeply through elementary education, driving current curricular, instructional, and pedagogical practices.

Relevance: 20.00%

Abstract:

Understanding the relationship between genetic diseases and the genes associated with them is an important problem for human health. The vast amount of data created by the many high-throughput experiments performed in the last few years has resulted in unprecedented growth in computational methods to tackle the disease gene association problem. It is now clear that a genetic disease is not the consequence of a defect in a single gene; instead, the disease phenotype reflects various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in one or several biological modules. Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Starting from a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all of them. All the other genes inside the procured PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes strongly interacting with one another and with the set of known disease genes; a toy version of this scoring idea is sketched below. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR, and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
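
The community-scoring idea mentioned above can be sketched as a density-style fitness over a toy PPI edge list; the gene names and the exact fitness are illustrative assumptions, not the thesis formulation.

```python
# Toy fitness for a candidate community: the fraction of possible
# edges present among the community plus the known disease genes.
ppi = {("BRCA1", "TP53"), ("TP53", "ATM"), ("ATM", "CHEK2"),
       ("CHEK2", "BRCA2"), ("BRCA1", "BRCA2")}  # toy edge list
known = {"BRCA1", "BRCA2"}                       # known disease genes

def edge(a, b):
    return (a, b) in ppi or (b, a) in ppi

def fitness(community):
    nodes = community | known
    pairs = [(a, b) for a in nodes for b in nodes if a < b]
    return sum(edge(a, b) for a, b in pairs) / len(pairs)

print(fitness({"TP53", "CHEK2"}))  # 0.5 on this toy network
```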

Relevance: 20.00%

Abstract:

Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there are cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time; this type of optimization is referred to as dynamic, multi-modal optimization. Algorithms that exploit multiple optima in a search space are known as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely by their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, suggesting a lack of representation of problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and the accuracy of the niching algorithms. The algorithm comparison results demonstrate which algorithms are best suited to a variety of dynamic environments. The comparison also examines each algorithm's niching behaviour and analyzes the range of, and trade-off between, scalability and accuracy when tuning each algorithm's respective parameters. These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.
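
For reference, the canonical PSO update that these niching variants build on is, for particle i with position x_i, velocity v_i, personal best p_i, and neighbourhood best g (which niching methods typically localize to a niche), inertia weight w, acceleration coefficients c_1, c_2, and uniform random r_1, r_2 in [0, 1]:

```latex
v_i \leftarrow w\,v_i + c_1 r_1\,(p_i - x_i) + c_2 r_2\,(g - x_i),
\qquad
x_i \leftarrow x_i + v_i.
```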