9 results for Local Computer Network
at Brock University, Canada
Abstract:
A number of frameworks have been suggested for online retailing, but there is still little consensus among researchers and practitioners about how much information is critical and essential to improving customers' satisfaction and their purchase intention. Against this backdrop, this study contributes to current practical and theoretical discussions about how information search and perceived risk theories can be applied to the management of online retailer website features. This paper examines the moderating role of website personalization in the relationship between the information content provided on top US retailers' websites and customer satisfaction and purchase intention. The study also explores the role played by customer satisfaction and purchase intention in the relationship between information that is personalized to the needs of individual customers and online retailers' sales performance. Results indicate that the extent of information content features presented to online customers is not, on its own, enough for companies looking to satisfy and motivate customers to purchase. However, information that is targeted to an individual customer influences customer satisfaction and purchase intention, and customer satisfaction in turn serves as a driver of the retailer's online sales performance.
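To make the moderation logic concrete, the following is a minimal Python sketch of a moderated regression in which personalization moderates the effect of information content on satisfaction. The data and variable names are simulated and hypothetical, not the authors' dataset or model; a significant interaction coefficient is what would indicate moderation.

```python
# Illustrative sketch (not the authors' model): website personalization
# moderating the effect of information content on customer satisfaction.
# All data below are simulated with hypothetical variable names.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
info_content = rng.normal(size=n)        # extent of information content features
personalization = rng.normal(size=n)     # degree of website personalization
satisfaction = (0.1 * info_content
                + 0.4 * personalization
                + 0.5 * info_content * personalization   # the moderation effect
                + rng.normal(scale=0.5, size=n))

X = np.column_stack([info_content, personalization,
                     info_content * personalization])
X = sm.add_constant(X)
model = sm.OLS(satisfaction, X).fit()
print(model.summary())   # a significant interaction term indicates moderation
```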
Abstract:
This study examines the efficiency of search engine advertising strategies employed by firms. The research setting is the online retailing industry, which is characterized by extensive use of Web technologies and high competition for market share and profitability. For Internet retailers, search engines are increasingly serving as an information gateway for many decision-making tasks. In particular, search engine advertising (SEA) has opened a new marketing channel for retailers to attract new customers and improve their performance. In addition to natural (organic) search marketing strategies, search engine advertisers compete for top advertisement slots provided by search brokers such as Google and Yahoo! through keyword auctions. The rationale is that greater visibility on a search engine during a keyword search will capture customers' interest in a business and its product or service offerings. Search engines account for most online activities today. Compared with the slow growth of traditional marketing channels, online search volumes continue to grow at a steady rate. According to the Search Engine Marketing Professional Organization, spending on search engine marketing by North American firms in 2008 was estimated at $13.5 billion. Despite the significant role SEA plays in Web retailing, scholarly research on the topic is limited. Prior studies in SEA have focused on search engine auction mechanism design. In contrast, research on the business value of SEA has been limited by the lack of empirical data on search advertising practices. Recent advances in search and retail technologies have created data-rich environments that enable new research opportunities at the interface of marketing and information technology. This research uses extensive data from Web retailing and Google-based search advertising and evaluates Web retailers' use of resources, search advertising techniques, and other relevant factors that contribute to business performance across different metrics. The methods used include Data Envelopment Analysis (DEA), data mining, and multivariate statistics. This research contributes to empirical research by analyzing several Web retail firms in different industry sectors and product categories. One of the key findings is that the dynamics of sponsored search advertising vary between multi-channel and Web-only retailers. While the key performance metrics for multi-channel retailers include measures such as online sales, conversion rate (CR), click-through rate (CTR), and impressions, the key performance metrics for Web-only retailers focus on organic and sponsored ad ranks. These results provide a useful contribution to our organizational-level understanding of search engine advertising strategies, both for multi-channel and Web-only retailers. They also contribute to current knowledge in technology-driven marketing strategies and provide managers with a better understanding of sponsored search advertising and its impact on various performance metrics in Web retailing.
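As an illustration of the efficiency analysis named above, the sketch below solves a minimal input-oriented CCR DEA model as a linear program. The retailers (DMUs), inputs, and outputs are hypothetical placeholders, not the thesis's data or exact formulation.

```python
# A minimal input-oriented CCR DEA sketch (illustrative only). Each retailer
# (DMU) converts inputs such as ad spend into outputs such as online sales;
# an efficiency score of 1 means the DMU lies on the efficient frontier.
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = DMUs, columns = inputs / outputs.
inputs = np.array([[5.0, 3.0], [8.0, 1.0], [6.0, 4.0]])   # e.g. ad spend, impressions bought
outputs = np.array([[10.0], [12.0], [9.0]])                # e.g. online sales

def ccr_efficiency(inputs, outputs, dmu):
    """Input-oriented CCR efficiency of one DMU via the envelopment LP."""
    n, m = inputs.shape          # number of DMUs, number of inputs
    s = outputs.shape[1]         # number of outputs
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(m):           # sum_j lambda_j * x_ij <= theta * x_i,dmu
        A_ub.append(np.r_[-inputs[dmu, i], inputs[:, i]])
        b_ub.append(0.0)
    for r in range(s):           # sum_j lambda_j * y_rj >= y_r,dmu
        A_ub.append(np.r_[0.0, -outputs[:, r]])
        b_ub.append(-outputs[dmu, r])
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.fun               # efficiency score in (0, 1]

for j in range(len(inputs)):
    print(f"DMU {j}: efficiency = {ccr_efficiency(inputs, outputs, j):.3f}")
```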
Abstract:
This work consists of a theoretical part and an experimental one. The first part provides a simple treatment of the celebrated von Neumann minimax theorem as formulated by Nikaidô and Sion. It also discusses its relationships with fundamental theorems of convex analysis. The second part is about externality in sponsored search auctions. It shows that in these auctions, advertisers have externality effects on each other which influence their bidding behavior. It presents Hal R. Varian's model and shows how adding externality to this model affects its properties. In order to gain a better understanding of the interaction among advertisers in online auctions, it studies the structure of the Google advertisement network and shows that it is a small-world, scale-free network.
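For readers unfamiliar with sponsored search auctions, the following is a minimal sketch of a generalized second-price (GSP) position auction in the spirit of Varian's model. The bids, values, and slot click-through rates are hypothetical, and the externality extension studied in the thesis is not reproduced here.

```python
# A minimal GSP position-auction sketch (illustrative only). Slots have
# decreasing click-through rates, bidders are ranked by bid, and each winner
# pays the next-highest bid per click.
def gsp_outcome(bids, values, ctrs):
    """bids/values: per-bidder bid and value per click; ctrs: per-slot CTRs."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    results = []
    for slot, ctr in enumerate(ctrs):
        winner = order[slot]
        # price per click = next-highest bid (0 if no lower bidder remains)
        price = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
        payoff = ctr * (values[winner] - price)
        results.append((winner, ctr, price, payoff))
    return results

bids = [4.0, 3.0, 2.0]      # hypothetical bids per click
values = [5.0, 4.0, 2.5]    # hypothetical private values per click
ctrs = [0.3, 0.15]          # two ad slots with decreasing CTRs
for winner, ctr, price, payoff in gsp_outcome(bids, values, ctrs):
    print(f"bidder {winner}: slot CTR {ctr}, pays {price}/click, payoff {payoff:.2f}")
```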
Abstract:
A feature-based fitness function is applied in a genetic programming system to synthesize stochastic gene regulatory network models whose behaviour is defined by a time course of protein expression levels. Typically, when targeting time series data, the fitness function is based on a sum of errors involving the values of the fluctuating signal. While this approach is successful in many instances, its performance can deteriorate in the presence of noise. This thesis explores a fitness measure determined from a set of statistical features characterizing the time series' sequence of values, rather than the actual values themselves. Through a series of experiments involving symbolic regression with added noise and gene regulatory network models based on the stochastic π-calculus, it is shown to successfully target oscillating and non-oscillating signals. This practical and versatile fitness function offers an alternate approach, worthy of consideration for use in algorithms that evaluate noisy or stochastic behaviour.
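The sketch below illustrates the idea of a feature-based fitness measure using a hypothetical feature set (mean, spread, lag-1 autocorrelation, and a zero-crossing count as an oscillation proxy); the thesis's actual features and GP system are not reproduced.

```python
# A sketch of a feature-based fitness measure (hypothetical feature set).
# Instead of a point-by-point sum of errors, candidate and target series are
# compared through summary statistics, which is less sensitive to noise.
import numpy as np

def features(series):
    x = np.asarray(series, dtype=float)
    diffs = np.diff(x)
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]                   # lag-1 autocorrelation
    crossings = np.sum(np.diff(np.sign(x - x.mean())) != 0)   # oscillation proxy
    return np.array([x.mean(), x.std(), diffs.std(), lag1, crossings])

def feature_fitness(candidate, target):
    """Smaller is better: distance between feature vectors, scale-normalized."""
    f_c, f_t = features(candidate), features(target)
    return float(np.sum(((f_c - f_t) / (np.abs(f_t) + 1e-9)) ** 2))

t = np.linspace(0, 10, 200)
target = np.sin(2 * np.pi * t)
noisy_candidate = target + np.random.normal(scale=0.2, size=t.size)
print("pointwise SSE:        ", np.sum((noisy_candidate - target) ** 2))
print("feature-based fitness:", feature_fitness(noisy_candidate, target))
```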
Abstract:
The main focus of this thesis is to evaluate and compare the Hyperball learning algorithm (HBL) to other learning algorithms. In this work, HBL is compared to feedforward artificial neural networks using backpropagation learning, K-nearest neighbour, and ID3 algorithms. In order to evaluate the similarity of these algorithms, we carried out three experiments using nine benchmark data sets from the UCI Machine Learning Repository. The first experiment compares HBL to the other algorithms as the sample size of the dataset changes. The second experiment compares HBL to the other algorithms as the dimensionality of the data changes. The last experiment compares HBL to the other algorithms according to the level of agreement with the data target values. In general, our observations showed that, using classification accuracy as the measure, HBL performs as well as most ANN variants. Additionally, we deduced that HBL's classification accuracy outperforms ID3's and K-nearest neighbour's for the selected data sets.
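A minimal comparison harness for the baseline learners named above is sketched below using scikit-learn: a decision tree with entropy splitting stands in for ID3, and HBL is omitted because no widely available implementation exists. The dataset and training fractions are placeholders for the UCI benchmarks.

```python
# Illustrative baseline comparison as the training sample size varies.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)   # placeholder for a UCI benchmark data set
models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "ANN (backprop)": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0),
    "Decision tree (ID3-like)": DecisionTreeClassifier(criterion="entropy",
                                                       random_state=0),
}

# Vary the training sample size, as in the first experiment described above.
for train_frac in (0.2, 0.5, 0.8):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, stratify=y, random_state=0)
    scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
              for name, m in models.items()}
    print(train_frac, scores)
```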
Abstract:
While service-learning is often said to be beneficial for all those involved—students, community members, higher education institutions, and faculty members—there are relatively few studies of the attraction to, and effect of, service-learning on faculty members. Existing studies have tended to use a survey design, and to be based in the United States. There is a lack of information on faculty experiences with service-learning in Ontario or Canada. This qualitative case study of faculty experiences with service-learning was framed through an Appreciative Inquiry social constructionist approach. The data were drawn from interviews with 18 faculty members who belong to a Food Security Research Network (FSRN) at a university in northern Ontario, reports submitted by the network, and personal observation of a selection of network-related events. This dissertation study revealed how involvement with service-learning created opportunities for faculty learning and growth. The focus on food security and a commitment to the sustainability of local food production was found to be an ongoing attraction to service-learning and a means to engage in and integrate research and teaching on matters of personal and professional importance to these faculty members. The dissertation concludes with a discussion of the FSRN’s model and the perceived value of a themed, transdisciplinary approach to service-learning. This study highlights promising practices for involving faculty in service-learning and, in keeping with an Appreciative Inquiry approach, depicts a view of faculty work at its best.
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spread over the network, for example mitigating pandemic disease or computer virus spread. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large scale problems impossible to solve with traditional mathematical programming techniques. Many approximation strategies, such as dynamic programming and evolutionary algorithms, are likewise unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing a vertex in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
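The sketch below illustrates the objective being minimized, the pairwise connectivity of the residual network, together with a simple degree-based removal heuristic that serves purely as a stand-in; it is not the DFSH algorithm proposed in the thesis.

```python
# Pairwise connectivity of the residual graph after removing a small vertex
# set, with a naive degree-based greedy removal as a placeholder heuristic.
import networkx as nx

def pairwise_connectivity(G):
    """Number of connected vertex pairs, summed over connected components."""
    return sum(n * (n - 1) // 2
               for n in (len(c) for c in nx.connected_components(G)))

def greedy_remove(G, k):
    """Remove k vertices, repeatedly deleting the current highest-degree vertex."""
    H = G.copy()
    removed = []
    for _ in range(k):
        v = max(H.degree, key=lambda item: item[1])[0]
        H.remove_node(v)
        removed.append(v)
    return H, removed

G = nx.barabasi_albert_graph(200, 2, seed=1)      # a scale-free test network
print("pairwise connectivity before:", pairwise_connectivity(G))
H, removed = greedy_remove(G, 10)
print("after removing", removed, ":", pairwise_connectivity(H))
```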
Abstract:
A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network whereby the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task as the graph models are often stochastic in nature and the notion of similarity is dependent upon the expected behavior of the network. This thesis examines a number of well-known network properties to determine which measures best allowed networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exemplified functional similarity when compared to their respective target networks. This approach also showed promise when used to infer a model for a mammalian brain network.
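As an illustration of using network measures to compare generated graphs against a target, the sketch below computes a few common measures with networkx and forms a crude dissimilarity score; the measures, models, and score are placeholders, not the thesis's meta-analysis procedure or fitness evaluation.

```python
# Comparing candidate graph models to a target network via simple measures.
import networkx as nx

def measure(G):
    largest = G.subgraph(max(nx.connected_components(G), key=len))
    degrees = [d for _, d in G.degree()]
    return {
        "avg_degree": sum(degrees) / len(degrees),
        "avg_clustering": nx.average_clustering(G),
        "avg_path_length": nx.average_shortest_path_length(largest),
    }

target = nx.watts_strogatz_graph(500, 6, 0.1, seed=0)   # stand-in "real" network
candidates = {
    "ER": nx.gnm_random_graph(500, target.number_of_edges(), seed=0),
    "BA": nx.barabasi_albert_graph(500, 3, seed=0),
}
t = measure(target)
for name, G in candidates.items():
    m = measure(G)
    # crude dissimilarity: sum of relative differences over the measures
    score = sum(abs(m[k] - t[k]) / abs(t[k]) for k in t)
    print(name, m, "dissimilarity:", round(score, 3))
```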
Abstract:
Evolving antimicrobial resistance, coupled with a recent increase in incidence, highlights the importance of reducing gonococcal transmission. Establishing novel risk factors associated with gonorrhea facilitates the development of appropriate prevention and disease control strategies. Sexual Network Analysis (NA), a novel research technique used to further understand sexually transmitted infections, was used to identify network-based risk factors in a defined region in Ontario, Canada experiencing an increase in the incidence of gonorrhea. Linear network structures were identified as important reservoirs of gonococcal transmission. Additionally, a significant association between a central network position and gonorrhea was observed. The central participants were more likely to be younger, report a greater number of risk factors, engage in anonymous sex, have had multiple sex partners in the past six months, and have same-sex partners. The network-based risk factors identified through sexual NA, which serves as a method of analyzing local surveillance data, support the development of strategies aimed at reducing gonococcal spread.
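The following is an illustrative sketch of the kind of centrality comparison described above, using a simulated contact network and a hypothetical infection flag rather than the study's surveillance data.

```python
# Comparing an attribute (hypothetical infection status) between central and
# peripheral vertices of a simulated contact network.
import networkx as nx
import random

random.seed(0)
G = nx.barabasi_albert_graph(100, 2, seed=0)          # stand-in contact network
infected = {v: random.random() < 0.2 for v in G}      # hypothetical case status

centrality = nx.betweenness_centrality(G)
threshold = sorted(centrality.values(), reverse=True)[len(G) // 4]  # top quartile
central = [v for v in G if centrality[v] >= threshold]
peripheral = [v for v in G if centrality[v] < threshold]

rate = lambda group: sum(infected[v] for v in group) / len(group)
print("infection rate, central vertices:   ", round(rate(central), 3))
print("infection rate, peripheral vertices:", round(rate(peripheral), 3))
```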