11 results for Markets in electronic commerce

in Digital Commons at Florida International University


Relevance: 100.00%

Abstract:

The purpose of this study was to empirically investigate the adoption of retail electronic commerce (REC). REC is a business transaction that takes place over the Internet between a casual consumer and a firm. The consumer has no long-term relationship with the firm, orders a good or service, and pays with a credit card. To date, most REC applications have not been profitable. To build profitable REC applications, a better understanding of the system's users is required. The research model hypothesizes that the level of REC buying depends on the Buying Characteristics of Internet Use and Search Experience plus the Channel Characteristics of Beliefs About Internet Vendors and Beliefs About Internet Security. The effect of these factors is modified by Time. Additional research questions ask about the different types of REC buyers, the differences between these groups, and how these groups evolved over time. To answer these research questions, I analyzed publicly available data collected over the Internet during a three-year period by the Georgia Institute of Technology Graphics and Visualization Unit. Findings indicate the model best predicts Number of Purchases in a future period, and that Buyer Characteristics are most important to this determination. Further, the model is evolving over Time: Buyer Characteristics predict Number of Purchases better in more recent survey administrations. Buyers clustered into five groups based on level of buying; over time, they move through these levels and make an increasing Number of Purchases. This is the first large-scale research project to investigate the evolution of REC. The implications are significant. Practitioners with casual consumer customers need to deploy a finely tuned REC strategy, understand their buyers, capitalize on the company's reputation on the Internet, install an Internet-compatible infrastructure, and web-enable order-entry, inventory, fulfillment, and shipping applications.
Researchers might wish to expand on the Buyer Characteristics of the model and/or explore alternative dependent variables. Further, alternative theories such as Population Ecology or Transaction Cost Economics might further illuminate this new I.S. research domain.
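
The survey data behind the five buyer groups are not reproduced in the abstract, but the grouping step it describes can be sketched with a minimal one-dimensional k-means over hypothetical purchase counts (the data, the distance rule, and the group count of five are all illustrative, not the study's actual procedure):

```python
# Minimal 1-D k-means to group buyers by number of purchases.
# Data are hypothetical; the study derived five groups from GVU
# survey responses, which are not reproduced here.

def kmeans_1d(values, k, iters=50):
    # Initialize centroids spread evenly across the observed range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each buyer to the nearest centroid.
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute centroids (keep the old value for empty clusters).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

purchases = [0, 0, 1, 1, 2, 3, 5, 6, 8, 12, 15, 20]  # hypothetical buyers
centroids, clusters = kmeans_1d(purchases, k=5)
```

Re-running the clustering on later survey waves and watching buyers migrate toward higher-purchase groups mirrors the evolution the study reports.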

Relevance: 100.00%

Abstract:

Liquidity is an important market characteristic for participants in every financial market. One of the three components of liquidity is market depth. Prior literature lacks a comprehensive analysis of depth in U.S. futures markets due to past limitations on the availability of data. However, recent innovations in data collection and dissemination provide new opportunities to investigate the depth dimension of liquidity. In this dissertation, the Chicago Mercantile Exchange (CME) Group proprietary database on depth is employed to study the dynamics of depth in U.S. futures markets. This database allows for the analysis of depth along the entire limit order book rather than just at the first level. The first essay examines the characteristics of depth within the context of the five-deep limit order book. Results show that a large amount of depth is present in the book beyond the best level. Furthermore, the findings show that the characteristics of five-deep depth vary between day and night trading and that depth is unequal across levels within the limit order book. The second essay examines the link between five-deep market depth and the bid-ask spread. The results suggest an inverse relation between the spread and the depth after adjusting for control factors. The third essay explores transitory volatility in relation to depth in the limit order book. Evidence supports the relation between an increase in volatility and a subsequent decrease in market depth. Overall, the results of this dissertation are consistent with limit order traders actively managing depth along the limit order book in electronic U.S. futures markets.
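
The "depth beyond the best level" idea can be made concrete with a toy five-deep book snapshot (all prices and quantities below are hypothetical, not CME data): total resting volume at levels two through five is compared with volume at the inside quote.

```python
# Illustrative five-deep limit order book snapshot (hypothetical
# quantities). "Depth beyond the best level" is the share of resting
# volume sitting at levels 2-5 rather than at the inside quote.

bids = [(99.9, 120), (99.8, 300), (99.7, 450), (99.6, 500), (99.5, 610)]
asks = [(100.1, 100), (100.2, 280), (100.3, 400), (100.4, 520), (100.5, 600)]

def depth_profile(levels):
    # levels: list of (price, quantity), best level first.
    total = sum(qty for _, qty in levels)
    best = levels[0][1]
    return total, best, (total - best) / total  # share beyond best level

bid_total, bid_best, bid_beyond = depth_profile(bids)
ask_total, ask_best, ask_beyond = depth_profile(asks)
```

In this toy snapshot most volume sits behind the best quote, which is the qualitative pattern the first essay documents.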

Relevance: 100.00%

Abstract:

This dissertation analyzes how marketers define markets in technology-based industries. One of the most important strategic decisions marketers face is determining the optimal market for their products. Market definition is critical in dynamic high technology markets characterized by high levels of market and technological uncertainty. Building on literature from marketing and related disciplines, this research is the first in-depth study of market definition in industrial markets. Using a national, probability sample stratified by firm size, 1,000 marketing executives in nine industries (automation, biotechnology, computers, medical equipment and instrumentation, pharmaceuticals, photonics, software, subassemblies and components, and telecommunications) were surveyed via a mail questionnaire. A 20.8% net response rate yielding 203 surveys was achieved. The market structure-conduct-performance (SCP) paradigm from industrial organization provided a conceptual basis for testing a causal market definition model via LISREL. A latent exogenous variable (competitive intensity) and four latent endogenous variables (marketing orientation, technological orientation, market definition criteria, and market definition success) were used to develop and test hypothesized relationships among constructs. Research questions relating to market redefinition, market definition characteristics, and internal (within the firm) and external (competitive) market definition were also investigated. Market definition success was found to be positively associated with a marketing orientation and the use of market definition criteria. Technological orientation was not significantly related to market definition success. Customer needs were the key market definition characteristic to high-tech firms (technology, competition, customer groups, and products were also important). Market redefinition based on changing customer needs was the most effective of seven strategies tested. 
A majority of firms regularly defined their market at the corporate and product-line level within the firm. From a competitive perspective, industry, industry sector, and product-market definitions were used most frequently.

Relevance: 100.00%

Abstract:

Prior finance literature lacks a comprehensive analysis of the microstructure characteristics of U.S. futures markets due to the lack of data availability. Utilizing a unique data set for five different futures contracts, this dissertation fills this gap in the finance literature. In three essays, price discovery, resiliency, and the components of bid-ask spreads in electronic futures markets are examined. In order to provide comprehensive and robust analysis, both the moderately volatile pre-crisis period and the volatile crisis period are included in the analysis. The first essay, entitled “Price Discovery and Liquidity Characteristics for U.S. Electronic Futures and ETF Markets,” explores the price discovery process in U.S. futures and ETF markets. Hasbrouck's information share method is applied to futures and ETF instruments. The information share results show that futures markets dominate the price discovery process. The results on the factors that affect the price discovery process show that when volatility increases, the price leadership of futures markets declines. Furthermore, when the relative size of the bid-ask spread in one market increases, its information share decreases. The second essay, entitled “The Resiliency of Large Trades for U.S. Electronic Futures Markets,” examines the effects of large trades in futures markets. How quickly prices and liquidity recover after large trades is an important characteristic of financial markets. The price effects of large trades are greater during the crisis period compared to the pre-crisis period. Furthermore, relative to the pre-crisis period, during the crisis period it takes more trades until liquidity returns to pre-block-trade levels. The third essay, entitled “Components of Quoted Bid-Ask Spreads in U.S. Electronic Futures Markets,” investigates the bid-ask spread components in futures markets. The components of bid-ask spreads are one of the most important subjects of microstructure studies.
Utilizing Huang and Stoll's (1997) method, the third essay of this dissertation provides the first analysis of the components of quoted bid-ask spreads in U.S. electronic futures markets. The results show that order processing cost is the largest component of bid-ask spreads, followed by inventory holding costs. During the crisis period, market makers increase bid-ask spreads due to increasing inventory holding and adverse selection risks.
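
A stylized illustration of the trade-indicator regression idea behind Huang and Stoll's (1997) basic decomposition: simulate trade directions and price changes from the model, then recover the half-spread and the combined non-order-processing share by two-regressor OLS. The data are simulated and the setup is deliberately simplified (no midquote construction, no three-way split), so this is a sketch of the estimation idea, not the dissertation's actual procedure.

```python
import random

random.seed(7)
S, lam = 0.10, 0.4  # assumed quoted spread and combined adverse-
                    # selection + inventory share (hypothetical values)

# Simulate trade directions (+1 buy, -1 sell) and price changes from
# the basic trade-indicator model:
#   dP_t = (S/2) * dQ_t + lam * (S/2) * Q_{t-1} + noise
Q = [random.choice((1, -1)) for _ in range(5000)]
dP, x1, x2 = [], [], []
for t in range(1, len(Q)):
    dq, qlag = Q[t] - Q[t - 1], Q[t - 1]
    dP.append((S / 2) * dq + lam * (S / 2) * qlag + random.gauss(0, 0.005))
    x1.append(dq)
    x2.append(qlag)

# Two-regressor OLS via the 2x2 normal equations.
a = sum(v * v for v in x1)
b = sum(u * v for u, v in zip(x1, x2))
c = sum(v * v for v in x2)
d1 = sum(u * y for u, y in zip(x1, dP))
d2 = sum(u * y for u, y in zip(x2, dP))
det = a * c - b * b
beta1 = (c * d1 - b * d2) / det   # estimate of S/2
beta2 = (a * d2 - b * d1) / det   # estimate of lam * S/2
half_spread, lam_hat = beta1, beta2 / beta1
```

The estimated half-spread and traded share recover the simulated parameters, which is the logic the decomposition applies to real quote and trade data.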

Relevance: 100.00%

Abstract:

This dissertation is a collection of three economics essays on different aspects of carbon emission trading markets. The first essay analyzes the dynamic optimal emission control strategies of two nations. With the potential to become the largest buyer under the Kyoto Protocol, the US is assumed to be a monopsony, whereas with a large number of tradable permits on hand, Russia is assumed to be a monopoly. Optimal costs of emission control programs are estimated for both countries under four different market scenarios: non-cooperative no trade, US monopsony, Russia monopoly, and cooperative trading. The US monopsony scenario is found to be the most Pareto cost efficient. The Pareto efficient outcome, however, would require the US to make side payments to Russia, which would even out the differences in the cost savings from cooperative behavior. The second essay analyzes the price dynamics of the Chicago Climate Exchange (CCX), a voluntary emissions trading market. By examining the volatility in market returns using AR-GARCH and Markov switching models, the study associates the market price fluctuations with two different political regimes of the US government. Further, the study also identifies high volatility in the returns a few months before the market collapse. Three possible regulatory and market-based forces are identified as probable causes of market volatility and its ultimate collapse. Organizers of other voluntary markets in the US and worldwide may closely watch for these regime-switching forces in order to overcome emission market crashes. The third essay compares excess skewness and kurtosis in carbon prices between the CCX and EU ETS (European Union Emission Trading Scheme) Phase I and II markets by examining the tail behavior when market expectations exceed the threshold level. Dynamic extreme value theory is used to find the mean price exceedance of the threshold levels and estimate the risk loss.
The calculated risk measures suggest that CCX and EU ETS Phase I are extremely immature markets for a risk investor, whereas EU ETS Phase II is a more stable market that could develop into a mature carbon market in future years.
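
The threshold-exceedance step at the heart of the extreme value analysis can be sketched in a few lines: choose a loss threshold, collect the exceedances, and compute the mean excess. The returns and threshold below are hypothetical, not CCX or EU ETS data, and the sketch stops short of fitting a generalized Pareto tail.

```python
# Toy peaks-over-threshold illustration: collect losses exceeding a
# threshold and compute the mean excess, the basic ingredient of the
# tail-risk analysis described above. All numbers are hypothetical.

returns = [-0.021, 0.004, 0.013, -0.048, 0.007, -0.035, 0.019,
           -0.062, 0.002, -0.041, 0.011, -0.055, -0.008, 0.016]

threshold = 0.03                          # assumed loss threshold
losses = [-r for r in returns if r < 0]   # work with positive losses
excesses = [x - threshold for x in losses if x > threshold]

mean_excess = sum(excesses) / len(excesses)
exceed_prob = len(excesses) / len(returns)
```

A heavier tail shows up as a larger mean excess at a given threshold, which is how the essay compares the maturity of the CCX and EU ETS markets.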

Relevance: 100.00%

Abstract:

Recent advances in electronic and computer technologies have led to the widespread deployment of wireless sensor networks (WSNs). WSNs have a wide range of applications, including military sensing and tracking, environment monitoring, and smart environments. Many WSNs have mission-critical tasks, such as military applications; thus, security issues in WSNs remain at the forefront of research. Compared with other wireless networks, such as ad hoc and cellular networks, security in WSNs is more complicated due to the constrained capabilities of sensor nodes and the properties of the deployment, such as large scale and hostile environments. Security issues mainly come from attacks. In general, attacks in WSNs can be classified as external attacks and internal attacks. In an external attack, the attacking node is not an authorized participant of the sensor network. Cryptography and other security methods can prevent some external attacks. However, node compromise, the major and unique problem that leads to internal attacks, can undermine all of these prevention efforts. Knowing the probability of node compromise helps systems detect and defend against it. Although some approaches can be used to detect and defend against node compromise, few of them can estimate its probability. Hence, using probability theory, we develop basic uniform, basic gradient, intelligent uniform, and intelligent gradient models of node compromise distribution in order to adapt to different application environments. These models allow systems to estimate the probability of node compromise. Applying these models in system security designs can improve system security and decrease overheads in nearly every security area.
Moreover, based on these models, we design a novel secure routing algorithm to defend against the routing security issue posed by nodes that have already been compromised but have not yet been detected by the node compromise detection mechanism. The routing paths in our algorithm detour around nodes that have already been detected as compromised or that have larger probabilities of being compromised. Simulation results show that our algorithm is effective in protecting routing paths from node compromise, whether detected or not.
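
The detour idea can be sketched as a shortest-path search over a sensor graph where each node carries an estimated compromise probability: known-compromised nodes are excluded outright, and risky nodes are penalized so paths route around them. The topology, probabilities, and penalty weight below are all hypothetical, and this is a sketch of the general technique rather than the dissertation's actual algorithm.

```python
import heapq

# Hypothetical sensor graph: node -> list of neighbors.
graph = {
    'S': ['A', 'B'], 'A': ['C'], 'B': ['C', 'D'],
    'C': ['T'], 'D': ['T'], 'T': [],
}
p_comp = {'S': 0.0, 'A': 0.7, 'B': 0.1, 'C': 0.2, 'D': 0.1, 'T': 0.0}
detected = {'A'}  # nodes already flagged by the detection mechanism

def safest_path(src, dst):
    # Dijkstra where entering a node costs one hop plus a penalty
    # proportional to its estimated compromise probability.
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v in graph[u]:
            if v in detected:                 # hard detour around known bad nodes
                continue
            nd = d + 1.0 + 10.0 * p_comp[v]   # hop cost + risk penalty
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]
```

Here the path from S to T avoids the detected node A and prefers the lower-risk D over C, illustrating how probability estimates steer routing away from likely-compromised nodes.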

Relevance: 100.00%

Abstract:

With the growing commercial importance of the Internet and the development of new real-time, connection-oriented services like IP telephony and electronic commerce, resilience is becoming a key issue in the design of IP-based networks. Two emerging technologies that can accomplish the task of efficient information transfer are Multiprotocol Label Switching (MPLS) and Differentiated Services. A main benefit of MPLS is the ability to introduce traffic-engineering concepts due to its connection-oriented characteristic. With MPLS it is possible to assign different paths to packets through the network. Differentiated Services divides traffic into different classes and treats them differently, especially when there is a shortage of network resources. In this thesis, a framework was proposed to integrate the above two technologies, and its performance in providing load balancing and improving QoS was evaluated. Simulation and analysis of this framework demonstrated that the combination of MPLS and Differentiated Services is a powerful tool for QoS provisioning in IP networks.
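
The integration described above can be sketched as two steps: classify each packet into a Differentiated Services class, then map that class to a pre-computed MPLS label-switched path so different classes take different routes. The class names follow standard DiffServ per-hop behaviors (EF, AF, best effort), but the router names, paths, and class-to-path mapping are illustrative assumptions, not the thesis's actual framework.

```python
# Sketch: DiffServ classification feeding MPLS path selection, so
# premium traffic and best-effort traffic follow different LSPs.
# Router names, LSPs, and the mapping are hypothetical.

LSPS = {
    'LSP-1': ['R1', 'R2', 'R5'],        # short, low-latency path
    'LSP-2': ['R1', 'R3', 'R4', 'R5'],  # longer alternative path
}
CLASS_TO_LSP = {'EF': 'LSP-1', 'AF': 'LSP-1', 'BE': 'LSP-2'}

def classify(packet):
    # Toy classifier: map application type to a DiffServ class.
    if packet['app'] == 'voip':
        return 'EF'   # expedited forwarding
    if packet['app'] == 'ecommerce':
        return 'AF'   # assured forwarding
    return 'BE'       # best effort

def route(packet):
    cls = classify(packet)
    return cls, LSPS[CLASS_TO_LSP[cls]]

cls, path = route({'app': 'voip'})
```

Steering best-effort traffic onto the longer path while reserving the short path for delay-sensitive classes is the load-balancing effect the framework evaluates.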