7 results for Large-scale experiments

in DRUM (Digital Repository at the University of Maryland)


Relevance:

100.00%

Abstract:

Peer-to-peer information sharing has fundamentally changed the customer decision-making process. Recent developments in information technologies have enabled digital sharing platforms to influence granular aspects of the information sharing process. Despite the growing importance of digital information sharing, little research has examined the optimal design choices for a platform seeking to maximize returns from information sharing. My dissertation seeks to fill this gap. Specifically, I study novel interventions that a platform can implement at different stages of the information sharing process. In collaboration with a leading for-profit platform and a non-profit platform, I conduct three large-scale field experiments to causally identify the impact of these interventions on customers' sharing behaviors as well as sharing outcomes. The first essay examines whether and how a firm can enhance social contagion by simply varying the message that customers share with their friends. Using a large randomized field experiment, I find that (i) adding only information about the sender's purchase status increases the likelihood of the recipient's purchase; (ii) adding only information about the referral reward increases recipients' follow-up referrals; and (iii) adding information about both the sender's purchase and the referral reward increases neither the likelihood of purchase nor follow-up referrals. I then discuss the underlying mechanisms. The second essay studies whether and how a firm can design unconditional incentives to engage customers who have already revealed a willingness to share. I conduct a field experiment to examine the impact of incentive design on senders' purchases as well as their further referral behavior. I find evidence that the incentive structure has significant, but opposing, effects on the two outcomes. The results also provide insights into senders' motives for sharing. The third essay examines whether and how a non-profit platform can use mobile messaging to leverage recipients' social ties to encourage blood donation. I design a large field experiment to causally identify the impact of different types of information and incentives on donors' self-donation and group-donation behavior. My results show that non-profits can stimulate a group effect and increase blood donation, but only with a group reward; such a group reward works by motivating a different donor population. In summary, the findings from the three studies offer valuable insights for platforms and social enterprises on how to engineer digital platforms to create social contagion. The rich data from the randomized experiments and complementary sources (archival and survey) also allow me to test the underlying mechanisms at work. In this way, my dissertation provides both managerial implications and theoretical contributions to the phenomenon of peer-to-peer information sharing.
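As a concrete illustration of how a randomized messaging experiment of this kind is typically analyzed, the following is a minimal Python sketch that estimates the treatment effect on recipients' purchase rate as a difference in proportions with a normal-approximation confidence interval. The column names and the simulated numbers are hypothetical and are not taken from the dissertation's data.

import numpy as np
import pandas as pd

def purchase_lift(df, treat_col="treated", outcome_col="purchased"):
    """Difference in purchase rates between the treatment and control arms."""
    treated = df.loc[df[treat_col] == 1, outcome_col]
    control = df.loc[df[treat_col] == 0, outcome_col]
    p1, p0 = treated.mean(), control.mean()
    # Standard error of a difference between two independent proportions
    se = np.sqrt(p1 * (1 - p1) / len(treated) + p0 * (1 - p0) / len(control))
    effect = p1 - p0
    return effect, (effect - 1.96 * se, effect + 1.96 * se)

# Purely illustrative data: recipients randomized to a message variant
rng = np.random.default_rng(0)
n = 10_000
frame = pd.DataFrame({"treated": rng.integers(0, 2, n)})
frame["purchased"] = rng.binomial(1, 0.05 + 0.01 * frame["treated"])
print(purchase_lift(frame))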

Relevance:

100.00%

Abstract:

Compaction control using lightweight deflectometers (LWD) is currently being evaluated in several states and countries and has been fully implemented for pavement construction quality assurance (QA) by a few. Broader implementation has been hampered by the lack of a widely recognized standard for interpreting the load and deflection data obtained during construction QA testing. More specifically, reliable and practical procedures are required for relating these measurements to modulus, the fundamental material property used in pavement design. This study presents a unique set of data and analyses for three different LWDs in a large-scale, controlled-condition experiment. Three 4.5 m x 4.5 m test pits were designed and constructed at target moisture and density conditions simulating acceptable and unacceptable construction quality. LWD testing was performed on the constructed layers along with static plate load testing, conventional nuclear gauge moisture-density testing, and non-nuclear gravimetric and volumetric water content measurements. Additional material was collected for routine and exploratory laboratory tests, including grain size distributions, soil classification, moisture-density relations, resilient modulus testing at optimum and field conditions, and an advanced experiment of LWD testing on top of the Proctor compaction mold. This unique large-scale controlled-condition experiment provides a high-quality data resource that future researchers can use to develop a rigorous, theoretically sound, and straightforward technique for standardizing LWD determination of modulus and construction QA for unbound pavement materials.
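For context on how LWD load and deflection measurements are related to modulus, devices of this kind typically back-calculate a surface modulus from elastic half-space (Boussinesq) theory. A commonly used form of the relation, stated here as general background rather than as a result of this study, is

\[ E = \frac{f\,(1 - \nu^{2})\,\sigma_{0}\,a}{d_{0}} \]

where \( \sigma_{0} \) is the peak contact stress under the plate, \( a \) is the plate radius, \( d_{0} \) is the peak measured deflection, \( \nu \) is Poisson's ratio, and \( f \) is a stress-distribution factor that depends on plate rigidity (commonly taken as 2 for a flexible plate with uniform contact stress and \( \pi/2 \) for a rigid plate).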

Relevance:

100.00%

Abstract:

Lesson plan published in Critical Pedagogy Handbook, vol. 2

Relevance:

90.00%

Abstract:

In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, and videos; phone call networks where nodes send text messages or place phone calls; road traffic networks where the traffic behavior of road segments changes constantly; and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to use eager or lazy replication, in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries and allows partial pre-computation of the aggregates to minimize query latency and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs can be specified using both graph structure and activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
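As a minimal sketch of the continuous neighborhood-driven aggregation pattern described above (not the dissertation's actual system, which adds overlay sharing, replication policies, and distributed execution), the following Python class maintains a running 1-hop aggregate per node as structural updates and node events stream in.

from collections import defaultdict

class StreamingNeighborhoodAggregator:
    """Maintains, for every node, a running aggregate over events emitted by its neighbors."""

    def __init__(self):
        self.neighbors = defaultdict(set)  # adjacency lists, updated as edges arrive
        self.agg = defaultdict(int)        # per-node aggregate (here: a simple event count)

    def add_edge(self, u, v):
        # Structural update: add an undirected edge to the streaming graph
        self.neighbors[u].add(v)
        self.neighbors[v].add(u)

    def on_event(self, node, value=1):
        # Content update: node emits an event; push its value to all 1-hop neighbors
        for nbr in self.neighbors[node]:
            self.agg[nbr] += value

    def query(self, node):
        # Continuous-query result: current aggregate over the node's neighborhood
        return self.agg[node]

g = StreamingNeighborhoodAggregator()
g.add_edge("alice", "bob")
g.add_edge("alice", "carol")
g.on_event("bob")        # bob posts an update
g.on_event("carol")      # carol posts an update
print(g.query("alice"))  # -> 2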

Relevance:

90.00%

Abstract:

The size of online image datasets is constantly increasing. For an image dataset with millions of images, retrieval becomes a seemingly intractable problem for exhaustive similarity search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their search efficiency and compact storage. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning underlying, semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries, and an extension model for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification. We present a scalable inference algorithm with the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases to which new images are added frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs. The proposed method also enforces balance in the binary codes through an imbalance penalty, yielding higher-quality codes. We learn hash functions with an efficient algorithm in which the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and the SVMs are trained in a parallelized, incremental manner. For modifications such as adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate that the incremental strategy is capable of efficiently updating hash functions to achieve the same retrieval performance as hashing from scratch.
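As background for the hashing-based retrieval described above, here is a minimal Python sketch of the generic pipeline: descriptors are encoded into compact binary codes (here with random hyperplanes rather than the learned, supervised hash functions proposed in the dissertation) and the database is ranked by Hamming distance to the query code. All sizes and arrays are illustrative placeholders.

import numpy as np

rng = np.random.default_rng(0)
n_db, dim, n_bits = 10_000, 512, 64

descriptors = rng.standard_normal((n_db, dim)).astype(np.float32)  # stand-in image features
hyperplanes = rng.standard_normal((dim, n_bits)).astype(np.float32)

def encode(x):
    # Project onto random hyperplanes and threshold at zero -> compact binary codes
    return x @ hyperplanes > 0

db_codes = encode(descriptors)  # (n_db, n_bits) boolean array

def search(query, k=10):
    # Rank database items by Hamming distance between binary codes
    q_code = encode(query.reshape(1, -1))
    dists = np.count_nonzero(db_codes != q_code, axis=1)
    return np.argsort(dists)[:k]

print(search(rng.standard_normal(dim).astype(np.float32)))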

Relevance:

90.00%

Abstract:

The purpose of this dissertation is to evaluate the potential downstream influence of the Indian Ocean (IO) on El Niño/Southern Oscillation (ENSO) forecasts through the oceanic pathway of the Indonesian Throughflow (ITF), atmospheric teleconnections between the IO and the Pacific, and the assimilation of IO observations. The impact of sea surface salinity (SSS) in the Indo-Pacific region is also assessed to address known problems with precipitation forecasts from operational coupled models. The ITF normally drains warm, fresh water from the Pacific, reducing the mixed layer depth (MLD). A shallower MLD amplifies large-scale oceanic Kelvin/Rossby waves, giving an approximately 10% larger response and more realistic ENSO sea surface temperature (SST) variability relative to observations when the ITF is open. To isolate the impact of IO-sector atmospheric teleconnections on ENSO, experiments are contrasted that selectively couple or decouple the interannual forcing in the IO. The interannual variability of IO SST forcing is responsible for widespread downwelling in the Pacific at a 3-month lag, assisted by off-equatorial curl, leading to a warmer NINO3 SST anomaly and improved ENSO validation (significant from 3 to 9 months). Isolating the impact of observations in the IO sector using regional assimilation identifies large-scale warming in the IO that acts to intensify the easterlies of the Walker circulation and increase pervasive upwelling across the Pacific, cooling the eastern Pacific and improving ENSO validation (r ~ 0.05, RMS ~ 0.08 C). Lastly, the positive impact of more accurate freshwater forcing is demonstrated to address inadequate precipitation forecasts in operational coupled models. Aquarius SSS assimilation improves the mixed layer density and enhances mixing, setting off upwelling that eventually cools the eastern Pacific after 6 months, counteracting the pervasive warming of most coupled models and significantly improving ENSO validation from 5 to 11 months. In summary, the ITF oceanic pathway, the atmospheric teleconnection, the impact of observations in the IO, and improved Indo-Pacific SSS are all responsible for ENSO forecast improvements, and each aspect of this study contributes to a better overall understanding of ENSO. Therefore, the upstream influence of the IO should be thought of as integral to the functioning of the ENSO phenomenon.
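For reference, the validation statistics quoted above (anomaly correlation r and RMS error) are computed along the following lines; this is a generic, illustrative Python sketch with placeholder arrays, not the study's data or code.

import numpy as np

def enso_validation(forecast_anom, observed_anom):
    # Anomaly correlation and RMS error between forecast and observed NINO3 SST anomalies
    forecast_anom = np.asarray(forecast_anom, dtype=float)
    observed_anom = np.asarray(observed_anom, dtype=float)
    r = np.corrcoef(forecast_anom, observed_anom)[0, 1]
    rmse = np.sqrt(np.mean((forecast_anom - observed_anom) ** 2))
    return r, rmse

rng = np.random.default_rng(0)
obs = rng.standard_normal(120)               # e.g., 10 years of monthly NINO3 anomalies
fcst = obs + 0.3 * rng.standard_normal(120)  # a correlated but imperfect forecast
print(enso_validation(fcst, obs))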

Relevance:

90.00%

Abstract:

Racism continues to thrive on the Internet. Yet little is known about racism in online settings and its potential consequences. The purpose of this study was to develop the Perceived Online Racism Scale (PORS), the first measure to assess people's perceived experiences of online racism as they interact with others and consume information on the Internet. Items were developed through a multi-stage process based on a literature review, focus groups, and qualitative data collection. Based on a racially diverse, large-scale sample (N = 1,023), exploratory and confirmatory factor analyses provided support for a 30-item bifactor model with the following three factors: (a) the 14-item PORS-IP (personal experiences of racism in online interactions), (b) the 5-item PORS-V (observations of other racial/ethnic minorities being offended), and (c) the 11-item PORS-I (consumption of online content and information denigrating racial/ethnic minorities and highlighting racial injustice in society). Initial construct validity examinations suggest that PORS is significantly linked to psychological distress.