4 results for domain-specific expertise
in DRUM (Digital Repository at the University of Maryland)
Abstract:
Critical thinking in learners is a goal of educators and professional organizations in nursing as well as in other professions. However, few studies in nursing have examined the role of the individual difference factors of topic knowledge, individual interest, and general relational reasoning strategies in predicting critical thinking. In addition, most previous studies have used domain-general, standardized measures, with inconsistent results, and few have investigated critical thinking across multiple levels of experience. The major purpose of this study was to examine the degree to which topic knowledge, individual interest, and relational reasoning predict critical thinking in maternity nurses. For this study, 182 maternity nurses were recruited from national nursing listservs explicitly chosen to capture multiple levels of experience, from prelicensure to very experienced nurses. The three independent measures were a domain-specific Topic Knowledge Assessment (TKA), consisting of 24 short-answer questions; a Professed and Engaged Interest Measure (PEIM), with 20 questions indicating level of interest and engagement in maternity nursing topics and activities; and the Test of Relational Reasoning (TORR), a graphical selected-response measure with 32 items organized into scales corresponding to four forms of relational reasoning: analogy, anomaly, antithesis, and antinomy. The dependent measure was the Critical Thinking Task in Maternity Nursing (CT2MN), composed of a clinical case study providing cues, with follow-up questions relating to nursing care. These questions align with the cognitive processes identified in a commonly used definition of critical thinking in nursing. Reliable coding schemes for the measures were developed for this study. Key findings included a significant correlation between topic knowledge and individual interest. Further, the three individual difference factors together explained a significant proportion of the variance in critical thinking, with a large effect size. While topic knowledge was the strongest predictor of critical thinking performance, individual interest had a moderate significant effect, and relational reasoning had a small but significant effect. The findings suggest that these individual difference factors should be included in future studies of critical thinking in nursing. Implications for nursing education, research, and practice are discussed.
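To make the analytic approach concrete, the following is a minimal sketch (not the study's actual analysis or data) of a multiple regression predicting a critical thinking score from the three individual difference factors; the column names (tka, peim, torr, ct2mn) and the synthetic data are hypothetical placeholders.

```python
# Illustrative sketch only: a multiple regression predicting a critical
# thinking score from three individual difference factors, analogous in
# structure (not in data or results) to the analysis described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 182  # sample size reported in the abstract

# Synthetic stand-in data; the real study used scored instrument responses.
df = pd.DataFrame({
    "tka": rng.normal(size=n),    # Topic Knowledge Assessment score
    "peim": rng.normal(size=n),   # Professed and Engaged Interest Measure score
    "torr": rng.normal(size=n),   # Test of Relational Reasoning score
})
df["ct2mn"] = 0.5 * df["tka"] + 0.3 * df["peim"] + 0.1 * df["torr"] + rng.normal(size=n)

X = sm.add_constant(df[["tka", "peim", "torr"]])
model = sm.OLS(df["ct2mn"], X).fit()

print(model.params)    # estimated regression coefficient for each predictor
print(model.rsquared)  # proportion of variance explained
```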
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in such a way that the participants do not see each other's data; they see only the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker: the game would be divided into rounds of local decision-making (e.g., bidding) and joint interaction (e.g., dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run MPC programs, leaving open the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC domain-specific language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card-dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured, verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs (as far as we know, Wys* is the first language to provide verification capabilities for MPC programs); (b) it provides a partially verified toolchain to run MPC programs; and (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making the system more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs while providing privacy guarantees similar to those of the monolithic versions.
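As a concrete illustration of the kind of monolithic MPC computation mentioned above (a statistical computation over joint private data), the following is a minimal sketch of additive secret sharing over a prime field, simulated within a single process; it is a generic textbook construction, not the Wysteria or Wys* API, and the field modulus and party inputs are assumptions made for the example.

```python
# Illustrative sketch of additive secret sharing over a prime field,
# simulating in one process how several parties could compute the sum of
# their private inputs without revealing them. This is a generic MPC
# example, not the Wysteria or Wys* toolchain.
import secrets

P = 2**61 - 1  # a large prime modulus (assumption for this sketch)

def share(value, n_parties):
    """Split `value` into n additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % P
    return shares + [last]

def secure_sum(private_inputs):
    """Each party shares its input; each party locally adds the shares it
    holds, and only the per-party subtotals are combined to reveal the sum."""
    n = len(private_inputs)
    # all_shares[i][j] = share of party i's input held by party j
    all_shares = [share(x, n) for x in private_inputs]
    subtotals = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
    return sum(subtotals) % P

# Example: three parties jointly compute the sum of their private values.
print(secure_sum([12, 30, 7]))  # -> 49; no individual input is revealed
```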
Abstract:
Audit firms are organized along industry lines, and industry specialization is a prominent feature of the audit market. Yet we know little about how audit firms make their industry portfolio decisions, i.e., how audit firms decide which set of industries to specialize in. In this study, I examine how the linkages between industries in the product space affect audit firms' industry portfolio choices. Using text-based product space measures to capture these industry linkages, I find that both Big 4 and small audit firms tend to specialize in industry pairs that 1) are close to each other in the product space (i.e., have more similar product language) and 2) have a greater number of "between-industries" in the product space (i.e., have a greater number of industries with product language that is similar to both industries in the pair). Consistent with the basic tradeoff between specialization and coordination, these results suggest that specializing in industries that have more similar product language and more linkages to other industries in the product space allows audit firms greater flexibility to transfer industry-specific expertise across industries as well as greater mobility in the product space, thereby enhancing their competitive advantage. Additional analysis using the collapse of Arthur Andersen as an exogenous supply shock in the audit market finds consistent results. Taken together, the findings suggest that industry linkages in the product space play an important role in shaping the audit market structure.
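To illustrate what a text-based measure of product-language similarity between industries might look like, the following is a minimal sketch using TF-IDF vectors and cosine similarity; the industry labels and descriptions are hypothetical, and the study's actual product space measures may be constructed differently.

```python
# Illustrative sketch: one common way to measure how close two industries'
# product language is, using TF-IDF vectors and pairwise cosine similarity.
# The labels and descriptions below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

industry_descriptions = {
    "pharma": "drug development clinical trials therapeutics biotechnology",
    "medical_devices": "implants diagnostics surgical instruments therapeutics",
    "software": "cloud platform analytics enterprise applications",
}

labels = list(industry_descriptions)
tfidf = TfidfVectorizer().fit_transform(industry_descriptions.values())
similarity = cosine_similarity(tfidf)  # pairwise similarity matrix

for i, a in enumerate(labels):
    for j, b in enumerate(labels):
        if i < j:
            print(f"{a} vs {b}: {similarity[i, j]:.2f}")
```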
Abstract:
We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer serving as a subroutine that executes a global trust update scheme. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers to questions. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with the issue of noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer the multi-dimensional trust of workers, the multi-domain properties of questions, and the true labels of questions. Our model is very flexible and can be extended to incorporate metadata associated with questions. To show this, we further propose two extended models, one of which handles input tasks with real-valued features while the other handles tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be very useful for future task assignment that adapts to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of these. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, the workers who provide answers are not independent of each other: answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, allowing users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers.
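As a simplified illustration of trust-based aggregation of crowd input, the following sketch alternates between estimating labels by trust-weighted voting and re-estimating each worker's scalar trust from agreement with those labels; the dissertation's models are richer (multi-dimensional trust, question domains, logical constraints, ADMM-based inference), and the worker IDs and answers here are hypothetical toy data.

```python
# Simplified illustration of trust-weighted label aggregation: alternate
# between (1) estimating each question's answer by trust-weighted voting and
# (2) re-estimating each worker's scalar trust from agreement with those
# estimates. This is only a toy sketch, not the dissertation's model.
from collections import defaultdict

# answers[question_id] = {worker_id: binary label}; hypothetical toy data
answers = {
    "q1": {"w1": 1, "w2": 1, "w3": 0},
    "q2": {"w1": 0, "w2": 0, "w3": 1},
    "q3": {"w1": 1, "w2": 0, "w3": 0},
}

workers = {w for votes in answers.values() for w in votes}
trust = {w: 0.5 for w in workers}  # start with uniform trust

for _ in range(10):  # a few alternating updates suffice for toy data
    # Step 1: trust-weighted vote for each question's label.
    labels = {}
    for q, votes in answers.items():
        score = sum(trust[w] * (1 if a == 1 else -1) for w, a in votes.items())
        labels[q] = 1 if score >= 0 else 0
    # Step 2: re-estimate trust as each worker's (smoothed) agreement rate.
    agree, total = defaultdict(int), defaultdict(int)
    for q, votes in answers.items():
        for w, a in votes.items():
            agree[w] += int(a == labels[q])
            total[w] += 1
    trust = {w: (agree[w] + 1) / (total[w] + 2) for w in workers}

print(labels, trust)
```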
The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost. The cost is associated with both the level of trustworthiness of the workers and the difficulty of the tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than access to the average crowd, and completing a challenging task is more costly than answering a click-away question. We therefore address the optimal assignment of heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as inputs the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that naturally relates the budget, the trustworthiness of the crowd, and the costs of obtaining labels from the crowd. A higher budget, more trustworthy crowds, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component; therefore, it can be combined with generic trust evaluation algorithms.
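As a toy illustration of the allocation setting described above, the following sketch greedily assigns each task to the affordable worker with the highest estimated trust per unit cost until the budget is exhausted; this greedy rule, the worker pool, and the costs are assumptions made for the example, not the dissertation's actual algorithm or its error-probability bound.

```python
# Toy sketch of trust-aware task allocation under a budget: greedily assign
# each task to the affordable worker with the highest estimated trust per
# unit cost. This illustrates the inputs and outputs described above only.

# Hypothetical worker pool: (estimated trust, per-label cost).
workers = {"expert": (0.95, 5.0), "crowd_a": (0.75, 1.0), "crowd_b": (0.65, 0.5)}
tasks = ["t1", "t2", "t3", "t4"]  # hypothetical tasks
budget = 8.0

def allocate(tasks, workers, budget):
    """Return {task: worker} chosen greedily by trust-per-cost within budget."""
    assignment = {}
    remaining = budget
    for task in tasks:
        affordable = {w: (t, c) for w, (t, c) in workers.items() if c <= remaining}
        if not affordable:
            break  # budget exhausted; remaining tasks are left unassigned
        # Pick the affordable worker with the best trust per unit cost.
        best = max(affordable, key=lambda w: affordable[w][0] / affordable[w][1])
        assignment[task] = best
        remaining -= affordable[best][1]
    return assignment

print(allocate(tasks, workers, budget))
```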