4 results for knowledge framework
in DRUM (Digital Repository at the University of Maryland)
Abstract:
Symbolic execution is a powerful program analysis technique, but it is very challenging to apply to programs built using event-driven frameworks, such as Android. The main reason is that the framework code itself is too complex to symbolically execute. The standard solution is to manually create a framework model that is simpler and more amenable to symbolic execution. However, developing and maintaining such a model by hand is difficult and error-prone. We claim that we can leverage program synthesis to introduce a high degree of automation to the process of framework modeling. To support this thesis, we present three pieces of work. First, we introduced SymDroid, a symbolic executor for Android. While Android apps are written in Java, they are compiled to the Dalvik bytecode format. Instead of analyzing an app's Java source, which may not be available, or decompiling from Dalvik back to Java, which requires significant engineering effort and introduces yet another source of potential bugs in an analysis, SymDroid works directly on Dalvik bytecode. Second, we introduced Pasket, a new system that takes a first step toward automatically generating Java framework models to support symbolic execution. Pasket takes as input the framework API and tutorial programs that exercise the framework. From these artifacts and Pasket's internal knowledge of design patterns, Pasket synthesizes an executable framework model by instantiating design patterns, such that the behavior of the synthesized model on the tutorial programs matches that of the original framework. Lastly, in order to scale program synthesis to framework models, we devised adaptive concretization, a novel program synthesis algorithm that combines the best of the two major synthesis strategies: symbolic search, i.e., using SAT or SMT solvers, and explicit search, e.g., stochastic enumeration of possible solutions.
Adaptive concretization parallelizes multiple sub-synthesis problems by partially concretizing highly influential unknowns in the original synthesis problem. Thanks to adaptive concretization, Pasket can generate a large-scale model, e.g., thousands of lines of code. In addition, we have used an Android model synthesized by Pasket and found that the model is sufficient to allow SymDroid to execute a range of apps.
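The abstract describes adaptive concretization only at a high level. The toy sketch below (hypothetical names and problem, not Pasket's actual code) illustrates the core idea under stated assumptions: estimate each unknown's influence by sampling, concretize the most influential unknowns, and solve each resulting sub-problem with a separate search (SAT/SMT-backed in the real system, exhaustive enumeration here):

```python
import itertools
import random

# Toy sketch of adaptive concretization (hypothetical names and problem,
# not Pasket's actual implementation). Unknowns are boolean "holes";
# spec() says whether a full assignment solves the synthesis problem.

UNKNOWNS = ["h0", "h1", "h2", "h3", "h4", "h5"]

def spec(a):
    # Toy specification: h0 set, h1 clear, and even overall parity.
    return a["h0"] == 1 and a["h1"] == 0 and sum(a.values()) % 2 == 0

def influence(u, trials=200, rng=random.Random(0)):
    # Estimate how often flipping `u` changes the outcome on random samples.
    flips = 0
    for _ in range(trials):
        a = {v: rng.randint(0, 1) for v in UNKNOWNS}
        b = dict(a)
        b[u] ^= 1
        flips += spec(a) != spec(b)
    return flips / trials

def sub_solve(fixed):
    # Stand-in for the symbolic (SAT/SMT-backed) search over the
    # remaining unknowns; here a simple exhaustive enumeration.
    free = [u for u in UNKNOWNS if u not in fixed]
    for bits in itertools.product([0, 1], repeat=len(free)):
        a = dict(fixed, **dict(zip(free, bits)))
        if spec(a):
            return a
    return None

def adaptive_concretization(k=2):
    # Concretize the k most influential unknowns; each concrete choice is
    # an independent sub-problem (solved in parallel in the real system).
    ranked = sorted(UNKNOWNS, key=influence, reverse=True)[:k]
    for bits in itertools.product([0, 1], repeat=k):
        solution = sub_solve(dict(zip(ranked, bits)))
        if solution is not None:
            return solution
    return None
```

Concretizing a few high-influence unknowns shrinks each symbolic sub-problem dramatically while keeping the number of sub-problems small, which is what makes the parallel strategy pay off.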
Abstract:
Problem This dissertation presents a literature-based framework for communication in science (with the elements partners, purposes, message, and channel), which it then applies in and amends through an empirical study of how geoscientists use two social computing technologies (SCTs), blogging and Twitter (both general use and tweeting from conferences). How are these technologies used, and what value do scientists derive from them?
Method The empirical part was a two-pronged qualitative study, using (1) purposive samples of ~400 blog posts and ~1000 tweets and (2) a purposive sample of 8 geoscientist interviews. Blog posts, tweets, and interviews were coded using the framework, adding new codes as needed. The results were aggregated into 8 geoscientist case studies, and general patterns were derived through cross-case analysis.
Results A detailed picture of how geoscientists use blogs and Twitter emerged, including a number of new functions not served by traditional channels. Some highlights: geoscientists use SCTs for communication among themselves as well as with the public. Blogs serve persuasion and personal knowledge management; Twitter often amplifies the signal of traditional communications such as journal articles. Blogs include tutorials for peers, reviews of basic science concepts, and book reviews. Twitter includes links to readings, requests for assistance, and discussions of politics and religion. Twitter at conferences provides live coverage of sessions.
Conclusions Both blogs and Twitter are routine parts of scientists' communication toolbox: blogs for in-depth, well-prepared essays, Twitter for faster and broader interactions. Both have important roles in supporting community building, mentoring, and learning and teaching. The Framework of Communication in Science was a useful tool in studying these two SCTs in this domain.
The results should encourage science administrators to facilitate SCT use by scientists in their organizations, and information providers to treat SCT documents as an important source of information.
Abstract:
In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, knowledge graph construction faces two significant obstacles: the unreliability of the extracted information, due to noise and ambiguity in the underlying data or errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows inference over large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts.
I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied on a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
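The hinge-loss MRF inference the abstract refers to can be sketched at toy scale. The example below is hypothetical (the dissertation's system builds on specialized convex solvers, not this code): each candidate fact gets a continuous truth value in [0, 1], extractor confidences and an ontological mutual-exclusion constraint become hinge-loss potentials, and inference minimizes their weighted sum, a convex problem, here by projected subgradient descent:

```python
# Toy sketch of hinge-loss MRF inference in the spirit of KGI
# (hypothetical facts, weights, and confidences for illustration).

facts = ["prof(Alice)", "student(Alice)"]

# (weight, confidence, fact): extractor evidence,
# contributing loss w * max(0, conf - x[fact]).
evidence = [(2.0, 0.9, "prof(Alice)"), (1.0, 0.6, "student(Alice)")]

# (weight, a, b): ontological mutual exclusion,
# contributing loss w * max(0, x[a] + x[b] - 1).
exclusions = [(3.0, "prof(Alice)", "student(Alice)")]

def total_loss(x):
    # Weighted sum of hinge losses: convex and piecewise linear in x.
    loss = sum(w * max(0.0, c - x[f]) for w, c, f in evidence)
    loss += sum(w * max(0.0, x[a] + x[b] - 1.0) for w, a, b in exclusions)
    return loss

def infer(steps=600, lr=0.02, eps=1e-4):
    # Projected subgradient descent on [0, 1]^n, with subgradients
    # approximated by central differences for simplicity.
    x = {f: 0.5 for f in facts}
    for _ in range(steps):
        for f in facts:
            hi, lo = dict(x), dict(x)
            hi[f] = min(1.0, x[f] + eps)
            lo[f] = max(0.0, x[f] - eps)
            g = (total_loss(hi) - total_loss(lo)) / (hi[f] - lo[f])
            x[f] = min(1.0, max(0.0, x[f] - lr * g))  # project to [0, 1]
    return x

x = infer()
# The exclusion keeps prof(Alice) + student(Alice) near or below 1, while
# the stronger evidence wins: x settles near {prof: 0.9, student: 0.1}.
```

Because every potential is a hinge (or would be a squared hinge in some formulations), the objective stays convex, which is what lets this style of inference scale to millions of ground potentials with specialized solvers.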
Abstract:
This study examines the organizational structures and decision-making processes used by school districts to recruit and hire school librarians. For students to acquire the information and technology literacy education they need, school libraries must be staffed with qualified individuals who can fulfill the librarian's role as leader, teacher, instructional partner, information specialist, and program administrator. Principals are typically given decision rights for hiring staff, including school librarians. Research shows that principals have limited knowledge of the skills and abilities of the school librarian or the specific needs and functions of the library program. Research also indicates that those with specific knowledge of school library programs, namely school district library supervisors, are only consulted on recruiting and hiring about half the time. School districts entrust library supervisors with responsibilities such as professional development of school librarians only after they are hired. This study uses a theoretical lens from research on IT governance, which focuses on the use of knowledge-fit in applying decision rights in an organization. This framework is appropriate because it incorporates a specialist with a specific knowledge set in determining the placement of input and decision rights in the decision-making processes. The method used in this research was a multiple-case study design with five school districts as cases, varying by the involvement of the supervisors and other individuals in the hiring process. The data collected from each school district were interviews with principals, an HR representative, library supervisors, and recently hired school librarians about the district's recruiting and hiring practices. Data analysis was conducted through iterative coding from themes in the research questions, with continuous adjustments as new themes developed.
Results from the study indicate that the governance framework is applicable to evaluating the decision-making processes used in recruiting and hiring school librarians. However, districts did not consistently apply knowledge-fit in determining input and decision rights. In the hiring process, governance was more likely to be based on placing decision rights at a certain level of the district hierarchy rather than at the location of specific knowledge, most often resulting in site-based governance with decision rights at the school-building level. The governance of the recruiting process was most affected by the shortage or surplus of candidates available to the district to fill positions. Districts struggling with a shortage of candidates typically placed governance of the recruiting decision-making process at the district level, giving the library supervisor more opportunity for input and collaboration with human resources. In districts that use site-based governance and place all input and decision rights at the building level, some principals use their autonomy to eliminate the school library position in the allotment phase or to hire librarians who, while certified through testing, do not have the same level of expertise as those who achieve certification through LIS programs. Principals in districts that use site-based governance for decision rights but call on the library supervisor for advisement stated how valuable they found the supervisor's expertise in evaluating candidates for hire. In no district was a principal or school required to involve the library supervisor in the hiring of school librarians. With a better understanding of the tasks involved, the effect of district governance on decision-making, and the use of knowledge to assign input and decision rights, it is possible to examine how all of these factors affect the quality of the hire.
A next step is to examine the hiring processes that school librarians went through and connect them with the measurable outcomes of hiring: school librarian success, retention, and attrition; the quality of school library program services, outreach, and involvement in a school; and the perceptions of the success of the school librarian and the library program as seen by students, teachers, administrators, parents, and other community stakeholders.