941 results for Computers -- Computer Science
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
We present an Integrated Environment for learning and teaching computer programming, designed both for students on specialised Computer Science courses and for non-specialist students such as those in the Liberal Arts. The environment is rich enough to allow exploration of concepts from robotics, artificial intelligence, social science, and philosophy, as well as the specialist areas of operating systems and the various computer programming paradigms.
Abstract:
General-purpose parallel processing for solving day-to-day industrial problems has been slow to develop, partly because of the lack of suitable hardware from well-established, mainstream computer manufacturers and of suitably parallelized application software. This work describes the parallelization of a CFD (computational fluid dynamics) flow solution code known as ESAUNA. This code is part of SAUNA, a large CFD suite aimed at computing the flow around very complex aircraft configurations, including complete aircraft. A novel feature of the SAUNA suite is that it is designed to use either block-structured hexahedral grids, unstructured tetrahedral grids, or a hybrid combination of both grid types. ESAUNA is designed to solve the Euler equations or the Navier-Stokes equations, the latter in conjunction with various turbulence models. Two fundamental parallelization concepts are used, namely grid partitioning and encapsulation of communications. Grid partitioning is applied to both block-structured grid modules and unstructured grid modules. ESAUNA can also be coupled with other simulation codes for multidisciplinary computations, such as flow simulations around an aircraft coupled with flutter prediction for transient flight simulations.
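The abstract names the two parallelization concepts but does not spell them out. The sketch below is an illustrative toy, not ESAUNA code: a 1D field stands in for a flow grid, it is partitioned among workers, and each smoothing sweep touches neighbouring partitions only through a single, encapsulated halo-exchange routine, which is the general shape of both concepts.

    import numpy as np

    def partition(grid, n_parts):
        # Split a 1D field into contiguous partitions (toy stand-in for
        # partitioning block-structured or unstructured CFD grids).
        return np.array_split(grid, n_parts)

    def exchange_halos(parts):
        # Encapsulated communication step: each partition receives one
        # ghost value from each neighbour (simple reflective ends).
        halos = []
        for i, p in enumerate(parts):
            left = parts[i - 1][-1] if i > 0 else p[0]
            right = parts[i + 1][0] if i < len(parts) - 1 else p[-1]
            halos.append((left, right))
        return halos

    def smooth_step(parts, halos):
        # One Jacobi-like sweep per partition, using halo values at the
        # partition boundaries instead of global indexing.
        new_parts = []
        for p, (left, right) in zip(parts, halos):
            padded = np.concatenate(([left], p, [right]))
            new_parts.append(0.5 * padded[1:-1] + 0.25 * (padded[:-2] + padded[2:]))
        return new_parts

    field = np.linspace(0.0, 1.0, 16)      # toy "flow" field
    parts = partition(field, 4)
    for _ in range(10):
        parts = smooth_step(parts, exchange_halos(parts))
    print(np.concatenate(parts))

Because all communication is funnelled through exchange_halos, the same solver loop could run over shared memory, MPI, or a hybrid, which is the point of encapsulating communications.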
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files; during this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session, and highly secure authentication methods must therefore be used. We posit that each of us is unique in our use of computer systems, and it is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make each user unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation, and large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users, using two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior at coarse grain (i.e., role) and fine grain (i.e., individual). A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and it is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
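As an illustration of the general technique (a minimal sketch, not the Intruder Detector implementation, whose details are not given here), the snippet below fits a bigram model to a hypothetical stream of web-log actions and flags sessions whose average per-transition log-likelihood falls well below the baseline of normal behavior.

    import math
    from collections import Counter

    def train_bigram(actions):
        # Bigram model of user actions with add-one smoothing.
        pairs = Counter(zip(actions, actions[1:]))
        unigrams = Counter(actions[:-1])
        vocab = set(actions)
        def prob(prev, curr):
            return (pairs[(prev, curr)] + 1) / (unigrams[prev] + len(vocab))
        return prob

    def avg_log_likelihood(actions, prob):
        # Average per-transition log-likelihood of a session.
        lls = [math.log(prob(p, c)) for p, c in zip(actions, actions[1:])]
        return sum(lls) / len(lls)

    # Hypothetical web-log action streams.
    normal = ["login", "view", "edit", "save", "view", "edit", "save", "logout"] * 20
    model = train_bigram(normal)
    baseline = avg_log_likelihood(normal, model)

    session_ok = ["login", "view", "edit", "save", "logout"]
    session_odd = ["login", "export", "export", "export", "logout"]

    for name, s in [("ok", session_ok), ("odd", session_odd)]:
        score = avg_log_likelihood(s, model)
        # Sessions far below the baseline likelihood are flagged.
        print(name, round(score, 2), "ALERT" if score < baseline - 1.0 else "ok")

Higher-order n-grams would capture longer orderings of actions in the same way; the threshold here is arbitrary and would be tuned per user or role in practice.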
Abstract:
A dissertation submitted in fulfillment of the requirements for the degree of Master in Computer Science and Computer Engineering
Abstract:
This work proposes to adjust the Notification Oriented Paradigm (NOP) so that it provides support for fuzzy concepts. NOP is inspired by elements of the imperative and declarative paradigms, seeking to solve some of the drawbacks of both. By decomposing an application into a network of smaller computational entities that are executed only when necessary, NOP eliminates the need to perform unnecessary computations and helps to achieve better logical-causal uncoupling, facilitating code reuse and application distribution over multiple processors or machines. In addition, NOP makes it possible to express logical-causal knowledge at a high level of abstraction, through rules in IF-THEN format. Fuzzy systems, in turn, perform logical inferences on causal knowledge bases (IF-THEN rules) that can deal with problems involving uncertainty. Since NOP uses IF-THEN rules in an alternative way, reducing redundant evaluations and providing better decoupling, this research identifies, proposes, and evaluates the changes that must be made to NOP so that it can be used to develop fuzzy systems. Two fully usable materializations were then created: a C++ framework and a complete programming language (LingPONFuzzy), both providing support for fuzzy inference systems. From these, case studies were created and several test cases were conducted in order to validate the proposed solution. The test results showed a significant reduction in the number of rules evaluated in comparison with a fuzzy system developed using conventional tools (frameworks), which could represent an improvement in application performance.
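The abstract does not show LingPONFuzzy syntax, so the following hypothetical sketch only illustrates the combination being described: an IF-THEN rule with a fuzzy membership degree that is re-evaluated only when a fact it depends on notifies it of a change, rather than being polled on every cycle.

    class FuzzyFact:
        # A fact that notifies dependent rules only when its value changes.
        def __init__(self, value=0.0):
            self.value = value
            self.subscribers = []

        def set(self, value):
            if value != self.value:
                self.value = value
                for rule in self.subscribers:   # notification, not polling
                    rule.evaluate()

    def triangular(x, a, b, c):
        # Triangular membership function.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    class FuzzyRule:
        # IF temperature is 'hot' THEN drive fan (degree = membership).
        def __init__(self, fact, action):
            self.fact, self.action = fact, action
            fact.subscribers.append(self)

        def evaluate(self):
            degree = triangular(self.fact.value, 25.0, 35.0, 45.0)  # 'hot'
            if degree > 0.0:
                self.action(degree)

    temperature = FuzzyFact()
    FuzzyRule(temperature, lambda d: print(f"fan_speed degree: {d:.2f}"))
    temperature.set(30.0)   # change -> rule evaluated, degree 0.50
    temperature.set(30.0)   # no change -> no re-evaluation (the NOP-style saving)
    temperature.set(40.0)   # change -> rule evaluated again

The redundant-evaluation saving claimed in the abstract corresponds to the second set call above: a conventional fuzzy engine would re-run its inference cycle regardless.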
Abstract:
A large percentage of Vanier College's technology students do not attain their College degrees within the scheduled three years of their program. A closer investigation of the problem revealed that in many of these cases the students had completed all of their program's professional courses but not all of the required English and/or Humanities courses. Fortunately, most of these students do extend their stay at the College for the one or more semesters required for graduation, although some choose to go into the workforce without returning to complete the missing English and/or Humanities courses and without their College degrees. The purpose of this research was to discover whether there was any significant measure of association between a student's family linguistic background, family cultural background, high school average, and/or College English Placement Test results and his or her likelihood of succeeding in his or her English and/or Humanities courses within the scheduled three years of the program. Because of demographic differences between 'hard' and 'soft' technologies (in student population, more specifically gender ratios and average student ages in specific programs) and program differences (in writing requirements and the types of practical skill activities required), the research was limited, for the sake of a more uniform sample, to the hard technologies, where students work hands-on with hardware and/or computers and tend to have low overall research and writing requirements. Based on a review of current literature and observations made in one of the hard technology programs at Vanier College, eight research questions were developed. These questions were designed to examine different aspects of success in the English and Humanities courses, such as failure and completion rates and the number of courses remaining after the end of the fifth semester, as well as how the students assessed their ability to communicate in English. The eight research questions were broken down into a total of 54 hypotheses. The high number of hypotheses was required to address a total of seven independent variables: primary home language, high school language of instruction, student's place of birth (Canada, not Canada), student's parents' place of birth (both born in Canada, not both born in Canada), high school average, and English placement level (as determined by the College English Entry Test); and eleven dependent variables: the number of English courses completed, the number of English courses failed, whether all English courses were completed by the end of the 5th semester (yes, no), the number of Humanities courses completed, the number of Humanities courses failed, whether all Humanities courses were completed by the end of the 5th semester (yes, no), the total number of English and Humanities courses left, and the students' assessments of their ability to speak, read, and write in English. The data required to address the hypotheses were collected from two sources: the students themselves and the College. Fifth and sixth semester students from the Building Engineering Systems, Computer and Digital Systems, Computer Science, and Industrial Electronics Technology programs were surveyed to collect personal information including family cultural and linguistic history and current language usage, high school language of instruction, perceived fluency in speaking, reading, and writing English, and perceived difficulty in completing English and Humanities courses.
The College was able to provide current academic information on each of the students, including copies of college program planners and transcripts, and high school transcripts for students who had attended a high school in Quebec. Quantitative analyses were performed on the data using the SPSS statistical analysis program. Of the fifty-four hypotheses analysed, the results supported the research hypotheses in fourteen cases; in the other forty cases the null hypotheses had to be accepted. One finding was a strong significant association between a student's primary home language and place of birth and his or her perception of his or her ability to communicate in English (speak, read, and write): both students whose primary home language was not English and students who were not born in Canada considered themselves, on average, to be weaker in these skills than did students whose primary home language was English. Although this finding was noteworthy, the two most significant findings were the association between a student's English entry placement level and the number of English courses failed, and the association between the parents' place of birth and the student's likelihood of succeeding in both his or her English and Humanities courses. According to the research results, the mean number of English courses failed by students placed in the lowest entry level of College English was significantly different from that of students placed in any of the other entry-level English courses. In this sample, students placed in the lowest entry level of College English failed, on average, at least three times as many English courses as those placed in any of the other English entry-level courses. These results are significant enough that they will be brought to the attention of the appropriate College administration. The results also appeared to indicate that the most significant determining factor in a student's likelihood of completing his or her English and Humanities courses is his or her parents' place of birth (both born in Canada or not both born in Canada). Students with at least one parent who was not born in Canada would, on average, fail a significantly higher number of English courses, be significantly more likely to still have at least one English course left to complete by the end of the 5th semester, fail a significantly higher number of Humanities courses, be significantly more likely to still have at least one Humanities course left to complete by the end of the 5th semester, and have significantly more combined English and Humanities courses left to complete at the end of their 5th semester than students with both parents born in Canada. This strong association between students' parents' place of birth and their likelihood of succeeding in their English and Humanities courses within the three years of their program appears to indicate that acculturation may be a more significant factor than either language or high school average, for which no significant association was found with any of the English- and Humanities-related dependent variables. Although the sample size for this research was only 60 students and more research needs to be conducted to see whether these results hold for other groups within the College, the results are still significant.
If the College can identify, at admission, the students who are more likely to have difficulty completing their English and Humanities courses, it will have the opportunity to intervene during or before the first semester and offer these students the support they require to increase their chances of success, whether that support takes the form of classes or courses designed to meet their specific needs, special mentoring, tutoring, or other forms of assistance. With the necessary support, the identified students will have a greater opportunity of successfully completing their programs within the scheduled three years, while the College will have improved its capacity to meet the needs of its students.
Abstract:
The optical constants were obtained using Wolfe's method. These constants, the absorption coefficient (α), the refractive index (n), and the thickness of a thin film (d), are important in the optical characterization of the material. Wolfe's method was compared with the method employed by R. Swanepoel. A constrained nonlinear programming model was developed so that the optical constants of semiconducting thin films could be estimated solely from known transmission data, and a solution of the nonlinear programming model as a quadratic program was presented. The reliability of the proposed method was demonstrated through numerical experiments with spectral transmittance measurements on Cu3BiS3 thin films, obtaining values of α = 10378.34 cm⁻¹, n = 2.4595, d = 989.71 nm, and Eg = 1.39 eV.
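The abstract does not give the objective function or constraints, so the following is only a rough sketch of the general approach: fitting (n, d, α) to transmittance data by bounded least squares, using the standard Swanepoel-style transmission model for a uniform film on a transparent substrate (an assumption here; the paper's formulation, and its quadratic-programming solution, may differ). scipy is assumed for the optimizer.

    import numpy as np
    from scipy.optimize import minimize

    S = 1.51  # assumed refractive index of a glass substrate

    def transmittance(lam_nm, n, d_nm, alpha_cm):
        # Swanepoel-style transmission of a thin film on a thick transparent
        # substrate; wavelength-independent n and alpha, purely illustrative.
        x = np.exp(-alpha_cm * d_nm * 1e-7)       # absorbance factor
        phi = 4.0 * np.pi * n * d_nm / lam_nm     # phase thickness
        A = 16.0 * n**2 * S
        B = (n + 1.0) ** 3 * (n + S**2)
        C = 2.0 * (n**2 - 1.0) * (n**2 - S**2)
        D = (n - 1.0) ** 3 * (n - S**2)
        return A * x / (B - C * x * np.cos(phi) + D * x**2)

    lam = np.linspace(500.0, 1100.0, 200)         # wavelengths (nm)
    true = (2.46, 990.0, 10378.0)                 # n, d (nm), alpha (1/cm)
    T_meas = transmittance(lam, *true)            # synthetic "measured" data

    def objective(p):
        return np.sum((transmittance(lam, *p) - T_meas) ** 2)

    bounds = [(1.5, 3.5), (100.0, 3000.0), (1e2, 1e5)]  # physical constraints
    fit = minimize(objective, x0=(2.0, 800.0, 5e3), bounds=bounds,
                   method="L-BFGS-B")
    # Best-fit constants; the interference fringes make the problem
    # multimodal, so a good initial guess or a global search matters.
    print(fit.x)
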
Abstract:
One of the most visionary goals of Artificial Intelligence is to create a system able to mimic, and eventually surpass, the intelligence observed in biological systems, including, ambitiously, that of humans. The main distinctive strength of humans is their ability to build a deep understanding of the world by learning continuously and drawing on their experiences. This ability, found in various degrees in all intelligent biological beings, allows them to adapt and react properly to changes by incrementally expanding and refining their knowledge. Arguably, achieving this ability is one of the main goals of Artificial Intelligence and a cornerstone of the creation of intelligent artificial agents. Modern Deep Learning approaches have allowed researchers and industry to make great advances towards the resolution of many long-standing problems in areas like Computer Vision and Natural Language Processing. However, while the current age of renewed interest in AI has enabled the creation of extremely useful applications, a concerningly limited effort is being directed towards the design of systems able to learn continuously. The biggest obstacle that keeps an AI system from learning incrementally is the catastrophic forgetting phenomenon. This phenomenon, discovered in the 1990s, naturally occurs in Deep Learning architectures when classic learning paradigms are applied to learning incrementally from a stream of experiences. This dissertation revolves around Continual Learning, a sub-field of Machine Learning research that has recently made a comeback following the renewed interest in Deep Learning approaches. The work takes a comprehensive view of continual learning, considering algorithmic, benchmarking, and applicative aspects of the field. It also touches on community aspects such as the design and creation of research tools aimed at supporting Continual Learning research, and the theoretical and practical aspects of public competitions in this field.
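As a concrete illustration of catastrophic forgetting (a toy sketch, not an experiment from the dissertation), the snippet below trains a single logistic regressor sequentially on two conflicting synthetic tasks and shows accuracy on the first task collapsing once the second has been learned.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_task(center):
        # Two Gaussian blobs around +-center form one binary task.
        X = np.vstack([rng.normal(center, 1.0, (200, 2)),
                       rng.normal(-center, 1.0, (200, 2))])
        y = np.array([1] * 200 + [0] * 200)
        return X, y

    def sgd_epochs(w, X, y, epochs=50, lr=0.1):
        # Plain logistic-regression gradient descent: no replay, no
        # regularization, i.e. no protection against forgetting.
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w -= lr * X.T @ (p - y) / len(y)
        return w

    def accuracy(w, X, y):
        return np.mean((X @ w > 0) == (y == 1))

    # Task A and task B pull the weights in roughly opposite directions.
    XA, yA = make_task(np.array([2.0, 2.0]))
    XB, yB = make_task(np.array([-2.0, 2.0]))

    w = np.zeros(2)
    w = sgd_epochs(w, XA, yA)
    print("task A acc after A:", accuracy(w, XA, yA))   # high
    w = sgd_epochs(w, XB, yB)
    print("task A acc after B:", accuracy(w, XA, yA))   # near chance: forgetting
    print("task B acc after B:", accuracy(w, XB, yB))   # high

Continual Learning methods (replay buffers, regularization of important weights, architectural isolation, and so on) are, in essence, ways to keep the second training phase from destroying what the first one built.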
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics, and electronics are all key fields that depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it hard to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. It becomes even more complex with advanced functional materials, whose properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and where subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields, and many techniques and instruments are continuously developed to enable new possibilities, in both the experimental and computational realms. Scientists strive to apply cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and the proliferation of custom data formats and storage procedures, in both experimental and computational research. Results are difficult to find, interpret, and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it covers developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing methods for organizing non-homogeneous materials data; automating the use of device simulations to train machine learning models; and organizing scattered experimental data and using them to discover new patterns.
Abstract:
Due to the widespread, multipurpose use of document images and the current availability of a large number of document image repositories, robust information retrieval mechanisms and systems are increasingly in demand. This paper presents an approach to support the automatic generation of relationships among document images by exploiting Latent Semantic Indexing (LSI) and Optical Character Recognition (OCR). We developed the LinkDI (Linking of Document Images) service, which extracts and indexes document image content, computes its latent semantics, and defines relationships among images as hyperlinks. LinkDI was evaluated on document image repositories, and its performance was assessed by comparing the quality of the relationships created among textual documents with that of the relationships created among their respective document images. Using the same document images, we ran further experiments to compare the performance of LinkDI with and without the LSI technique. Experimental results showed that LSI can mitigate the effects of typical OCR misrecognition, which reinforces the feasibility of LinkDI relating OCR output even under high degradation.
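The abstract does not detail how LSI turns OCR output into hyperlinks; below is a minimal sketch of the general LSI pipeline (using scikit-learn, which is an assumption, not LinkDI's actual implementation): TF-IDF vectors built from OCR text are projected into a low-dimensional latent space, and pairs of documents with high cosine similarity there become candidate links.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical OCR output for four document images (note the OCR noise
    # in doc 1: "1atent" for "latent"; LSI tolerates some misrecognition
    # because related documents still share most of their vocabulary).
    docs = [
        "latent semantic indexing for document retrieval",
        "semantic indexing relates documents by 1atent topics",
        "optical character recognition of scanned images",
        "character recognition errors in scanned document images",
    ]

    tfidf = TfidfVectorizer().fit_transform(docs)       # term-document matrix
    lsa = TruncatedSVD(n_components=2, random_state=0)  # latent semantic space
    Z = lsa.fit_transform(tfidf)

    sim = cosine_similarity(Z)
    # Propose a hyperlink for every sufficiently similar pair of documents.
    links = [(i, j) for i in range(len(docs)) for j in range(i + 1, len(docs))
             if sim[i, j] > 0.8]
    print(links)   # expected: docs 0-1 and docs 2-3 cluster together
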
Abstract:
In Natural Language Processing (NLP) symbolic systems, several linguistic phenomena, for instance the thematic role relationships between sentence constituents, such as AGENT, PATIENT, and LOCATION, can be accounted for by employing a rule-based grammar. Another approach to NLP uses the connectionist model, which has the benefits of learning, generalization, and fault tolerance, among others. A third option merges the two previous approaches into a hybrid one: a symbolic thematic theory is used to supply the connectionist network with initial knowledge. Inspired by neuroscience, we propose a symbolic-connectionist hybrid system called BIO theta PRED (BIOlogically plausible thematic (theta) symbolic-connectionist PREDictor), designed to reveal the thematic grid assigned to a sentence. Its connectionist architecture takes as input a featural representation of the words (based on the verb/noun WordNet classification and on the classical semantic microfeature representation) and produces as output the thematic grid assigned to the sentence. BIO theta PRED is designed to "predict" the thematic (semantic) roles assigned to words in a sentence context, employing a biologically inspired training algorithm and architecture and adopting a psycholinguistic view of thematic theory.
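As a toy illustration of the input/output mapping described (hypothetical microfeatures and a plain softmax classifier, not the BIO theta PRED architecture or its biologically inspired training rule), the sketch below maps word microfeature vectors to thematic-role labels.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical semantic microfeatures: [animate, concrete, place, subject]
    ROLES = ["AGENT", "PATIENT", "LOCATION"]
    train = [
        (np.array([1.0, 1.0, 0.0, 1.0]), 0),  # animate subject  -> AGENT
        (np.array([0.0, 1.0, 0.0, 0.0]), 1),  # inanimate object -> PATIENT
        (np.array([0.0, 1.0, 1.0, 0.0]), 2),  # place noun       -> LOCATION
    ] * 50

    W = rng.normal(0.0, 0.1, (3, 4))  # one weight row per thematic role

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    for _ in range(200):                      # plain gradient descent
        for x, y in train:
            p = softmax(W @ x)
            W -= 0.05 * np.outer(p - np.eye(3)[y], x)

    x_new = np.array([1.0, 1.0, 0.0, 1.0])    # e.g. "the girl" as subject
    print(ROLES[int(np.argmax(W @ x_new))])   # -> AGENT

In the actual system, the feature vectors come from WordNet-based classifications and the classical microfeature scheme, and a full sentence yields one role per constituent, assembling the verb's thematic grid.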
Abstract:
This paper presents SMarty, a variability management approach for UML-based software product lines (PL). SMarty is supported by a UML profile, the SMartyProfile, and a process for managing variabilities, the SMartyProcess. SMartyProfile aims at representing variabilities, variation points, and variants in UML models by applying a set of stereotypes. SMartyProcess consists of a set of activities that is systematically executed to trace, identify, and control variabilities in a PL based on SMarty. It also identifies variability implementation mechanisms and analyzes specific product configurations. In addition, a more comprehensive application of SMarty is presented using SEI's Arcade Game Maker PL. An evaluation of SMarty and related work are discussed.