902 results for Math Applications in Computer Science
Abstract:
The aim of this dissertation was to investigate flexible polymer-nanoparticle composites with unique magnetic and electrical properties. Toward this goal, two distinct projects were carried out. The first project explored the magneto-dielectric properties and morphology of flexible polymer-nanoparticle composites that possess high permeability (µ), high permittivity (ε), and minimal dielectric and magnetic loss (tan δε, tan δµ). The main materials challenges were the synthesis of magnetic nanoparticle fillers displaying high saturation magnetization (Ms) and limited coercivity, and their homogeneous dispersion in a polymeric matrix. Nanostructured magnetic fillers, including polycrystalline iron core-shell nanoparticles and constructively assembled superparamagnetic iron oxide nanoparticles, were synthesized and dispersed uniformly in an elastomer matrix to minimize conductive losses. The resulting composites demonstrated promising permittivity (22.3) and permeability (3) with sustained low dielectric (0.1) and magnetic (0.4) losses at frequencies below 2 GHz. This study demonstrated nanocomposites with a tunable magnetic resonance frequency, which can be used to develop compact, flexible, and highly efficient radio frequency devices. The second project focused on fundamental research into methods for the design of highly conductive polymer-nanoparticle composites that can maintain high electrical conductivity under tensile strains exceeding 100%. We investigated a simple solution-spraying method to fabricate stretchable conductors based on elastomeric block copolymer fibers and silver nanoparticles. Silver nanoparticles were assembled both in and around the block copolymer fibers, forming interconnected dual nanoparticle networks that provide conductive pathways both within the fibers and on their outer surfaces. Stretchable composites with conductivity values reaching 9000 S/cm maintained 56% of their initial conductivity after 500 cycles at 100% strain. The manufacturing method developed in this research could pave the way towards direct deposition of flexible electronic devices on substrates of any shape. The electrical and electromechanical properties of these dual silver nanoparticle network composites make them promising materials for the future construction of stretchable circuitry for displays, solar cells, antennas, and strain and tactile sensors.
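As a hedged back-of-the-envelope illustration (using the reported values, not a calculation taken from the dissertation), the wavelength inside a magneto-dielectric composite shrinks with the refractive-index factor:

    \[
      \lambda \;=\; \frac{\lambda_0}{\sqrt{\varepsilon_r \mu_r}}
      \;\approx\; \frac{\lambda_0}{\sqrt{22.3 \times 3}}
      \;\approx\; \frac{\lambda_0}{8.2},
    \]

so a resonant element embedded in a composite with the quoted permittivity and permeability could, in principle, be made roughly eight times smaller than its free-space counterpart at the same operating frequency, assuming those values hold at that frequency.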
Abstract:
This article presents an interdisciplinary experience that brings together two areas of computer science: didactics and philosophy. As such, the article introduces a relatively unexplored area of research, not only in Uruguay but in the whole Latin American region. Reflection on the ontological status of computer science, its epistemic and educational problems, and its relationship with technology allows us to elaborate a critical analysis of the discipline and of its social perception as a basic science.
Abstract:
The primary goals of this study are to: embed sustainable concepts of energy consumption into certain parts of the existing Computer Science curriculum for English schools; investigate how to motivate 7-to-11-year-old kids to learn these concepts; promote responsible ICT (Information and Communications Technology) use by these kids in their daily life; and raise their awareness of today's ecological challenges. The sustainability-related ICT lessons developed aim to provoke computational thinking and creativity, and to foster understanding of the environmental impact of ICT and the positive environmental impact of small changes in users' energy consumption behaviour. Including sustainability in the Computer Science curriculum is important because ICT is both a solution to and one of the causes of current world ecological problems. This research follows an Agile software development methodology. In order to achieve the aforementioned goals, sustainability, curriculum, and technical requirements are first analysed. Second, the web-based user interface is designed. In parallel, a set of three online lessons (video, slideshow, and game) is created for the website GreenICTKids.com, taking into account several green design patterns. Finally, the evaluation phase involves the collection of adults' and kids' feedback on the following: the user interface; the contents; user interaction; and the impact on the kids' sustainability awareness and on their behaviour with technologies. In conclusion, the research outcomes are as follows: 92% of the adults learnt more about energy consumption; 80% of the kids are motivated to learn about energy consumption and found the website easy to use; 100% of the kids understood the contents and liked the website's visual aspect; and 100% of the kids will try to apply in their daily life what they learnt through the online lessons.
Abstract:
The atomic-level structure and chemistry of materials ultimately dictate their observed macroscopic properties and behavior. As such, an intimate understanding of these characteristics allows for better materials engineering and improvements in the resulting devices. In our work, two material systems were investigated using advanced electron and ion microscopy techniques, relating the measured nanoscale traits to overall device performance. First, transmission electron microscopy and electron energy loss spectroscopy (TEM-EELS) were used to analyze interfacial states at the semiconductor/oxide interface in wide-bandgap SiC microelectronics. This interface contains defects that significantly diminish SiC device performance, and their fundamental nature remains generally unresolved. The impacts of various microfabrication techniques were explored, examining both current commercial and next-generation processing strategies. In further investigations, machine learning techniques were applied to the EELS data, revealing previously hidden Si, C, and O bonding states at the interface, which help explain the origins of mobility enhancement in SiC devices. Finally, the impacts of SiC bias temperature stressing on the interfacial region were explored. In the second system, focused ion beam/scanning electron microscopy (FIB/SEM) was used to reconstruct 3D models of solid oxide fuel cell (SOFC) cathodes. Since the specific degradation mechanisms of SOFC cathodes are poorly understood, FIB/SEM and TEM were used to analyze and quantify changes in the microstructure during performance degradation. Novel strategies for microstructure calculation from FIB-nanotomography data were developed and applied to LSM-YSZ and LSCF-GDC composite cathodes, aged with environmental contaminants to promote degradation. In LSM-YSZ, migration of both La and Mn cations to the grain boundaries of YSZ was observed using TEM-EELS. Few substantial changes, however, were observed in the overall microstructure of the cells, correlating with a lack of performance degradation induced by H2O exposure. Using similar strategies, a series of LSCF-GDC cathodes, aged in H2O, CO2, and Cr-vapor environments, was analyzed. FIB/SEM observation revealed considerable formation of secondary phases within these cathodes, and quantifiable modifications of the microstructure. In particular, Cr-poisoning was observed to cause substantial byproduct formation, which was correlated with drastic reductions in cell performance.
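The abstract does not name the machine learning techniques applied to the EELS data; a common, generic choice for unmixing overlapping spectral signatures is non-negative matrix factorization, sketched below under that assumption (the data file and array shapes are hypothetical):

    # Hedged sketch: unmixing an EELS spectrum image with non-negative matrix
    # factorization (NMF), a technique commonly used to separate overlapping
    # bonding signatures. The input file and its shape are hypothetical.
    import numpy as np
    from sklearn.decomposition import NMF

    # Load a spectrum image: (rows, cols, energy_channels), non-negative counts.
    spectra = np.load("sic_sio2_interface_eels.npy")      # hypothetical file
    n_rows, n_cols, n_channels = spectra.shape
    X = spectra.reshape(-1, n_channels)                    # one spectrum per pixel

    # Factor X ~ W @ H: rows of H are spectral endmembers (candidate bonding
    # states); columns of W are their spatial abundances across the interface.
    model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)
    H = model.components_

    abundance_maps = W.reshape(n_rows, n_cols, -1)         # where each state appears
    print("Endmember spectra shape:", H.shape)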
Abstract:
With wireless vehicular communications, Vehicular Ad Hoc Networks (VANETs) enable numerous applications that enhance traffic safety, traffic efficiency, and the driving experience. However, VANETs also impose severe security and privacy challenges that need to be thoroughly investigated. In this dissertation, we enhance the security, privacy, and applications of VANETs by 1) designing application-driven security and privacy solutions for VANETs, and 2) designing appealing VANET applications with proper security and privacy assurance. First, the security and privacy challenges of VANETs with the greatest application significance are identified and thoroughly investigated. Combining theoretical novelty with realistic considerations, the proposed security and privacy schemes are especially well suited to VANETs. Specifically, multi-hop communications in VANETs suffer from packet dropping, packet tampering, and communication failures, which have not been satisfactorily tackled in the literature. Thus, a lightweight reliable and faithful data packet relaying framework (LEAPER) is proposed to ensure reliable and trustworthy multi-hop communications by enhancing the cooperation of neighboring nodes. Message verification, including both content and signature verification, is generally computation-intensive and imposes severe scalability issues on each node. The resource-aware message verification (RAMV) scheme is proposed to ensure resource-aware, secure, and application-friendly message verification in VANETs. On the other hand, to make VANETs acceptable to privacy-sensitive users, the identity and location privacy of each node should be properly protected. To this end, a joint privacy and reputation assurance (JPRA) scheme is proposed to synergistically support privacy protection and reputation management by reconciling their inherently conflicting requirements. In addition, the privacy implications of short-time certificates are thoroughly investigated in a short-time certificates-based privacy protection (STCP2) scheme, to make privacy protection in VANETs feasible with short-time certificates. Second, three novel solutions, namely VANET-based ambient ad dissemination (VAAD), general-purpose automatic survey (GPAS), and VehicleView, are proposed to support appealing value-added applications based on VANETs. These solutions all follow practical application models, and an incentive-centered architecture is proposed for each solution to balance the conflicting requirements of the involved entities. Moreover, the critical security and privacy challenges of these applications are investigated and addressed with novel solutions. Thus, with proper security and privacy assurance, these solutions show great application significance and economic potential for VANETs. By enhancing the security, privacy, and applications of VANETs, this dissertation fills the gap between existing theoretical research and the realistic implementation of VANETs, facilitating their real-world deployment.
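To make the verification bottleneck concrete, the sketch below shows the generic per-message ECDSA sign/verify step that every receiving node must repeat; it uses the Python cryptography package and illustrates the underlying primitive only, not the dissertation's RAMV scheme (the beacon payload is hypothetical):

    # Hedged sketch: the per-message sign/verify operation whose cost motivates
    # resource-aware verification schemes. Generic illustration, not the
    # dissertation's actual protocol.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    private_key = ec.generate_private_key(ec.SECP256R1())   # sender's key pair
    public_key = private_key.public_key()

    beacon = b"speed=72;heading=NE;pos=41.1,-8.6;t=1700000000"  # hypothetical payload
    signature = private_key.sign(beacon, ec.ECDSA(hashes.SHA256()))

    # Every receiving vehicle repeats this check for each incoming message,
    # which is what makes verification a scalability concern in dense traffic.
    try:
        public_key.verify(signature, beacon, ec.ECDSA(hashes.SHA256()))
        print("message accepted")
    except InvalidSignature:
        print("message rejected")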
Abstract:
Actinin and spectrin proteins are members of the Spectrin Family of Actin Crosslinking Proteins. The importance of these proteins in the cytoskeleton is demonstrated by the fact that they are common targets for disease-causing mutations. In their most prominent roles, actinin and spectrin are responsible, respectively, for stabilising and maintaining the muscle architecture during contraction, and for providing shape and elasticity to the red blood cell in circulation. To carry out such roles, actinin and spectrin must possess important mechanical and physical properties. These attributes are desirable when choosing a building block for protein-based nanoconstruction. In this study, I assess the contribution of several disease-associated mutations in the actinin-1 actin-binding domain that have recently been linked to a rare platelet disorder, congenital macrothrombocytopenia. I investigate the suitability of both actinin and spectrin proteins as potential building blocks for nanoscale structures, and I evaluate a fusion-based assembly strategy to bring about self-assembly of protein nanostructures. I report that the actinin-1 mutant proteins display increased actin binding compared to wild-type (WT) actinin-1. I find that both actinin and spectrin exhibit enormous potential as nano-building blocks in terms of their stability and ability to self-assemble, and I successfully design and create homodimeric and heterodimeric bivalent building blocks using the fusion-based assembly strategy. Overall, this study has gathered information that will advance our knowledge of the natural functions of actinin and spectrin, as well as their potential non-natural functions in protein nanotechnology.
Abstract:
The very nature of computer science, with its constant changes, forces those who wish to keep up to adapt and react quickly. Large companies invest in staying up to date in order to generate revenue and remain active on the market. Universities, on the other hand, need to apply the same practices of staying up to date with industry needs in order to produce industry-ready engineers. By interviewing former students, now engineers in the industry, and current university staff, this thesis aims to learn whether there is room for enhancing the education through different lecturing approaches and/or curriculum adaptation and development. In order to address these concerns, qualitative research has been conducted, focusing on data collection through semi-structured life-world interviews. The method used follows the seven stages of research interviewing introduced by Kvale and focuses on collecting and preparing relevant data for analysis. The collected data is transcribed, refined, and further analyzed in the "Findings and analysis" chapter. The focus of the analysis was to answer the three research questions: how higher education impacts a Computer Science and Informatics engineer's job, how to better support the transition from studies to working in the industry, and how to develop a curriculum that supports the previous two. Unaltered quoted extracts are presented and individually analyzed. To paint a fuller picture, a theme-wise analysis is presented, summarizing valuable themes that recurred throughout the interviewing phase. The findings imply that there are several factors directly influencing the quality of education. On the student side, these mostly concern expectations of and dedication to their studies; on the university side, it is commitment to the curriculum development process. Due to time and resource limitations, this research provides findings from a narrowed scope, but it can serve as a solid foundation for further development, possibly as PhD research.
Abstract:
This is a CoLab Workshop organized as an initiative of the UT Austin | Portugal Program to reinforce Portuguese competences in Nonlinear Mechanics and in complex problems arising from applications of mathematical modeling and simulation in the Life Sciences. The Workshop provides a place to exchange recent developments, discoveries, and progress in this challenging research field. The main goal is to bring together doctoral candidates, postdoctoral scientists, and graduates interested in the field, giving them the opportunity to engage in scientific interactions and make new connections with established experts in the interdisciplinary topics covered by the event. Another important goal of the Workshop is to promote collaboration between members of the different areas of the UT Austin | Portugal community.
Abstract:
A High-Performance Computing (HPC) job dispatcher is a critical software component that assigns finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time required to compute them cannot exceed a certain threshold without affecting normal system functioning. In addition, a job dispatcher must deal with considerable uncertainty: submission times, the number of requested resources, and job durations. Heuristic-based techniques have been broadly used in HPC systems; they produce solutions in a short time, at the cost of (sub-)optimality. Moreover, their scheduling and resource allocation components are separated, which leads to decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are being used for modern applications, such as big data analytics and predictive model building, that in general employ many short jobs. However, this information is unknown at dispatching time, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to tackling job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to meet the challenges of on-line dispatching, such as generating dispatching decisions within a brief period and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are more suitable for HPC systems running modern applications: they generate on-line dispatching decisions within an appropriate time and make effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
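To make the CP formulation concrete, the following toy sketch dispatches a handful of jobs onto a fixed pool of cores with Google OR-Tools CP-SAT; the workload, objective, and scale are illustrative only and do not reproduce the dissertation's dispatchers:

    # Hedged sketch: a toy CP model for job dispatching on a single machine,
    # written with Google OR-Tools CP-SAT. Job data and objective are illustrative.
    from ortools.sat.python import cp_model

    jobs = [  # (requested_cores, expected_duration) -- hypothetical workload
        (2, 5), (1, 3), (4, 7), (1, 2),
    ]
    total_cores, horizon = 4, sum(d for _, d in jobs)

    model = cp_model.CpModel()
    starts, intervals, demands = [], [], []
    for i, (cores, duration) in enumerate(jobs):
        start = model.NewIntVar(0, horizon, f"start_{i}")
        end = model.NewIntVar(0, horizon, f"end_{i}")
        intervals.append(model.NewIntervalVar(start, duration, end, f"job_{i}"))
        starts.append(start)
        demands.append(cores)

    # Jobs running at the same time cannot exceed the machine's core count.
    model.AddCumulative(intervals, demands, total_cores)
    # Minimize total waiting time (all jobs are assumed to arrive at t = 0).
    model.Minimize(sum(starts))

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        for i, s in enumerate(starts):
            print(f"job {i} starts at t={solver.Value(s)}")

In a real on-line setting this model would be re-solved as jobs arrive, under a strict time limit on the solver, which is the regime the dissertation targets.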
Abstract:
In the last decades, Artificial Intelligence has witnessed multiple breakthroughs in deep learning. In particular, purely data-driven approaches have opened the way to a wide variety of successful applications due to the large availability of data. Nonetheless, the integration of prior knowledge is still required to compensate for specific issues like lack of generalization from limited data, fairness, robustness, and biases. In this thesis, we analyze the methodology of integrating knowledge into deep learning models in the field of Natural Language Processing (NLP). We start by remarking on the importance of knowledge integration. We highlight the possible shortcomings of these approaches and investigate the implications of integrating unstructured textual knowledge. We introduce Unstructured Knowledge Integration (UKI) as the process of integrating unstructured knowledge into machine learning models. We discuss UKI in the field of NLP, where knowledge is represented in a natural language format. We identify UKI as a complex process comprising multiple sub-processes, different knowledge types, and knowledge integration properties to guarantee. We remark on the challenges of integrating unstructured textual knowledge and draw connections with well-known research areas in NLP. We provide a unified vision of structured knowledge extraction (KE) and UKI by identifying KE as a sub-process of UKI. We investigate some challenging scenarios where structured knowledge is not a feasible prior assumption and formulate each task from the point of view of UKI. We adopt simple yet effective neural architectures and discuss the challenges of such an approach. Finally, we identify KE as a form of symbolic representation. From this perspective, we remark on the need to define sophisticated UKI processes to verify the validity of knowledge integration. To this end, we foresee frameworks capable of combining symbolic and sub-symbolic representations for learning as a solution.
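As a minimal, hedged illustration of unstructured knowledge integration (a simple baseline, not necessarily one of the thesis's architectures), the sketch below pairs an input with a supporting natural-language knowledge passage and lets a pretrained encoder read both jointly; the model name and texts are assumptions:

    # Hedged sketch: feed a knowledge passage alongside the input so the encoder
    # can attend across the input/knowledge boundary instead of relying only on
    # its parameters. Model name and example texts are illustrative.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_name = "bert-base-uncased"                      # assumption; any encoder works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    claim = "Bats are blind."                             # input to classify
    knowledge = "Bats are not blind; most species rely on echolocation and vision."

    # Knowledge is passed as the second text segment of the encoder input.
    inputs = tokenizer(claim, knowledge, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    print("predicted label:", logits.argmax(dim=-1).item())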
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code, where programming languages are treated in place of natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis mainly focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different code granularity levels (file and method level). The data exploited and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with other related works are discussed.
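As a hedged illustration of PLI framed as text classification (a simple character n-gram baseline, not the thesis's text- or image-based models), the sketch below trains a classifier on a few toy snippets:

    # Hedged sketch: programming language identification as text classification
    # over character n-grams. Snippets and labels are toy data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    snippets = [
        "def add(a, b):\n    return a + b",
        "public static int add(int a, int b) { return a + b; }",
        "fn add(a: i32, b: i32) -> i32 { a + b }",
        "print('hello')",
        "System.out.println(\"hello\");",
        "println!(\"hello\");",
    ]
    labels = ["Python", "Java", "Rust", "Python", "Java", "Rust"]

    # Character n-grams capture language-specific tokens such as 'def ', '::', '->'.
    classifier = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    )
    classifier.fit(snippets, labels)
    print(classifier.predict(["def mul(x, y):\n    return x * y"]))  # unseen snippet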
Abstract:
Deep Neural Networks (DNNs) have revolutionized a wide range of applications beyond traditional machine learning and artificial intelligence fields, e.g., computer vision, healthcare, natural language processing, and others. At the same time, edge devices have become central in our society, generating an unprecedented amount of data that could be used to train data-hungry models such as DNNs. However, the potentially sensitive or confidential nature of the gathered data poses privacy concerns when storing and processing it in centralized locations. To this end, decentralized learning decouples model training from the need to directly access raw data, by alternating on-device training and periodic communication. The ability to distill knowledge from decentralized data, however, comes at the cost of facing more challenging learning settings, such as coping with heterogeneous hardware and network connectivity, statistical diversity of data, and ensuring verifiable privacy guarantees. This Thesis proposes an extensive overview of the decentralized learning literature, including a novel taxonomy and a detailed description of the most relevant system-level contributions for privacy, communication efficiency, data and system heterogeneity, and poisoning defense. Next, this Thesis presents the design of an original solution to tackle communication efficiency and system heterogeneity, and empirically evaluates it in federated settings. For communication efficiency, an original method, specifically designed for Convolutional Neural Networks, is also described and evaluated against the state of the art. Furthermore, this Thesis provides an in-depth review of recently proposed methods to tackle the performance degradation introduced by data heterogeneity, followed by empirical evaluations on challenging data distributions, highlighting the strengths and possible weaknesses of the considered solutions. Finally, this Thesis presents a novel perspective on the use of Knowledge Distillation as a means of optimizing decentralized learning systems in settings characterized by data or system heterogeneity. Our vision of relevant future research directions closes the manuscript.
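For context, the canonical baseline in the federated settings mentioned above is federated averaging (FedAvg); the sketch below is a generic PyTorch rendering of it, not the thesis's original communication-efficiency or heterogeneity methods:

    # Hedged sketch: federated averaging (FedAvg). Clients train locally on
    # private data and only parameter updates are aggregated by the server.
    import copy
    import torch
    import torch.nn as nn

    def local_update(global_model, data, epochs=1, lr=0.01):
        """Train a copy of the global model on one client's private batches."""
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in data:                      # data: iterable of (inputs, labels)
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        return local.state_dict()

    def fedavg(global_model, client_states, client_sizes):
        """Weighted average of client parameters, loaded back into the global model."""
        total = sum(client_sizes)
        avg = copy.deepcopy(client_states[0])
        for key in avg:
            avg[key] = sum(
                state[key] * (n / total)
                for state, n in zip(client_states, client_sizes)
            )
        global_model.load_state_dict(avg)
        return global_model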
Abstract:
The pervasive availability of connected devices in every industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By executing totally or partially closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. That risks undermining the principle of generality that underlies the cloud computing economy of scale, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable for enabling the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
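The decoupling described above can be pictured as a backend-agnostic I/O facade; the sketch below is purely illustrative, with hypothetical class and function names that do not correspond to the thesis's actual APIs:

    # Hedged sketch: a backend-agnostic channel interface exposed to applications,
    # with the accelerated implementations hidden behind it. All names are
    # hypothetical illustrations of the decoupling idea.
    from abc import ABC, abstractmethod

    class AcceleratedChannel(ABC):
        """Uniform send/receive interface seen by applications."""

        @abstractmethod
        def send(self, buf: bytes) -> None: ...

        @abstractmethod
        def recv(self, nbytes: int) -> bytes: ...

    class SocketChannel(AcceleratedChannel):
        """Fallback backend built on ordinary kernel sockets."""
        def __init__(self, sock):
            self._sock = sock
        def send(self, buf: bytes) -> None:
            self._sock.sendall(buf)
        def recv(self, nbytes: int) -> bytes:
            return self._sock.recv(nbytes)

    class RdmaChannel(AcceleratedChannel):
        """Placeholder for an RDMA-backed implementation (zero-copy transfers);
        a provider would register memory and post work requests here."""
        def send(self, buf: bytes) -> None:
            raise NotImplementedError("bind to an RDMA library, e.g. verbs bindings")
        def recv(self, nbytes: int) -> bytes:
            raise NotImplementedError

    def open_channel(endpoint: str, backend: str = "socket") -> AcceleratedChannel:
        """Applications ask for a channel; the platform picks the backend."""
        if backend == "socket":
            import socket
            host, port = endpoint.split(":")
            return SocketChannel(socket.create_connection((host, int(port))))
        if backend == "rdma":
            return RdmaChannel()
        raise ValueError(f"unknown backend: {backend}")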
Abstract:
Colloidal particles have been used to template the electrosynthesis of several materials, such as semiconductors, metals, and alloys. The method allows good control over the thickness of the resulting material by choosing the appropriate charge applied to the system, and it is able to produce high-density deposited materials without shrinkage. These materials are a true model of the template structure and, due to the high surface areas obtained, are very promising for use in electrochemical applications. In the present work, the assembly of monodisperse polystyrene templates was conducted over gold, platinum, and glassy carbon substrates in order to demonstrate the electrodeposition of an oxide, a conducting polymer, and a hybrid inorganic-organic material with applications in the supercapacitor and sensor fields. The performance of the resulting nanostructured films has been compared with that of the analogous bulk materials, and the results achieved are presented in this paper.
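The charge-based thickness control mentioned above follows from Faraday's law of electrolysis; as a generic, hedged statement of the relation (standard symbols, no values taken from the paper):

    \[
      d \;=\; \frac{Q\,M}{z\,F\,\rho\,A},
    \]

where Q is the total charge passed, M the molar mass of the deposited material, z the number of electrons transferred per formula unit, F the Faraday constant, ρ the deposit density, and A the electrode area; fixing Q therefore fixes the nominal film thickness d, assuming 100% current efficiency.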