961 results for Applied Mathematics|Computer Engineering|Computer science
Abstract:
In this thesis, we present a quantitative approach that uses probabilistic verification techniques for the analysis of reliability, availability, maintainability, and safety (RAMS) properties of satellite systems. The subject of our research is satellites used in mission-critical industrial applications. Our verification results make a strong case for using probabilistic model checking to support RAMS analysis of satellite systems. This study is intended to build a foundation that helps reliability engineers with a basic background in model checking to apply probabilistic model checking to small satellite systems. We make two major contributions. The first is the application of RAMS analysis to satellite systems. In the past, RAMS analysis has been applied extensively in electrical and electronics engineering: it allows system designers and reliability engineers to predict the likelihood of failures from historical or current operational data. There is high potential for RAMS analysis in space science and engineering; however, there is a lack of standardisation and of suitable procedures for the correct study of RAMS characteristics of satellite systems. This thesis considers the promising application of RAMS analysis to satellite design, use, and maintenance, focusing on the system segments. Data collection and verification procedures are discussed, and a number of considerations are presented on how to predict the probability of failure. Our second contribution is leveraging the power of probabilistic model checking to analyse satellite systems. We present techniques for analysing satellite systems that differ from the more common quantitative approaches based on traditional simulation and testing; these techniques have not been applied in this context before. We present the use of probabilistic techniques via a suite of detailed examples, together with their analysis. The presentation is incremental, both in the complexity of the application domains and of the system models, and includes a detailed PRISM model of each scenario. We also provide results from practical work, together with a discussion of future improvements.
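The transient analysis that a PRISM model automates for such scenarios can be illustrated with a minimal numpy sketch: a hypothetical three-state discrete-time Markov chain for one satellite subsystem, with made-up transition probabilities, whose failure probability over a fixed horizon is obtained by repeatedly pushing the state distribution through the transition matrix. This is only an illustration of the idea, not a model from the thesis.

```python
import numpy as np

# Hypothetical 3-state DTMC for one subsystem:
# 0 = operational, 1 = degraded, 2 = failed (absorbing).
P = np.array([
    [0.995, 0.004, 0.001],
    [0.000, 0.980, 0.020],
    [0.000, 0.000, 1.000],
])

dist = np.array([1.0, 0.0, 0.0])   # start fully operational
for _ in range(1000):              # e.g. 1000 time steps / orbits
    dist = dist @ P                # one-step transient update

print(f"P(failure within horizon) = {dist[2]:.4f}")
```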
Abstract:
Part 6: Engineering and Implementation of Collaborative Networks
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat—it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative, and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression produced the most accurate results by accommodating non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality of life.
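As a rough sketch of the geographically weighted regression idea, the snippet below fits one weighted least-squares model per observation location using a Gaussian distance kernel. The synthetic data, bandwidth, and variable names are placeholders for illustration, not the dissertation's actual model.

```python
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    """Fit one weighted least-squares model per observation location."""
    n, k = X.shape
    Xd = np.hstack([np.ones((n, 1)), X])          # add intercept column
    betas = np.empty((n, k + 1))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return betas                                  # coefficients vary over space

# Purely illustrative synthetic data.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * coords[:, 0] + rng.normal(size=200)
local_betas = gwr_coefficients(X, y, coords, bandwidth=20.0)
```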
Abstract:
In this seminar, I will share my experience of the early stages of becoming an entrepreneur from a research background. Since 2008, I have been working with Prof. Mike Wald on an innovative video annotation tool called Synote. After about eight years of research around Synote, I applied for the Royal Academy of Engineering Enterprise Fellowship in order to focus on developing Synote for real clients and making it sustainable and profitable. I am now eight months into the fellowship, and it has totally changed my life: it is very exciting, but I am also struggling all the time. The seminar will briefly go through my experience so far of commercializing Synote from a research background. I will also discuss the valuable resources you can get from the RAEng Enterprise Hub and Future Worlds, a Southampton-based organization that helps startups. If you are a Ph.D. student or research fellow at the University and you want to start your own business, this is the seminar you want to attend.
Abstract:
Mathematics can be found all over the world, even in what could be considered an unrelated area, such as the fiber arts. In knitting, crochet, and counted-thread embroidery we can find concepts from algebra, graph theory, number theory, the geometry of transformations, and symmetry, as well as computer science. For example, many fiber art pieces embody notions related to symmetry groups. In this work, we focus on two areas of mathematics associated with knitting, crochet, and cross-stitch works: number theory and the geometry of transformations.
Abstract:
A finite-strain solid–shell element is proposed. It is based on least-squares in-plane assumed strains and assumed natural transverse shear and normal strains. The singular value decomposition (SVD) is used to define local (integration-point) orthogonal frames of reference solely from the Jacobian matrix. The complete finite-strain formulation is derived and tested. Assumed strains obtained from least-squares fitting are an alternative to enhanced-assumed-strain (EAS) formulations and, in contrast with these, the resulting element satisfies the patch test. Unlike the enhanced-assumed-strain case, no additional degrees of freedom are introduced, not even by means of static condensation. Least-squares fitting produces invariant finite-strain elements that are free of shear locking and amenable to incorporation in large-scale codes. With that goal, we use automatically generated code produced by AceGen and Mathematica. All benchmarks show excellent results, similar to the best available shell and hybrid solid elements, with significantly lower computational cost.
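One step mentioned above, building an orthogonal integration-point frame from the Jacobian matrix via the SVD, can be sketched in a few lines of numpy as a generic nearest-rotation (polar decomposition) construction. This is an illustrative sketch, not the element's actual AceGen implementation.

```python
import numpy as np

def local_frame_from_jacobian(J):
    """Orthogonal integration-point frame from a 3x3 Jacobian via the SVD."""
    U, s, Vt = np.linalg.svd(J)
    R = U @ Vt                      # closest rotation (polar decomposition)
    if np.linalg.det(R) < 0:        # enforce a right-handed frame
        U[:, -1] *= -1
        R = U @ Vt
    return R                        # columns form the local orthonormal frame

# Example with an arbitrary (slightly distorted) Jacobian.
J = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.2, 0.0],
              [0.0, 0.0, 0.9]])
R = local_frame_from_jacobian(J)
```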
Abstract:
The world of Computational Biology and Bioinformatics today integrates many different areas of expertise, including computer science and electronic engineering. A major aim of Data Science is the development and tuning of specific computational approaches to interpret the complexity of Biology. Molecular biologists and medical doctors rely heavily on interdisciplinary experts capable of understanding the biological background and of applying algorithms to find optimal solutions to their problems. With this problem-solving orientation, I was involved in two basic research fields: Cancer Genomics and Enzyme Proteomics. What I developed and implemented can therefore be considered a general effort to support data analysis both in Cancer Genomics and in Enzyme Proteomics, focusing on the enzymes that catalyse all the biochemical reactions in cells. Specifically, in Cancer Genomics I contributed to the characterization of the intratumoral immune microenvironment in gastrointestinal stromal tumours (GISTs), correlating immune cell population levels with tumour subtypes. I was involved in setting up strategies for the evaluation and standardization of different approaches to fusion transcript detection in sarcomas that can be applied in routine diagnostics; this was part of a coordinated effort of the Sarcoma working group of "Alleanza Contro il Cancro". In Enzyme Proteomics, I generated a derived database collecting all the human proteins and enzymes known to be associated with genetic diseases. I curated the data search in freely available databases such as PDB, UniProt, Humsavar, and ClinVar, and I was responsible for searching, updating, and handling the information content and for computing statistics. I also developed a web server, BENZ, which allows researchers to annotate an enzyme sequence with the corresponding Enzyme Commission number, the key feature that fully describes the catalysed reaction. In addition, I contributed substantially to the characterization of the enzyme-genetic disease association, towards a better classification of metabolic genetic diseases.
Abstract:
Sketches are a unique way to communicate: drawing a simple sketch does not require any training, sketches convey information that is hard to describe with words, they are powerful enough to represent almost any concept, and nowadays it is possible to draw directly on mobile devices. Motivated by these unique characteristics of sketches and fascinated by the human ability to imagine 3D objects from drawings, this thesis focuses on automatically associating geometric information with sketches. The main research directions of the thesis can be summarized as obtaining geometric information from freehand scene sketches to improve 2D sketch-based tasks, and investigating Vision-Language models to overcome the limitations of 3D sketch-based tasks. The first part of the thesis concerns the prediction of geometric information from scene sketches, improving scene-sketch-to-image generation and unlocking new creative effects. The thesis then presents a study of the embedding space of Vision-Language models over sketches, line renderings, and RGB renderings of 3D shapes, conducted to avoid relying on supervised datasets for 3D sketch-based tasks, which are limited and hard to acquire. Following these observations and results, Vision-Language models are applied to sketch-based shape retrieval without the need for training on supervised datasets. We then analyze the use of Vision-Language models for sketch-based 3D reconstruction in an unsupervised manner. In the final chapter we report the results of an additional project carried out during the PhD, which has led to the development of a framework for learning an embedding space of neural networks that can be navigated to obtain ready-to-use models with desired characteristics.
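The retrieval setting described above can be sketched as follows: a pretrained Vision-Language image encoder (left here as a placeholder function) embeds the query sketch and the renderings of candidate 3D shapes, and shapes are ranked by cosine similarity. The encoder, data, and ranking details are assumptions for illustration, not the thesis pipeline.

```python
import numpy as np

def embed_image(image):
    """Placeholder for a pretrained Vision-Language image encoder
    (e.g. a CLIP-style model); returns a unit-length feature vector."""
    v = np.resize(np.asarray(image, dtype=float).ravel(), 512)
    return v / (np.linalg.norm(v) + 1e-12)

def retrieve(sketch, shape_renderings, k=5):
    """Rank 3D shapes by cosine similarity between the sketch embedding
    and the embeddings of each shape's rendering."""
    q = embed_image(sketch)
    gallery = np.stack([embed_image(r) for r in shape_renderings])
    scores = gallery @ q                       # cosine similarity (unit vectors)
    order = np.argsort(-scores)[:k]            # indices of the top-k shapes
    return order, scores[order]
```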
Abstract:
This thesis reports on the two main areas of our research: introductory programming as the traditional way of accessing informatics, and the cultural teaching of informatics through unconventional pathways. The research on introductory programming aims to overcome challenges in traditional programming education and thus to increase participation in informatics. Improving access to informatics enables individuals to pursue more and better professional opportunities and to contribute to advancements in informatics. We aimed to balance active, student-centered activities with optimal support for novices at their level. Inspired by Productive Failure and exploring the concept of the notional machine, our work focused on developing Necessity Learning Design, a design to help novices tackle new programming concepts. Using this design, we implemented a learning sequence to introduce arrays and evaluated it in a real high-school context. The subsequent chapters discuss our experiences teaching CS1 in a remote-only scenario during the COVID-19 pandemic and our collaborative effort with primary school teachers to develop a learning module for teaching iteration using a visual programming environment. The research on teaching informatics principles through unconventional pathways, such as cryptography, aims to introduce informatics to a broader audience, particularly younger individuals who are less technically and professionally oriented. It emphasizes the importance of understanding the cultural and scientific aspects of informatics in order to focus on its societal value and its principles for active citizenship. After reflecting on computational thinking, and inspired by the big ideas of science and informatics, we describe our hands-on approach to teaching cryptography in high school, which leverages its key scientific elements to emphasize its social aspects. Additionally, we present an activity for teaching public-key cryptography using graphs, exploring fundamental concepts and methods in informatics and mathematics and their interdisciplinarity. In broadening the understanding of informatics, these research initiatives also aim to foster motivation and to prepare the ground for more professional learning of informatics.
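For readers unfamiliar with the public-key idea referred to above, a standard toy RSA example with deliberately tiny primes (unrelated to the graph-based classroom activity described in the thesis) makes the asymmetry between the public and private keys concrete.

```python
# Toy RSA with tiny primes; purely illustrative and not secure.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent (modular inverse): 2753

m = 42                     # message
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key d recovers m
```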
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics, and electronics are all key assets that depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it difficult to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. It becomes even more complex when dealing with advanced functional materials: their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields. Many techniques and instruments are continuously developed to enable new possibilities, both in the experimental and computational realms, and scientists strive to adopt cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and by the proliferation of custom data formats and storage procedures, in both experimental and computational research. Results are difficult to find, interpret, and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it covers developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing a method for organizing non-homogeneous materials data; automating the process of using device simulations to train machine learning models; and dealing with scattered experimental data and using them to discover new patterns.
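The descriptor-to-surrogate-model idea in the first point can be sketched generically: hand-crafted features per compound feed a regression model that stands in for an expensive computation. The random data, the descriptor count, and the choice of a random forest below are placeholders, not the thesis's actual features or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# X: one row per compound, columns = hand-crafted descriptors (stand-in data).
# y: target property that would normally come from simulation or experiment.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = 2.0 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200)

# Surrogate model evaluated with cross-validation.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```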
Abstract:
Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained increasing popularity and serve as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of the knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process consisting of steps such as the elicitation and formalization of requirements and the development, testing, refactoring, and release of the ontology. Testing the ontology is a crucial and occasionally overlooked step of the process, owing to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires a considerable amount of time and effort from ontology engineers. The lack of tool support is also evident in the requirements elicitation process. Here, the rise in the adoption and accessibility of knowledge graphs allows for the development and use of automated tools that assist with the elicitation of requirements from such a complementary source of data. Therefore, this doctoral research focuses on developing methods and tools that support the requirements elicitation and testing steps of an ontology engineering process. To support ontology testing, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method to extract competency questions from knowledge graphs. Both methods are evaluated through their implementation, and the results are promising.
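As a minimal sketch of mining a knowledge graph as a source of requirements (not the RevOnt method itself), the snippet below loads a hypothetical RDF dump with rdflib and counts predicate usage; frequently used relations are natural candidates for the relations that competency questions should cover. The file path is a placeholder.

```python
from rdflib import Graph

# Hypothetical local dump of a knowledge graph (path is a placeholder).
g = Graph()
g.parse("data/knowledge_graph.ttl", format="turtle")

# Count how often each predicate is used across the graph.
query = """
SELECT ?p (COUNT(*) AS ?uses)
WHERE { ?s ?p ?o }
GROUP BY ?p
ORDER BY DESC(?uses)
LIMIT 10
"""
for predicate, uses in g.query(query):
    print(predicate, uses)
```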
Abstract:
This thesis project is framed within the research field of Physics Education and aims to contribute to reflection on the importance of disciplinary identities when addressing interdisciplinarity through the lens of the Nature of Science (NOS). In particular, the study focuses on the module on the parabola and parabolic motion designed within the EU project IDENTITIES. The project aims to design modules that innovate pre-service teacher education in response to contemporary challenges, focusing on interdisciplinarity in curricular and STEM topics (especially between physics, mathematics, and computer science). The modules are designed according to a model of disciplines and interdisciplinarity that the IDENTITIES project has been elaborating on the basis of two main theoretical frameworks: the Family Resemblance Approach (FRA), reconceptualized for the Nature of Science (Erduran & Dagher, 2014), and the boundary crossing and boundary objects framework of Akkerman and Bakker (2011). The main aim of the thesis is to explore the impact of this interdisciplinary model in the specific case of the implementation of the parabola and parabolic motion module in a context of pre-service teacher education. To this end, we analyzed data collected during the implementation in order to investigate, in particular, the role of the FRA as a learning tool to: a) elaborate on the concept of "discipline" within the broader problem of defining interdisciplinarity; b) compare the epistemic cores of physics and mathematics; and c) develop epistemic skills and interdisciplinary competences in student-teachers. The analysis of the data led us to recognize three different roles played by the FRA: the FRA as an epistemological activator, the FRA as scaffolding for reasoning about and navigating (inhabiting) complexity, and the FRA as a lens for investigating the relationship between physics and mathematics in the historical case.
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values of the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. We then provide the average cross-validated accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, from which one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of kernel function and parameter value, as well as the choice of feature extractor, are critical decisions, although the choice of wavelet family seems less relevant. The statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
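A minimal sketch of the kind of pipeline assessed here, wavelet-derived statistical features feeding a Gaussian (RBF) kernel SVM evaluated by cross-validation, is shown below using pywt and scikit-learn. The random signals, feature choices, and kernel settings are placeholders rather than the paper's experimental protocol.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_features(signal, wavelet="db4", level=4):
    """Statistical summary of discrete wavelet transform sub-bands."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)), np.std(c), np.max(np.abs(c))]
    return np.array(feats)

# Random signals standing in for the clinical EEG recordings.
rng = np.random.default_rng(0)
signals = rng.normal(size=(100, 4096))
labels = rng.integers(0, 2, size=100)

X = np.vstack([dwt_features(s) for s in signals])
clf = SVC(kernel="rbf", gamma=0.1, C=1.0)     # Gaussian (RBF) kernel SVM
print(cross_val_score(clf, X, labels, cv=5).mean())
```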
Abstract:
Today, several different unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied to cluster pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named the Enhanced Independent Component Analysis Mixture Model (EICAMM), built by introducing modifications to the Independent Component Analysis Mixture Model (ICAMM). These modifications address some of the model's limitations and aim to make it more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segment images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results for the proposals presented herein.
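The pre-process-then-cluster workflow can be illustrated with a generic stand-in: a Sobel edge map computed with scikit-image and per-pixel features clustered with k-means from scikit-learn in place of EICAMM, and without the Sparse Code Shrinkage denoising step. This is only a rough sketch of the workflow, not the paper's method.

```python
import numpy as np
from skimage import data, filters
from sklearn.cluster import KMeans

# Example image and its Sobel edge map (the paper pairs Sobel with
# Sparse Code Shrinkage denoising; only the Sobel step is sketched here).
image = data.camera().astype(float) / 255.0
edges = filters.sobel(image)

# Per-pixel features: intensity plus edge strength, clustered without labels.
features = np.stack([image.ravel(), edges.ravel()], axis=1)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(image.shape)   # cluster index per pixel
```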
Abstract:
A model is presented in which agents show discrete behavior in their actions but hold continuous opinions that are updated by interacting with other agents. This new updating rule is applied to both the voter and Sznajd models for interaction between neighbors, and its consequences are discussed. The appearance of extremists is naturally observed and seems to be a characteristic of this model.
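A toy simulation of the general idea (discrete observable actions, continuous hidden opinions) can be written in a few lines of numpy. The pairwise update rule, the parameters, and the absence of a neighbor lattice below are illustrative assumptions and do not reproduce the paper's voter/Sznajd dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps, mu = 100, 10_000, 0.3
opinions = rng.uniform(-1.0, 1.0, size=n_agents)   # continuous opinions

for _ in range(n_steps):
    i, j = rng.integers(0, n_agents, size=2)
    if i == j:
        continue
    # Agent i only observes j's discrete action (the sign of j's opinion)
    # and shifts its own continuous opinion toward that action.
    action_j = np.sign(opinions[j]) if opinions[j] != 0 else 1.0
    opinions[i] += mu * (action_j - opinions[i])

actions = np.sign(opinions)                          # discrete behaviour
extremists = np.mean(np.abs(opinions) > 0.95)
print(f"fraction of agents near the extremes: {extremists:.2f}")
```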