876 results for COMPUTER SCIENCE, THEORY


Relevance: 90.00%

Abstract:

Requirements specification has long been recognized as a critical activity in software development processes because of its impact on project risks when poorly performed. A large number of studies address theoretical aspects, propose techniques, and recommend practices for Requirements Engineering (RE). To be successful, RE has to ensure that the specified requirements are complete and correct, meaning that all intents of the stakeholders in a given business context are covered by the requirements and that no unnecessary requirement is introduced. However, accurately capturing the business intents of the stakeholders remains a challenge and is a major factor in software project failures. This master's dissertation presents a novel method, referred to as Problem-Based SRS, aimed at improving the quality of the Software Requirements Specification (SRS) in the sense that the stated requirements provide suitable answers to the customer's real business issues. In this approach, the knowledge about the software requirements is constructed from the knowledge about the customer's problems. Problem-Based SRS organizes activities and outcome objects in a process with five main steps. It supports the software requirements engineering team in systematically analyzing the business context and specifying the software requirements, while also taking into account an initial vision of the software. The quality of the specifications is evaluated using traceability techniques and axiomatic design principles. The case studies conducted and presented in this document indicate that the proposed method can contribute significantly to improving the software requirements specification.
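
As a hedged illustration of the traceability check described above (the problem and requirement names below are hypothetical, not part of the method's definition), a short Python sketch: completeness means every business problem traces to at least one requirement, and correctness means no requirement is left without a motivating problem.

# Hypothetical traceability data: business problems mapped to the
# requirements that address them (illustrative names only).
problems = {
    "P1 late order confirmations": {"R1", "R2"},
    "P2 duplicate customer records": {"R3"},
}
requirements = {"R1", "R2", "R3", "R4"}

traced = set().union(*problems.values())

# Completeness: every stated problem is covered by at least one requirement.
uncovered = [p for p, reqs in problems.items() if not reqs]
# Correctness: no requirement exists without a motivating problem.
orphans = requirements - traced

print("uncovered problems:", uncovered)   # expect []
print("orphan requirements:", orphans)    # expect {'R4'}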

Relevance: 90.00%

Abstract:

International audience

Relevance: 90.00%

Abstract:

Part 3: Product-Service Systems

Relevance: 90.00%

Abstract:

The past several years have seen the surprising and rapid rise of Bitcoin and other cryptocurrencies. These are decentralized peer-to-peer networks that allow users to transmit money, to compose financial instruments, and to enforce contracts between mutually distrusting peers, and that show great promise as a foundation for financial infrastructure that is more robust, efficient and equitable than ours today. However, it is difficult to reason about the security of cryptocurrencies. Bitcoin is a complex system, comprising many intricate and subtly-interacting protocol layers. At each layer it features design innovations that (prior to our work) have not undergone any rigorous analysis. Compounding the challenge, Bitcoin is but one of hundreds of competing cryptocurrencies in an ecosystem that is constantly evolving. The goal of this thesis is to formally reason about the security of cryptocurrencies, reining in their complexity and providing well-defined and justified statements of their guarantees. We provide a formal specification and construction for each layer of an abstract cryptocurrency protocol, and prove that our constructions satisfy their specifications. The contributions of this thesis are centered around two new abstractions: scratch-off puzzles, and the blockchain functionality model. Scratch-off puzzles are a generalization of the Bitcoin mining algorithm, its most iconic and novel design feature. We show how to provide secure upgrades to a cryptocurrency by instantiating the protocol with alternative puzzle schemes. We construct secure puzzles that address important and well-known challenges facing Bitcoin today, including wasted energy and dangerous coalitions. The blockchain functionality is a general-purpose model of a cryptocurrency rooted in the Universal Composability cryptography theory. We use this model to express a wide range of applications, including transparent smart contracts (like those featured in Bitcoin and Ethereum), and also privacy-preserving applications like sealed-bid auctions. We also construct a new protocol compiler, called Hawk, which translates user-provided specifications into privacy-preserving protocols based on zero-knowledge proofs.
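
A minimal sketch of the hash-based proof-of-work idea that scratch-off puzzles generalize (the difficulty and payload below are illustrative toy values, far below Bitcoin's real parameters, and this is not the thesis's construction):

import hashlib
import itertools

def solve_puzzle(payload: bytes, difficulty_bits: int = 16) -> int:
    # Search for a nonce whose hash falls below a target: repeated
    # independent attempts, each succeeding with small probability.
    target = 2 ** (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

nonce = solve_puzzle(b"block header data")
print("found nonce:", nonce)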

Relevance: 90.00%

Abstract:

Part 10: Sustainability and Trust

Relevance: 90.00%

Abstract:

Part 8: Business Strategies Alignment

Relevance: 90.00%

Abstract:

(Deep) neural networks are increasingly being used for various computer vision and pattern recognition tasks due to their strong ability to learn highly discriminative features. However, quantitative analysis of their classification ability and design philosophies are still nebulous. In this work, we use information theory to analyze concatenated restricted Boltzmann machines (RBMs) and propose a mutual-information-based RBM neural network (MI-RBM). We develop a novel pretraining algorithm to maximize the mutual information between RBMs. Extensive experimental results on various classification tasks show the effectiveness of the proposed approach.
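
A hedged sketch of the quantity involved: a plug-in estimate of the mutual information between two binary, hidden-unit-like variables from samples. The arrays here are synthetic stand-ins, not the paper's RBM activations or its pretraining objective.

import numpy as np

def mutual_information(x, y):
    # Plug-in estimate of I(X;Y), in nats, from the empirical joint
    # distribution of two binary sample vectors.
    joint = np.histogram2d(x, y, bins=2)[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
h1 = rng.integers(0, 2, 10_000)                       # synthetic layer-1 activations
h2 = (h1 ^ (rng.random(10_000) < 0.1)).astype(int)    # correlated layer-2 activations
print(mutual_information(h1, h2))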

Relevance: 90.00%

Abstract:

We present a brief historical note on the evolution of line transect sampling and its theoretical developments. We describe line transect sampling theory as proposed by Buckland (1992), and present the most relevant issues concerning the modeling of the detection function. We present a description of the CDM principle (Rissanen, 1978) and its application to histogram density estimation (Kontkanen and Myllymäki, 2006), with a practical example using a mixture of densities. We proceed with the application and estimate the probability of detection and the animal population density in the context of line transect sampling. Two classical distance sampling examples from the literature are analyzed and compared. In order to evaluate the proposed methodology, we carry out a simulation study based on the wooden stakes example, using the half-normal, hazard-rate, exponential and uniform-with-cosine detection functions. The results were obtained using program DISTANCE (Thomas et al., in press) and an algorithm written in the C language, kindly offered by Professor Petri Kontkanen (Department of Computer Science, University of Helsinki). We developed programs to estimate confidence intervals using the bootstrap technique (Efron, 1978). Finally, the results are presented and discussed, with suggestions for future developments.
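
A minimal sketch of the bootstrap confidence-interval step, assuming synthetic perpendicular distances and a simple half-normal stand-in estimator (not the CDM-based estimator of the thesis; line length and sample here are illustrative):

import numpy as np

rng = np.random.default_rng(1)
distances = np.abs(rng.normal(0.0, 10.0, 120))  # synthetic perpendicular distances

def density_estimate(d, length=1000.0):
    # Stand-in estimator: half-normal detection, f(0) = 1 / (sigma * sqrt(pi/2)),
    # density D = n * f(0) / (2 * L) for a single transect of length L.
    sigma = np.sqrt(np.mean(d ** 2))
    f0 = 1.0 / (sigma * np.sqrt(np.pi / 2.0))
    return len(d) * f0 / (2.0 * length)

boot = [density_estimate(rng.choice(distances, size=len(distances), replace=True))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for density: ({lo:.4g}, {hi:.4g})")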

Relevance: 90.00%

Abstract:

We propose a model, based on the work of Brock and Durlauf, which looks at how agents make choices between competing technologies, as a framework for exploring aspects of the economics of the adoption of privacy-enhancing technologies. In order to formulate a model of decision-making among choices of technologies by these agents, we consider the following: context, the setting in which and the purpose for which a given technology is used; requirement, the level of privacy that the technology must provide for an agent to be willing to use the technology in a given context; belief, an agent's perception of the level of privacy provided by a given technology in a given context; and the relative value of privacy, how much an agent cares about privacy in this context and how willing an agent is to trade off privacy for other attributes. We introduce these concepts into the model, admitting heterogeneity among agents in order to capture variations in requirement, belief, and relative value in the population. We illustrate the model with two examples: the possible effects of the recent Apple-FBI case on the adoption of iOS devices; and the effects of the recent revelations about the non-deletion of images on the adoption of Snapchat.
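
As a hedged, simplified sketch of the kind of choice rule such a model can use (the functional form and parameters are illustrative, not the paper's exact specification): an agent's probability of adopting a technology grows with the gap between its believed privacy level and the agent's requirement, weighted by how much the agent values privacy in that context.

import math

def adoption_probability(belief, requirement, relative_value, other_utility=0.0, beta=2.0):
    # Logit choice: utility combines the privacy surplus (belief - requirement),
    # scaled by the relative value of privacy, with other attributes of the technology.
    utility = relative_value * (belief - requirement) + other_utility
    return 1.0 / (1.0 + math.exp(-beta * utility))

# Heterogeneous agents in the same context differ in requirement, belief and relative value.
agents = [
    {"belief": 0.8, "requirement": 0.5, "relative_value": 1.0},
    {"belief": 0.4, "requirement": 0.7, "relative_value": 2.0},
]
for a in agents:
    print(adoption_probability(**a))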

Relevance: 90.00%

Abstract:

Organizations and their environments are complex systems. Such systems are difficult to understand and to predict. Nevertheless, prediction is a fundamental task for business management and for decision-making, which always involves risk. Classical prediction methods (among them linear regression, the Autoregressive Moving Average, and exponential smoothing) rely on assumptions such as linearity and stability in order to remain mathematically and computationally tractable. The limitations of such methods, however, have been demonstrated in various ways. In recent decades, new prediction methods have emerged that seek to embrace, rather than avoid, the complexity of organizational systems and their environments. Among them, the most promising are the bio-inspired prediction methods (e.g., neural networks, genetic/evolutionary algorithms, and artificial immune systems). This article aims to establish a state of the art of current and potential applications of bio-inspired prediction methods in management.

Relevance: 90.00%

Abstract:

Mathematics can be found all over the world, even in what could be considered an unrelated area, like fiber arts. In knitting, crochet, and counted-thread embroidery, we can find concepts of algebra, graph theory, number theory, the geometry of transformations, and symmetry, as well as computer science. For example, many fiber art pieces embody notions related to symmetry groups. In this work, we focus on two areas of Mathematics associated with knitting, crochet, and cross-stitch works: number theory and the geometry of transformations.
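
A small sketch, for illustration only, of the geometry-of-transformations idea on a cross-stitch-like chart (the motif below is made up): generating the images of a motif under the rotations and reflections of the square, i.e. the dihedral group of order 8.

import numpy as np

motif = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [0, 0, 0]])  # a tiny stitched motif: 1 = stitch, 0 = empty cell

# The eight symmetries of the square: four rotations, each with and without a reflection.
symmetries = [np.rot90(m, k) for k in range(4) for m in (motif, np.fliplr(motif))]
distinct = {s.tobytes() for s in symmetries}
print(f"{len(distinct)} distinct images of the motif under the dihedral group")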

Relevance: 90.00%

Abstract:

We consider piecewise-defined differential dynamical systems which can be analysed through symbolic dynamics and transition matrices. There is a continuous regime, where the time flow is characterized by an ordinary differential equation (ODE) with explicit solutions, and a singular regime, where the time flow is characterized by an appropriate transformation. The symbolic coding is given by associating a symbol with each distinct regular and singular system. The transition matrices are then determined as linear approximations to the symbolic dynamics. We analyse the dependence on initial conditions, parameter variation and the occurrence of global strange attractors.
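
A hedged sketch of the bookkeeping step, assuming symbols have already been assigned to the regular and singular regimes (the itinerary below is made up): building the transition matrix that records which symbol is observed to follow which.

import numpy as np

# Hypothetical itinerary: one symbol per visited regime (A, B regular; S singular).
itinerary = list("ABSABASBA")
symbols = sorted(set(itinerary))
index = {s: i for i, s in enumerate(symbols)}

# Transition matrix: entry (i, j) is 1 if symbol j follows symbol i somewhere in the itinerary.
T = np.zeros((len(symbols), len(symbols)), dtype=int)
for a, b in zip(itinerary, itinerary[1:]):
    T[index[a], index[b]] = 1

print(symbols)
print(T)
# For a subshift of finite type, the topological entropy is the log of the spectral radius of T.
growth = max(abs(np.linalg.eigvals(T.astype(float))))
print("largest eigenvalue (word growth rate):", round(float(growth), 3))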

Relevance: 90.00%

Abstract:

Following up on earlier work on the $q\bar{q}$-bound-state problem using a covariant, chiral-symmetric formalism based upon the Covariant Spectator Theory, we study the heavy-light case for both pseudoscalar and vector mesons. Derived directly in Minkowski space, our approach approximates the full Bethe-Salpeter equation, taking into account, effectively, the contributions of both ladder and crossed-ladder diagrams in the kernel. Results for several mass spectra, using a relativistic covariant generalization of a Cornell-plus-constant potential to model the interquark interaction, are given and discussed.
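
For orientation, the nonrelativistic Cornell form that such covariant kernels generalize is, with the added constant,

V(r) = -\frac{4}{3}\,\frac{\alpha_s}{r} + \sigma r + C,

i.e. a Coulomb-like one-gluon-exchange term, a linearly confining term of string tension \sigma, and a constant shift C; the symbols here are the conventional ones, not parameter values from this work.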

Relevance: 90.00%

Abstract:

One of the most visionary goals of Artificial Intelligence is to create a system able to mimic and eventually surpass the intelligence observed in biological systems including, ambitiously, the one observed in humans. The main distinctive strength of humans is their ability to build a deep understanding of the world by learning continuously and drawing from their experiences. This ability, which is found in various degrees in all intelligent biological beings, allows them to adapt and properly react to changes by incrementally expanding and refining their knowledge. Arguably, achieving this ability is one of the main goals of Artificial Intelligence and a cornerstone towards the creation of intelligent artificial agents. Modern Deep Learning approaches allowed researchers and industries to achieve great advancements towards the resolution of many long-standing problems in areas like Computer Vision and Natural Language Processing. However, while this current age of renewed interest in AI allowed for the creation of extremely useful applications, a concerningly limited effort is being directed towards the design of systems able to learn continuously. The biggest problem that hinders an AI system from learning incrementally is the catastrophic forgetting phenomenon. This phenomenon, which was discovered in the 90s, naturally occurs in Deep Learning architectures where classic learning paradigms are applied when learning incrementally from a stream of experiences. This dissertation revolves around the Continual Learning field, a sub-field of Machine Learning research that has recently made a comeback following the renewed interest in Deep Learning approaches. This work will focus on a comprehensive view of continual learning by considering algorithmic, benchmarking, and applicative aspects of this field. This dissertation will also touch on community aspects such as the design and creation of research tools aimed at supporting Continual Learning research, and the theoretical and practical aspects concerning public competitions in this field.
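
A minimal sketch of the catastrophic forgetting phenomenon itself, under synthetic data and a generic classifier (not the dissertation's benchmarks, tools, or methods): train on one task, then on a second task, and observe accuracy on the first task degrade.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Two synthetic "experiences": the input distribution and the decision rule both shift.
X_a = rng.normal(0.0, 1.0, (2000, 20)); y_a = (X_a[:, 0] > 0).astype(int)
X_b = rng.normal(4.0, 1.0, (2000, 20)); y_b = (X_b[:, 1] > 4.0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
for _ in range(50):                      # experience 1: task A only
    clf.partial_fit(X_a, y_a, classes=[0, 1])
acc_before = clf.score(X_a, y_a)

for _ in range(50):                      # experience 2: task B only, no access to task A data
    clf.partial_fit(X_b, y_b)
print(f"task A accuracy before/after task B: {acc_before:.2f} / {clf.score(X_a, y_a):.2f}")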

Relevance: 90.00%

Abstract:

The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics and electronics are all key assets which depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it difficult to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. This becomes even more complex when dealing with advanced functional materials. Their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields. Many techniques and instruments are continuously developed to enable new possibilities, both in the experimental and computational realms. Scientists strive to apply cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and the proliferation of custom data formats and storage procedures, both in experimental and computational research. Results are difficult to find, interpret and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it discusses developing features for specific classes of advanced materials and using them to train machine learning models and accelerate computational predictions for molecular compounds; developing a method for organizing non-homogeneous materials data; automating the process of using device simulations to train machine learning models; and dealing with scattered experimental data, using them to discover new patterns.
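
A hedged sketch of the surrogate-model idea mentioned above (features, target property, and model choice are placeholders, not the thesis's actual descriptors or datasets): fit a fast learner on precomputed features so that new compounds can be screened without rerunning the expensive computation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder descriptors for 500 hypothetical compounds and an expensive-to-compute property.
X = rng.random((500, 12))
y = X[:, 0] * 2.0 - X[:, 3] ** 2 + 0.05 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))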