958 results for Information complexity
Abstract:
Northern permafrost regions will be strongly affected by the projected rise in temperatures. A growing number of infrastructures that were once confidently built on permanently frozen ground are already beginning to show signs of deterioration. The processes triggered by permafrost degradation can cause significant damage to infrastructure and lead to high repair costs. Consequently, the current climate context requires that project planning in northern regions take into account the potential impacts of permafrost degradation. This thesis focuses on the use of geographic information systems (GIS) applied to the assessment of the development potential of land located in permafrost environments. Using a GIS approach, the objective is to develop a methodology for producing risk assessment maps in order to help northern communities better plan their built environment. A multi-scale analysis of the landscape is required and must include the study of surficial deposits, topography, permafrost conditions, vegetation and drainage conditions. The complexity of the set of interactions that shape the landscape is such that it is practically impossible to account for all of them or to predict with certainty the response of the system to disturbances. This thesis also presents some limitations related to the use of GIS in this specific context and explores an innovative method for quantifying uncertainty in risk assessment maps.
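As a rough illustration of the GIS overlay approach described above, the sketch below combines rasterised factor layers into a weighted risk index and classifies it into qualitative risk classes. The layer names, weights, and class thresholds are assumptions for illustration only, not the thesis's actual methodology.

```python
# Minimal sketch of a weighted-overlay risk map, assuming each input has
# already been rasterised to a common grid and rescaled to 0-1 hazard scores.
# Layer names and weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)  # toy raster grid

layers = {                      # hypothetical hazard scores per cell (0 = stable, 1 = hazardous)
    "surficial_deposits": rng.random(shape),
    "slope":              rng.random(shape),
    "ground_ice":         rng.random(shape),
    "vegetation":         rng.random(shape),
    "drainage":           rng.random(shape),
}
weights = {                     # assumed relative importance of each factor
    "surficial_deposits": 0.30,
    "slope":              0.15,
    "ground_ice":         0.30,
    "vegetation":         0.10,
    "drainage":           0.15,
}

# Weighted linear combination produces a continuous risk index per cell.
risk = sum(weights[name] * layer for name, layer in layers.items())

# Classify into three qualitative classes for mapping.
risk_class = np.digitize(risk, bins=[0.4, 0.7])  # 0 = low, 1 = moderate, 2 = high
print(risk_class[:3, :3])
```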
Abstract:
Analysis by reduction is a method used in linguistics for checking the correctness of sentences of natural languages. This method is modelled by restarting automata. Here we study a new type of restarting automaton, the so-called t-sRL-automaton, which is an RL-automaton that is rather restricted in that it has a window of size 1 only, and that it works under a minimal acceptance condition. On the other hand, it is allowed to perform up to t rewrite (that is, delete) steps per cycle. We focus on the descriptional complexity of these automata, establishing two complexity measures that are both based on the description of t-sRL-automata in terms of so-called meta-instructions. We present some hierarchy results as well as a non-recursive trade-off between deterministic 2-sRL-automata and finite-state acceptors.
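For readers unfamiliar with the model, the following minimal sketch simulates the cycles of a t-sRL-automaton given by rewriting meta-instructions (E0, a1, E1, ..., at, Et). The acceptance condition used here (reduction to the empty word), the deterministic order in which instructions are tried, and the example automaton are simplifying assumptions, not details taken from the paper.

```python
# Minimal sketch of the cycles of a t-sRL-automaton described by rewriting
# meta-instructions (E0, a1, E1, ..., at, Et): in one cycle the current word
# is factored as v0·a1·v1·...·at·vt with each vi in the regular language Ei,
# and the letters a1..at are deleted.  The Ei are given here as Python regular
# expressions (assumed to contain no capturing groups of their own).
import re

def apply_cycle(word, meta):
    """Apply one cycle of the meta-instruction meta = (E0, a1, E1, ..., at, Et),
    returning the shortened word, or None if the instruction is not applicable."""
    constraints, letters = meta[0::2], meta[1::2]
    pattern = "".join(f"({e})" + re.escape(a) for e, a in zip(constraints, letters))
    pattern += f"({constraints[-1]})"
    m = re.fullmatch(pattern, word)
    return None if m is None else "".join(m.groups())

def accepts(word, metas, max_cycles=1000):
    """Simplified acceptance: keep applying the first applicable meta-instruction
    until the word is empty (taken here as the minimal acceptance condition)
    or no instruction applies.  A real t-sRL-automaton may be nondeterministic."""
    for _ in range(max_cycles):
        if word == "":
            return True
        for meta in metas:
            nxt = apply_cycle(word, meta)
            if nxt is not None:
                word = nxt
                break
        else:
            return False
    return False

# A hypothetical 2-sRL-automaton that deletes one 'a' and one 'b' per cycle,
# thereby reducing exactly the words a^n b^n to the empty word.
metas = [("a*", "a", "", "b", "b*")]
print(accepts("aaabbb", metas), accepts("aab", metas))   # True False
```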
Abstract:
This thesis attempts to quantify the amount of information needed to learn certain tasks. The tasks chosen vary from learning functions in a Sobolev space using radial basis function networks to learning grammars in the principles and parameters framework of modern linguistic theory. These problems are analyzed from the perspective of computational learning theory and certain unifying perspectives emerge.
Abstract:
Each player in the financial industry, whether a bank, stock exchange, government agency, or insurance company, operates its own financial information system or systems. By its very nature, financial information, like the money that it represents, changes hands. The interoperation of financial information systems is therefore the cornerstone of the financial services they support. E-services frameworks such as web services offer an unprecedented opportunity for the flexible interoperation of financial systems. Naturally, the critical economic role and the complexity of financial information have led to the development of various standards. Yet standards alone are not a panacea: different groups of players use different standards, or different interpretations of the same standard. We believe that the solution lies in the convergence of flexible E-services, such as web services, with the semantically rich meta-data promised by the Semantic Web; a mediation architecture can then be used for the documentation, identification, and resolution of semantic conflicts arising from the interoperation of heterogeneous financial services. In this paper we illustrate the nature of the problem in the Electronic Bill Presentment and Payment (EBPP) industry and the viability of the solution we propose. We describe and analyze the integration of services using four different formats: the IFX, OFX and SWIFT standards, and an example proprietary format. To accomplish this integration we use the COntext INterchange (COIN) framework. The COIN architecture leverages a model of sources' and receivers' contexts, in reference to a rich domain model or ontology, for the description and resolution of semantic heterogeneity.
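As a toy illustration of the kind of semantic mediation COIN performs (not the actual COIN framework or its API), the sketch below converts a bill amount between a source and a receiver whose contexts differ in currency and scale factor; the context attributes, exchange rates, and EBPP roles are assumptions for illustration.

```python
# Toy illustration of context mediation: the same bill amount is exchanged
# between systems whose contexts differ in currency and scale factor, and a
# mediator converts it to the receiver's context.  Contexts, rates, and
# field names are illustrative assumptions, not part of any standard.
from dataclasses import dataclass

@dataclass
class Context:
    currency: str      # e.g. "USD", "EUR"
    scale: int         # 1 = units, 1000 = amounts reported in thousands

RATES_TO_USD = {"USD": 1.0, "EUR": 1.1}   # illustrative static rates

def mediate(amount: float, source: Context, receiver: Context) -> float:
    """Convert an amount from the source context into the receiver context."""
    in_usd_units = amount * source.scale * RATES_TO_USD[source.currency]
    return in_usd_units / RATES_TO_USD[receiver.currency] / receiver.scale

# A hypothetical EBPP biller reporting in thousands of EUR, a payer expecting USD units.
biller = Context(currency="EUR", scale=1000)
payer = Context(currency="USD", scale=1)
print(mediate(12.5, biller, payer))   # 13750.0 USD
```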
Abstract:
The Networks and Complexity in Social Systems course commences with an overview of the nascent field of complex networks, dividing it into three related but distinct strands: Statistical description of large scale networks, viewed as static objects; the dynamic evolution of networks, where now the structure of the network is understood in terms of a growth process; and dynamical processes that take place on fixed networks; that is, "networked dynamical systems". (A fourth area of potential research ties all the previous three strands together under the rubric of co-evolution of networks and dynamics, but very little research has been done in this vein and so it is omitted.) The remainder of the course treats each of the three strands in greater detail, introducing technical knowledge as required, summarizing the research papers that have introduced the principal ideas, and pointing out directions for future development. With regard to networked dynamical systems, the course treats in detail the more specific topic of information propagation in networks, in part because this topic is of great relevance to social science, and in part because it has received the most attention in the literature to date.
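As a minimal illustration of the networked dynamical systems strand, and of information propagation in particular, the sketch below runs an independent-cascade style process on a random graph. The graph model, propagation probability, and seed choice are assumptions for illustration, not material from the course.

```python
# Minimal sketch of an independent-cascade style information propagation
# process on a random network.
import random
import networkx as nx

def cascade(G, seeds, p=0.1, rng=None):
    """Each newly informed node gets one chance to inform each neighbour
    with probability p; returns the set of nodes eventually informed."""
    rng = rng or random.Random(42)
    informed, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in informed and rng.random() < p:
                    informed.add(v)
                    nxt.append(v)
        frontier = nxt
    return informed

G = nx.erdos_renyi_graph(n=500, p=0.02, seed=1)
reached = cascade(G, seeds=[0], p=0.1)
print(f"{len(reached)} of {G.number_of_nodes()} nodes reached")
```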
Abstract:
This dissertation studies the effects of Information and Communication Technologies (ICT) on the banking sector and the payments system. It provides insight into how technology-induced changes occur, by exploring both the nature and scope of the main technology innovations and evidencing their economic implications for banks and payment systems. Some parts of the dissertation are descriptive. They summarise the main technological developments in the field of finance and link them to economic policies. These parts are complemented with sections of the study that focus on assessing the extent of technology application to banking and payment activities. Finally, it also includes some work that borrows from the economic literature on banking. The need for an interdisciplinary approach arises from the complexity of the topic and the rapid pace of change to which it is subject. The first chapter provides an overview of the influence of developments in ICT on the evolution of financial services and international capital flows. We include the main indicators and discuss innovation in the financial sector, exchange rates and international capital flows. The chapter concludes with impact analysis and policy options regarding the international financial architecture, some monetary policy issues and the role of international institutions. The second chapter is a technology assessment study that focuses on the relationship between technology and money. The application of technology to payments systems is transforming the way we use money and, in some instances, is blurring the definition of what constitutes money. This chapter surveys the developments in electronic forms of payment and their relationship to the banking system. It also analyses the challenges posed by electronic money for regulators and policy makers, and in particular the opportunities created by two simultaneous processes: the Economic and Monetary Union and the increasing use of electronic payment instruments. The third chapter deals with the implications of developments in ICT for relationship banking. The financial intermediation literature explains relationship banking as a type of financial intermediation characterised by proprietary information and multiple interactions with customers. This form of banking is important for the financing of small and medium-sized enterprises. We discuss the effects of ICT on the banking sector as a whole and then apply these developments to the case of relationship banking. The fourth chapter is an empirical study of the effects of technology on the banking business, using a sample of data from the Spanish banking industry. The design of the study is based on some of the events described in the previous chapters, and also draws from the economic literature on banking. The study shows that developments in information management have differential effects on wholesale and retail banking activities. Finally, the last chapter is a technology assessment study of electronic payment systems in Spain and the European Union. It contains an analysis of existing payment systems and ongoing or planned initiatives in Spain. It forms part of a broader project comprising a series of country-specific analyses covering ten European countries. The main issues raised across the countries serve as the starting point to discuss the implications of the development of electronic money for regulation and policies, and in particular for monetary policy-making.
Abstract:
Eye tracking has become a prevalent technique for evaluating how users interact and behave with study objects in defined contexts. Common eye-tracking data representation techniques offer valuable input regarding user interaction and eye gaze behaviour, namely through the measurement of fixations and saccades. However, these and other techniques may be insufficient for representing the data acquired in specific studies, namely because of the complexity of the study object being analysed. This paper contributes a summary of data representation and information visualization techniques used in data analysis within different contexts (advertising, websites, television news and video games). Additionally, it presents several methodological approaches that resulted from studies developed, or under development, at CETAC.MEDIA - Communication Sciences and Technologies Research Centre. In the studies described, traditional data representation techniques were insufficient; new approaches were therefore necessary, and new forms of representing data, based on common techniques, were developed with the objective of improving communication and information strategies. For each of these studies, a brief summary of the contribution to its respective area is presented, as well as the data representation techniques used and some of the results obtained.
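As an illustration of the fixation measurement mentioned above, the sketch below implements a basic dispersion-threshold (I-DT) fixation detector over raw gaze samples. The thresholds and the synthetic gaze data are assumptions for illustration and do not reflect the parameters used in the CETAC.MEDIA studies.

```python
# Minimal sketch of dispersion-threshold (I-DT) fixation detection on raw
# gaze samples recorded at a fixed sampling rate.
def detect_fixations(samples, max_dispersion=30.0, min_samples=6):
    """samples: list of (x, y) gaze points.
    Returns (start_index, end_index, centroid) tuples for each fixation."""
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i = [], 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the points stay within the dispersion limit.
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            window = samples[i:j]
            cx = sum(x for x, _ in window) / len(window)
            cy = sum(y for _, y in window) / len(window)
            fixations.append((i, j - 1, (cx, cy)))
            i = j
        else:
            i += 1
    return fixations

# Synthetic example: a fixation around (100, 100) followed by a saccade.
gaze = [(100 + d, 100 - d) for d in range(8)] + [(400, 300), (410, 305)]
print(detect_fixations(gaze))
```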
Abstract:
Complexity is integral to planning today. Everyone and everything seem to be interconnected, causality appears ambiguous, unintended consequences are ubiquitous, and information overload is a constant challenge. The nature of complexity, the consequences of it for society, and the ways in which one might confront it, understand it and deal with it in order to allow for the possibility of planning, are issues increasingly demanding analytical attention. One theoretical framework that can potentially assist planners in this regard is Luhmann's theory of autopoiesis. This article uses insights from Luhmann's ideas to understand the nature of complexity and its reduction, thereby redefining issues in planning, and explores the ways in which management of these issues might be observed in actual planning practice via a reinterpreted case study of the People's Planning Campaign in Kerala, India. Overall, this reinterpretation leads to a different understanding of the scope of planning and planning practice, telling a story about complexity and systemic response. It allows the reinterpretation of otherwise familiar phenomena, both highlighting the empirical relevance of the theory and providing new and original insight into particular dynamics of the case study. This not only provides a greater understanding of the dynamics of complexity, but also produces advice to help planners implement structures and processes that can cope with complexity in practice.
Abstract:
This paper investigates the effect of choices of model structure and scale in development viability appraisal. The paper addresses two questions concerning the application of development appraisal techniques to viability modelling within the UK planning system. The first relates to the extent to which, given intrinsic input uncertainty, the choice of model structure significantly affects model outputs. The second concerns the extent to which, given intrinsic input uncertainty, the level of model complexity significantly affects model outputs. Monte Carlo simulation procedures are applied to a hypothetical development scheme in order to measure the effects of model aggregation and structure on model output variance. It is concluded that, given the particular scheme modelled and unavoidably subjective assumptions of input variance, simple and simplistic models may produce similar outputs to more robust and disaggregated models.
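As a minimal sketch of the kind of Monte Carlo experiment described, the example below samples uncertain inputs for a hypothetical scheme, computes a residual value under an aggregated and a disaggregated model structure, and compares the output variances. All distributions and cost components are assumptions for illustration, not the inputs used in the paper.

```python
# Monte Carlo comparison of an aggregated and a disaggregated residual
# valuation model for a hypothetical development scheme.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Aggregated model: one revenue figure and one all-in cost figure.
gdv = rng.normal(10_000_000, 800_000, n)          # gross development value
total_cost = rng.normal(7_500_000, 600_000, n)    # all-in development cost
residual_simple = gdv - total_cost

# Disaggregated model: the same totals split into independent components.
sales = rng.normal(9_200_000, 700_000, n)
ground_rents = rng.normal(800_000, 150_000, n)
build_cost = rng.normal(5_500_000, 450_000, n)
fees = rng.normal(800_000, 120_000, n)
profit = rng.normal(1_200_000, 150_000, n)
residual_detailed = (sales + ground_rents) - (build_cost + fees + profit)

for name, r in [("simple", residual_simple), ("detailed", residual_detailed)]:
    print(f"{name:>8}: mean = {r.mean():,.0f}  std = {r.std():,.0f}")
```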
Abstract:
Three decades of ongoing executive concern about how to achieve successful alignment between business and information technology show the complexity of such a vital process. Most of the challenges of alignment are related to knowledge and organisational change, and several researchers have introduced a number of mechanisms to address some of these challenges. However, these mechanisms pay less attention to multi-level effects, which results in a limited understanding of alignment across levels. Therefore, we reviewed these challenges from a multi-level learning perspective and found that business and IT alignment is related to balancing exploitation and exploration strategies with the intellectual content of the individual, group and organisational levels.
Abstract:
Prism is a modular classification rule generation method based on the 'separate and conquer' approach, an alternative to rule induction using decision trees, also known as 'divide and conquer'. Prism often achieves a level of classification accuracy similar to that of decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be addressed by pruning methods. For the Prism method, two pruning algorithms have recently been introduced for reducing the overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, since J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure and reduces overfitting to a similar level as the other two algorithms, but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
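For reference, the sketch below computes the J-measure (the average information content of a rule "IF x THEN y") on which J-pruning, Jmax-pruning, and the proposed Jmid-pruning are based; the probability estimates and the example rule counts are assumptions for illustration.

```python
# Sketch of the J-measure of a rule "IF x THEN y".
from math import log2

def j_measure(p_x, p_y, p_y_given_x):
    """J(Y;X=x) = p(x) * [ p(y|x) log2(p(y|x)/p(y))
                          + (1-p(y|x)) log2((1-p(y|x))/(1-p(y))) ]"""
    def term(p, q):
        return 0.0 if p == 0 else p * log2(p / q)
    return p_x * (term(p_y_given_x, p_y) + term(1 - p_y_given_x, 1 - p_y))

# A hypothetical rule covering 40 of 200 instances, 35 of them of the target
# class, where the target class makes up 50 of the 200 instances overall.
p_x = 40 / 200            # coverage of the rule body
p_y = 50 / 200            # prior probability of the target class
p_y_given_x = 35 / 40     # accuracy of the rule
print(round(j_measure(p_x, p_y, p_y_given_x), 4))
```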
Abstract:
An extensive off-line evaluation of the Noah/Single Layer Urban Canopy Model (Noah/SLUCM) urban land-surface model is presented using data from 15 sites to assess (1) the ability of the scheme to reproduce the surface energy balance observed in a range of urban environments, including seasonal changes, and (2) the impact of increasing complexity of input parameter information. Model performance is found to be most dependent on representation of vegetated surface area cover; refinement of other parameter values leads to smaller improvements. Model biases in net all-wave radiation and trade-offs between turbulent heat fluxes are highlighted using an optimization algorithm. Here we use the Urban Zones to characterize Energy partitioning (UZE) as the basis to assign default SLUCM parameter values. A methodology (FRAISE) to assign sites (or areas) to one of these categories based on surface characteristics is evaluated. Using three urban sites from the Basel Urban Boundary Layer Experiment (BUBBLE) dataset, an independent evaluation of the model performance with the parameter values representative of each class is performed. The scheme copes well with both seasonal changes in the surface characteristics and intra-urban heterogeneities in energy flux partitioning, with RMSE performance comparable to similar state-of-the-art models for all fluxes, sites and seasons. The potential of the methodology for high-resolution atmospheric modelling application using the Weather Research and Forecasting (WRF) model is highlighted. This analysis supports the recommendations that (1) three classes are appropriate to characterize the urban environment, and (2) that the parameter values identified should be adopted as default values in WRF.
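As a small illustration of the flux-by-flux evaluation reported, the sketch below computes RMSE between modelled and observed surface energy fluxes; the flux names and numbers are synthetic placeholders, not BUBBLE observations or Noah/SLUCM output.

```python
# Minimal sketch of RMSE evaluation of modelled versus observed fluxes.
import numpy as np

def rmse(obs, mod):
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

observed = {   # W m-2, hypothetical hourly values
    "Q*": [420, 380, 150, -60],    # net all-wave radiation
    "QH": [180, 160, 70, 10],      # sensible heat flux
    "QE": [90, 85, 40, 5],         # latent heat flux
}
modelled = {
    "Q*": [400, 395, 140, -55],
    "QH": [200, 150, 60, 15],
    "QE": [70, 80, 50, 8],
}
for flux in observed:
    print(f"{flux}: RMSE = {rmse(observed[flux], modelled[flux]):.1f} W m-2")
```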
Abstract:
A causal explanation provides information about the causal history of whatever is being explained. However, most causal histories extend back almost infinitely and can be described in almost infinite detail. Causal explanations therefore involve choices about which elements of causal histories to pick out. These choices are pragmatic: they reflect our explanatory interests. When adjudicating between competing causal explanations, we must therefore consider not only questions of epistemic adequacy (whether we have good grounds for identifying certain factors as causes) but also questions of pragmatic adequacy (whether the aspects of the causal history picked out are salient to our explanatory interests). Recognizing that causal explanations differ pragmatically as well as epistemically is crucial for identifying what is at stake in competing explanations of the relative peacefulness of the nineteenth-century Concert system. It is also crucial for understanding how explanations of past events can inform policy prescription.
Abstract:
In the past few years, libraries have started to design public programs that educate patrons about different tools and techniques to protect personal privacy. But do end user solutions provide adequate safeguards against surveillance by corporate and government actors? What does a comprehensive plan for privacy entail in order that libraries live up to their privacy values? In this paper, the authors discuss the complexity of surveillance architecture that the library institution might confront when seeking to defend the privacy rights of patrons. This architecture consists of three main parts: physical or material aspects, logical characteristics, and social factors of information and communication flows in the library setting. For each category, the authors will present short case studies that are culled from practitioner experience, research, and public discourse. The case studies probe the challenges faced by the library—not only when making hardware and software choices, but also choices related to staffing and program design. The paper shows that privacy choices intersect not only with free speech and chilling effects, but also with questions that concern intellectual property, organizational development, civic engagement, technological innovation, public infrastructure, and more. The paper ends with discussion of what libraries will require in order to sustain and improve efforts to serve as stewards of privacy in the 21st century.