992 results for Linux From Scratch
Abstract:
Orthodox economic models hold only if one accepts the ceteris paribus assumption. That assumption, in turn, requires treating cultures as uniform and exogenous to the economy, and consumers as sharing, among other features, an identical rational structure. Similar optimizing assumptions about mobility and economic geography are made when dealing with internal demographic flows, international migration and, in general, any expression of human activity. It is therefore necessary to treat each event as "almost one-off", even though some aspects show similarities that keep each analysis from having to start theoretically from scratch. What determines what is different? On the one hand, the historical basis: the reasons behind the social construction of the market, of social thought since antiquity, and of government. On the other, the decisions that agents and agencies make upon that historical basis, at their own level, degree and intensity, which through iteration consolidate or change the established social interactions and/or institutions, in accordance with rules, routines and habits. This is why it is worth developing the notion of "neighborhood effects" to help understand social mobility in the Department of Cundinamarca, and thereby to contribute to answering the question of why people move, and with what effect between one territory and another and upon itself.
Abstract:
Many emerging Internet applications, such as TV over the Internet, radio over the Internet and multipoint video streaming, have resource requirements such as consumed bandwidth, end-to-end delay and packet loss rate. It is therefore necessary to formulate a proposal that specifies and provides, for this type of application, the resources needed for proper operation. In this thesis we propose a multi-objective traffic engineering scheme that uses different distribution trees for many multicast flows. We use a multipath approach for each egress node to obtain a multi-tree approach and, through it, create different multicast trees. Moreover, our proposal solves for the fraction of the traffic split across the multiple trees. The proposal can be applied in MPLS networks by establishing explicit routes for multicast events. The first objective is to combine the following weighted objectives into a single aggregated metric: maximum link utilization, hop count, total bandwidth consumption and total end-to-end delay. We have formulated this multi-objective function (the MHDB-S model), and the results obtained show that the various weighted objectives are reduced and the maximum link utilization is minimized. The problem is NP-hard, so an algorithm is proposed to optimize the different objectives; the behavior we obtained with this algorithm is similar to that obtained with the model. During multicast transmission, egress nodes can normally leave or join the tree, so in this thesis we also propose a multi-objective traffic engineering scheme that uses different trees for dynamic multicast groups (in which the egress nodes may change during the lifetime of the connection). Recomputing a multicast tree from scratch can consume considerable CPU time, and all communications using the tree would be temporarily interrupted. To alleviate these drawbacks, we propose an optimization model (the dynamic MHDB-D model) that reuses the multicast trees previously computed with the static MHDB-S model and adds the new egress nodes. Solving the analytical model with the weighted-sum method is not necessarily correct, because the solution space may be non-convex and some solutions may therefore not be found. In addition, other types of objectives appear in other research works. For these reasons, a new model called GMM is proposed, together with a new algorithm based on Multi-Objective Evolutionary Algorithms, inspired by the Strength Pareto Evolutionary Algorithm (SPEA). To handle the dynamic case with this generalized model, we propose a new dynamic model and a computational solution using probabilistic Breadth-First Search (BFS). Finally, to evaluate the proposed optimization scheme, we ran various tests and simulations.
The main contributions of this thesis are the taxonomy; the multi-objective optimization models for the static and dynamic cases of multicast transmission (MHDB-S and MHDB-D); the algorithms that solve these models computationally; and, finally, the generalized models for the static and dynamic cases (GMM and dynamic GMM), together with the computational proposals that solve them using MOEAs and probabilistic BFS.
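To make the aggregated metric concrete, the following is a minimal Python sketch of how the four weighted objectives of the MHDB-S model could be combined into a single score for a candidate set of multicast trees. The weights and the `links`/`load`/`capacity`/`delay` field names are illustrative assumptions, not the thesis's actual formulation.

```python
# Hedged sketch: weighted-sum aggregation of the four MHDB-S objectives.
# Field names and weights are illustrative assumptions, not the thesis's code.
# In practice each term would be normalized before weighting, since the
# objectives are measured in different units.

def aggregated_metric(tree_set, w_util=0.4, w_hops=0.2, w_bw=0.2, w_delay=0.2):
    """Score a candidate set of multicast trees; lower is better."""
    links = [link for tree in tree_set for link in tree["links"]]
    max_link_utilization = max(l["load"] / l["capacity"] for l in links)
    total_hops = len(links)
    total_bandwidth = sum(l["load"] for l in links)
    total_delay = sum(l["delay"] for l in links)
    return (w_util * max_link_utilization + w_hops * total_hops
            + w_bw * total_bandwidth + w_delay * total_delay)

# A heuristic (or an evolutionary algorithm, as in the GMM model) would then
# keep the candidate with the lowest aggregated score:
# best = min(candidate_tree_sets, key=aggregated_metric)
```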
Abstract:
It is no news that the current paradigm is Internet-based: more and more applications are changing their business model with respect to licensing and maintenance, offering the end user an application that is more affordable in terms of licensing and maintenance costs, since the applications are distributed, eliminating the capital and operational costs inherent to a centralized architecture. With the spread of Internet-based Application Programming Interfaces (APIs), developers can now build applications that use functionality provided by third parties without having to program it from scratch. In this context, the Google® application APIs allow applications to be distributed to a very wide market and integrated with productivity tools, providing an opportunity for the diffusion of ideas and concepts. This work describes the design and implementation of a platform, built with HTML5, Javascript, PHP and MySQL and integrated with Google® Apps, whose goal is to let the user prepare budgets, from the calculation of composite cost prices through the preparation of sale prices to the drafting of the bill of quantities and the corresponding schedule.
Abstract:
Cloud imagery is not currently used in numerical weather prediction (NWP) to extract the type of dynamical information that experienced forecasters have extracted subjectively for many years. For example, rapidly developing mid-latitude cyclones have characteristic signatures in cloud imagery that are most fully appreciated from a sequence of images rather than from a single image. The Met Office is currently developing a technique to extract dynamical development information from satellite imagery using their full incremental 4D-Var (four-dimensional variational data assimilation) system. We investigate a simplified form of this technique in a fully nonlinear framework. We convert information on the vertical wind field, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, into a single brightness temperature by defining a 2D (vertical and time) variational assimilation testbed. The profiles of w, T and qt are updated using a simple vertical advection scheme. We define a basic cloud scheme to obtain the fractional cloud amount and, combining this with the temperature field, convert the information into a brightness temperature via a simple radiative transfer scheme we developed. With the exception of some matrix inversion routines, all our code is developed from scratch. Throughout the development process we test all aspects of our 2D assimilation system, and then run identical twin experiments to try to recover information on the vertical velocity from a sequence of observations of brightness temperature. This thesis contains a comprehensive description of our nonlinear models and assimilation system, and the first experimental results.
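For orientation, the incremental 4D-Var technique referred to above minimizes a cost function of the standard textbook form below; this is the generic formulation, not necessarily the exact one used in the thesis. Here $\delta x_0$ is the increment to the background state, $\mathbf{B}$ and $\mathbf{R}_i$ are the background and observation error covariances, $\mathbf{M}_i$ is the tangent-linear model, $\mathbf{H}_i$ the linearized observation operator (in this testbed, the map from the w, T and qt profiles to brightness temperature), and $d_i$ the innovations.

```latex
J(\delta x_0) = \tfrac{1}{2}\,\delta x_0^{\mathrm{T}} \mathbf{B}^{-1}\,\delta x_0
  + \tfrac{1}{2} \sum_{i=0}^{N}
    (\mathbf{H}_i \mathbf{M}_i\,\delta x_0 - d_i)^{\mathrm{T}}
    \mathbf{R}_i^{-1}
    (\mathbf{H}_i \mathbf{M}_i\,\delta x_0 - d_i)
```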
Abstract:
In an era of fragmenting audiences and diversified viewing platforms, youth television needs to move fast and make a lot of noise in order to capture and maintain the attention of the teenage viewer. The British ensemble youth drama Skins (E4, 2007-2013) calls attention to itself with its high doses of drugs, chaotic parties and casual attitudes towards sexuality. It also moves quickly, shedding its cast every two seasons as they graduate from school, then renewing itself with a fresh generation of 16-year-old characters: three cycles in total. This essay explores the challenges of maintaining audience connections whilst resetting the narrative clock with each cycle. I suggest that the development of the Skins brand was key to the programme's success. Branding is particularly important for an audience demographic who increasingly consume their television outside of broadcast flow, and essential for a programme which renews its cast every two years. The Skins brand operates as a framework and as the central audience draw, giving the programme the strength to maintain audience connections when it 'graduates' the characters viewers identify with at the close of each cycle and starts again from scratch. This essay explores how the Skins brand constructs a cohesive identity across its multiple generations, yet also considers how the cyclic form poses challenges for the programme's representations and narratives. The cyclic form allows Skins to repeatedly reach out to a new audience who come of age alongside each new generation, and to reflect shifts in British youth culture. Thus Skins remains ever-youthful, seeking to maintain an at times painfully hip identity. Yet the programme has a somewhat schizophrenic identity, torn between its roots in British realist drama and surrealist comedy and an escapist, aspirational glamour that shows the influence of US teen TV. This combination results in a tendency towards a heightened melodrama at odds with Skins' claims to authenticity (its much-vaunted teenage advisors and young writers), with the cyclic structure serving to amplify the programme's excessive tendencies. Each cycle wrestles with a need for continuity and familiarity, partly maintained through brand, aesthetic and setting, and a desire for freshness and originality, to assert difference from what has gone before. I suggest that the inevitable need for each cycle to 'top' what has gone before results in a move away from character-based intimacy and the everyday towards high-stakes drama and violence which sits uncomfortably within British youth television.
Abstract:
The outcome of the UK's referendum on continued EU membership is, at the time of writing, uncertain, and the consequences of a vote to remain ('Bremain') or leave ('Brexit') difficult to predict. Polarised views have been voiced about the impact of Brexit on UK agriculture, and on the nature, and level of funding, of future policy. Policymakers would not have the luxury of devising a new policy from scratch. WTO rules and commitments, the nature of any future accord with the EU, budget constraints, the rather different perspectives of the UK's devolved administrations in Scotland, Wales and Northern Ireland, and the expectations of farmers, landowners and the environmental lobby will all impact the policymaking process. The WTO dimension, and the UK's future relationship with the EU, are particularly difficult to predict and, some commentators believe, may take years to resolve. Brexit's impact on the future CAP is also unclear. A vote to remain within the EU would not necessarily assuage the Eurosceptics' criticisms of the EU, or alter the UK's perception of the CAP. Whatever the outcome, future agricultural, food and rural land use policies will remain key preoccupations of European governments.
Abstract:
We present an efficient numerical methodology for the 3D computation of incompressible multi-phase flows described by conservative phase-field models. We focus here on the case of density-matched fluids with different viscosity (Model H). The numerical method employs adaptive mesh refinement (AMR) in concert with an efficient semi-implicit time discretization strategy and a linear, multi-level multigrid to relax high-order stability constraints and to capture the flow's disparate scales at optimal cost. Only five linear solvers are needed per time step. Moreover, all the adaptive methodology is constructed from scratch to allow a systematic investigation of the key aspects of AMR in a conservative, phase-field setting. We validate the method and demonstrate its capabilities and efficacy with important examples of drop deformation, Kelvin-Helmholtz instability, and flow-induced drop coalescence.
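For reference, conservative phase-field models of the Model H type couple the incompressible Navier-Stokes equations to an advected Cahn-Hilliard equation for the phase variable; the standard form below is given for orientation and is not necessarily the paper's exact formulation. Here $\phi$ is the phase field, $\mathbf{u}$ the divergence-free velocity, $M$ the mobility, $f$ a double-well potential and $\epsilon$ a measure of interface thickness.

```latex
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi
  = \nabla\cdot\!\left(M\,\nabla\mu\right),
\qquad
\mu = f'(\phi) - \epsilon^{2}\nabla^{2}\phi,
\qquad
\nabla\cdot\mathbf{u} = 0
```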
Abstract:
This thesis examines the concept of tie strength and investigates how it can be determined on the fly in the Facebook Social Network Service (SNS) by a system constructed using the standard developer API. We analyze and compare two different models: the first is an adaptation of previous literature (Gilbert & Karahalios, 2009); the second is built from scratch and based on a dataset obtained from an online survey. This survey took the form of a Facebook application that collected subjective ratings of the strength of 1642 ties (friendships) from 85 different participants. The new tie strength model was built from this dataset using multiple regression. We found that the new model performed slightly better than the adapted model, with the added advantage of being easier to implement. In conclusion, this thesis has shown that tie strength models capable of serving as useful friendship predictors are easily implementable in a Facebook application via standard API calls. In addition to a new tie strength model, the methodology adopted in this work permitted observation of the weight of each predictive variable used in the model, increasing the visibility of the factors that affect people's relationships in online social networks.
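As an illustration of the modeling step, a multiple-regression tie-strength predictor of the kind described can be fitted in a few lines of Python; the predictor variables below (wall posts, mutual friends, days since last contact) and the sample values are plausible stand-ins, not the thesis's actual feature set or data.

```python
import numpy as np

# Hedged sketch: fit a linear tie-strength model by ordinary least squares.
# Feature names and data are illustrative assumptions, not the thesis's.

# Each row: [wall_posts, mutual_friends, days_since_last_contact]
X = np.array([[12, 35, 3], [0, 4, 210], [5, 18, 14], [1, 2, 90]], dtype=float)
y = np.array([0.9, 0.1, 0.6, 0.2])  # subjective tie-strength ratings in [0, 1]

X1 = np.hstack([np.ones((len(X), 1)), X])      # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)  # OLS fit

def tie_strength(wall_posts, mutual_friends, days_since_contact):
    """Predict tie strength for a friendship from its feature values."""
    return coef @ np.array([1.0, wall_posts, mutual_friends, days_since_contact])

print(tie_strength(8, 25, 7))
```

Inspecting `coef` corresponds to the observation of the weight of each predictive variable mentioned in the abstract.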
Abstract:
In this work, we propose a two-stage algorithm for real-time fault detection and identification in industrial plants. Our proposal is based on the analysis of selected features using recursive density estimation and a new evolving classifier algorithm. More specifically, the proposed approach for the detection stage is based on the concept of density in the data space, which is not the same as a probability density function but is a very useful measure for abnormality/outlier detection. This density can be expressed by a Cauchy function and calculated recursively, which makes it memory- and computationally efficient and, therefore, suitable for on-line applications. The identification/diagnosis stage is based on a self-developing (evolving) fuzzy rule-based classifier system proposed in this work, called AutoClass. An important property of AutoClass is that it can start learning "from scratch": not only do the fuzzy rules not need to be prespecified, but neither does the number of classes (the number may grow, with new class labels being added by the on-line learning process), in a fully unsupervised manner. If an initial rule base exists, AutoClass can evolve/develop it further based on newly arrived faulty-state data. To validate our proposal, we present experimental results from a didactic level-control process, where the control and error signals are used as features for the fault detection and identification systems; the approach is generic, however, and the number of features can be large thanks to the computationally lean methodology, since covariance or more complex calculations, as well as storage of old data, are not required. The results obtained are significantly better than those of the traditional approaches used for comparison.
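The Cauchy-type density the detection stage relies on can indeed be computed with constant memory; below is a minimal Python sketch of the standard recursive density estimation (RDE) updates, assuming the usual formulation from the evolving-systems literature rather than the paper's exact code.

```python
import numpy as np

class RecursiveDensity:
    """Recursive density estimation (RDE) with a Cauchy-type kernel.

    Keeps only a running mean and a running mean of squared norms, so
    memory and per-sample cost are constant -- suitable for on-line fault
    detection. Standard formulation from the evolving-systems literature;
    the paper's exact variant may differ.
    """
    def __init__(self, dim):
        self.k = 0
        self.mean = np.zeros(dim)
        self.mean_sq_norm = 0.0

    def update(self, x):
        self.k += 1
        a = (self.k - 1) / self.k
        self.mean = a * self.mean + x / self.k
        self.mean_sq_norm = a * self.mean_sq_norm + (x @ x) / self.k
        # Cauchy-form density: close to 1 for typical samples, low for outliers.
        d = x - self.mean
        return 1.0 / (1.0 + d @ d + self.mean_sq_norm - self.mean @ self.mean)

# Usage: flag a potential fault when the density drops below a threshold.
rde = RecursiveDensity(dim=2)
for sample in np.random.randn(100, 2):
    density = rde.update(sample)
```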
Abstract:
This paper reports on the ongoing project (running since 2002) of developing a wordnet for Brazilian Portuguese (Wordnet.Br) from scratch. In particular, it describes the process of constructing the Wordnet.Br core database, which has 44,000 words organized into 18,500 synsets. It then briefly sketches the project's overall methodology, its lexical resources, the synset compilation process, and the Wordnet.Br editor, a GUI (graphical user interface) that aids the linguist in compiling and maintaining the Wordnet.Br. It concludes with the planned further work.
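By way of illustration, the core word-to-synset organization described above can be pictured with a minimal in-memory structure; this is a generic sketch, not the Wordnet.Br database schema, and the example entries are invented.

```python
# Hedged sketch: a minimal in-memory picture of words grouped into synsets.
# Generic illustration only -- not the Wordnet.Br database schema.

synsets = {
    "syn-001": {"pos": "verb", "gloss": "move fast on foot",
                "words": ["correr", "disparar"]},
    "syn-002": {"pos": "noun", "gloss": "drawing instrument",
                "words": ["lápis", "lapiseira"]},
}

# Inverted index: each word points at the synsets containing it, so one
# word can belong to several synsets (one per sense).
word_index = {}
for sid, syn in synsets.items():
    for w in syn["words"]:
        word_index.setdefault(w, []).append(sid)

print(word_index["correr"])  # -> ['syn-001']
```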
Abstract:
Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique at the basis of the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forth to deliver commands to the system, or back to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of which this thesis's author is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions have spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers, to which this thesis's author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below. Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset as mathematicians like to write them down on paper is a challenging task, and one neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in familiar mathematical notation. Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to change. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management: such interfaces do not permit positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behavior during script replay. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner. Extensible yet meaningful notation. Proof assistant users often need to create new mathematical notation to ease the use of new concepts. The framework used in Matita for extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms.
With our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too. Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which can independently try to complete open sub-goals of a proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can apply them to the current proof interactively or automatically. Another innovative aspect of Matita, only marginally touched on by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis, and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
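To give a flavour of why fine-grained evaluation of tacticals matters, here is a toy Python model of the idea: classic LCF-style sequencing runs a composed tactical atomically, while an explicit agenda lets execution stop between individual tactic applications. This is an illustration of the concept only, not Matita's tinycals implementation; all names are invented.

```python
# Hedged toy model of step-by-step tactical evaluation. A goal is a string;
# a tactic maps one goal to the list of subgoals it leaves open.

def split(goal):       # toy tactic: opens two subgoals
    return [goal + ".left", goal + ".right"]

def assumption(goal):  # toy tactic: closes the goal
    return []

# Classic LCF-style ';' runs the whole composed tactical atomically:
def then_(t1, t2):
    return lambda goal: [g2 for g1 in t1(goal) for g2 in t2(g1)]

# Fine-grained alternative: an explicit agenda of (goal, pending tactics),
# advanced one tactic application per user-visible step, so the execution
# point can stop *inside* the composed tactical.
def step(agenda):
    (goal, tactics), rest = agenda[0], agenda[1:]
    if not tactics:
        return rest
    subgoals = tactics[0](goal)
    return [(g, tactics[1:]) for g in subgoals] + rest

agenda = [("G", [split, assumption])]   # 'split; assumption'
while agenda:
    agenda = step(agenda)               # one small step at a time
```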
Abstract:
The present PhD thesis summarizes two examples of research in microfluidics. In both, water was the subject of interest: once in the liquid state (droplets adsorbed on chemically functionalized surfaces) and once in the solid state (ice snowflakes and their fractal behaviour). The first problem deals with a slipping nano-droplet of water adsorbed on a surface with photo-switchable wettability characteristics. The main focus was on identifying the underlying driving forces and mechanical principles at the molecular level of detail. Molecular Dynamics simulation was employed as the investigative tool, owing to its record of successfully describing the microscopic behaviour of liquids at interfaces. To reproduce the specialized surface on which a water droplet can effectively "walk", a new implicit surface potential was developed. Applying this new method, the experimentally observed droplet slippage could be reproduced successfully. Next, the movement of the droplet was analyzed under various conditions, with emphasis on the behaviour of the water molecules in contact with the surface; the main objective was to identify the driving forces and molecular mechanisms underlying the slippage process. The second part of this thesis is concerned with theoretical studies of snowflake melting. In the present work, snowflakes are represented by filled von Koch-like fractals of mesoscopic beads. A new algorithm, based on Monte Carlo and Random Walk Simulations (MCRWS), was developed from scratch to simulate the thermal collapse of fractal structures. The developed method was applied to and compared with Molecular Dynamics simulations of the melting of ice snowflake crystals, and new parameters were derived from this comparison. Larger snow fractals were then studied by following their time evolution at different temperatures, again using the developed MCRWS method. This was accompanied by an in-depth analysis of fractal properties (border length and gyration radius) in order to shed light on the dynamics of the melting process.
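To give a flavour of the Monte Carlo part of such a scheme, the Python sketch below erodes a 2D bead cluster by removing beads with a Boltzmann-type acceptance rule, so that weakly bonded boundary beads detach first. It is a generic illustration under assumed parameters and reduced units, not the thesis's MCRWS algorithm.

```python
import math, random

# Hedged sketch: thermal erosion of a 2D bead cluster by Metropolis-style
# removal moves. Assumed parameters and reduced units -- not the thesis's code.

J = 1.0                      # bond energy per occupied neighbour (assumed)
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def melt_step(cluster, temperature):
    """Try to remove one randomly chosen bead; fewer bonds -> easier removal."""
    bead = random.choice(list(cluster))
    bonds = sum((bead[0] + dx, bead[1] + dy) in cluster for dx, dy in NEIGHBOURS)
    if random.random() < math.exp(-J * bonds / temperature):
        cluster.discard(bead)

# Start from a filled square as a stand-in for a filled fractal and let it
# collapse; border length and gyration radius could be tracked each sweep.
cluster = {(x, y) for x in range(20) for y in range(20)}
for _ in range(50_000):
    if not cluster:
        break
    melt_step(cluster, temperature=0.8)
print(len(cluster), "beads remain")
```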
Abstract:
With the computational tools available to industry today, in particular CAE software, it is possible to simulate natural physical phenomena more than satisfactorily. The cooling of a polymer part can also be simulated, provided that all the material data and boundary conditions are known. As far as material data are concerned, polymer producers can very often supply them, whereas the boundary conditions must be mastered by the owner of the technology. In practice, this knowledge is at best incomplete, so assumptions are made to fill the gaps. One of the strongest assumptions made is that of perfect conduction at the interface between two bodies. This constraint is too strong compared with the precision of all the other data required by the simulation, so it was decided to carry out an experimental campaign to estimate the resistance to heat flux at the polymer-mould interface, i.e. to determine the thermal contact conductance. The work carried out in this doctoral thesis aims to make a significant contribution to the development and improvement of the thermal efficiency of moulds for forming thermoplastic polymers with compression technology.
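For reference, the thermal contact conductance that the experimental campaign sets out to determine is conventionally defined as the ratio of the heat flux $q$ crossing the interface to the temperature jump across it; this is the standard definition, stated here for orientation.

```latex
h_c = \frac{q}{T_{\text{polymer}} - T_{\text{mould}}}
```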
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of the language input, which can be characterized as a mapping of the utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1). The idea underlying intelligent lexical acquisition systems is to modify this schematic formula so that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2). Moreover, the thesis claims that a system can only be considered intelligent if it not only makes maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To name four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special data alignment technique; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language may render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. The work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the Learn-Alpha design rule is postulated. The second chapter outlines the theory underlying Learn-Alpha and introduces the notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the findings, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
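The two schematic formulas (1) and (2) above amount to a parse-then-update loop; a minimal runnable Python sketch of that control flow follows, with all function names and the toy tagging behaviour invented for illustration. A real system would infer lexical properties from the structures in S and, under the Learn-Alpha design rule, would also be able to retract falsely acquired entries.

```python
# Hedged sketch of the acquisition loop behind formulas (1) and (2).
# All names and the toy tagging behaviour are invented for illustration.

def parse(grammar, lexicon, utterance):
    # Toy stand-in for (1) G + L + C -> S: a "structure" is just the list
    # of (word, category) pairs, with unknown words left unresolved.
    return [(w, lexicon.get(w, "unknown")) for w in utterance.split()]

def extract_lexical_facts(structures):
    # Toy stand-in: naively hypothesize that unresolved words are nouns.
    return [(w, "noun") for w, cat in structures if cat == "unknown"]

def revise(lexicon, fact):
    # A real system could also RETRACT a falsely acquired property here.
    word, prop = fact
    updated = dict(lexicon)
    updated[word] = prop
    return updated

def acquire(grammar, lexicon, corpus):
    for utterance in corpus:
        structures = parse(grammar, lexicon, utterance)   # (1) G + L + C -> S
        for fact in extract_lexical_facts(structures):
            lexicon = revise(lexicon, fact)               # (2) G + L + S -> L'
    return lexicon

print(acquire(None, {"the": "det"}, ["the cat sat"]))
```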
Abstract:
With research on Wireless Sensor Networks (WSNs) having become more and more mature over the past five years, researchers from universities all over the world have set up testbeds of wireless sensor networks, in most cases to test and evaluate the real-world behavior of newly developed WSN protocol mechanisms. Although these testbeds differ heavily in the employed sensor node types and the general architectural set-up, they all have similar requirements with respect to management and scheduling functionality: like every shared resource, a testbed requires a notion of users, resource reservation features, support for reprogramming and reconfiguring the nodes, provisions to debug and remotely reset sensor nodes in case of node failures, and a solution for collecting and storing experimental data. The TARWIS management architecture presented in this paper aims to provide these functionalities independently of node type and node operating system. TARWIS has been designed as a re-usable management solution for research- and/or education-oriented wireless sensor network testbeds, relieving researchers who intend to deploy a testbed of the burden of implementing their own scheduling and testbed management solutions from scratch.