119 results for Programmer
Abstract:
The goal of water management is to maintain good status in water bodies and to improve the status of waters where it has deteriorated. A second planning round of water management is under way. In this round the existing plans will be reviewed and water management plans up to 2021 will be drawn up. On the Finnish side, the Tana-Neiden-Pasvik water region also extends into Norwegian and Russian areas, and the plans must be coordinated. This document contains the planned work programme and the key issues for the water region on the Finnish side. The document is circulated for consultation from 15 June to 17 December 2012. To support the planning, feedback is requested on, among other things, the implementation and timetable of the planning and the opportunities to influence it; matters related to the preparation and content of the environmental report; key problems and development needs concerning water status; means and measures for improving the status of water bodies; and possibilities for financing and cooperation. Changes in the operating environment affect how the key issues for the second planning period are defined and weighted. New legislation has been introduced, and since the first planning round several new programmes and strategies affecting water management have been implemented or launched, for example a water improvement strategy and a strategy for fish migration routes. New mines will be opened in the water region. The largest changes have come as a result of mining on the Russian side of the Pasvik river. Tightening public finances reduce the public sector's possibilities to finance water management. For example, continued work on watercourse restoration will in the future require that new actors can be engaged in it. The key issues for water management in the Tana-Neiden-Pasvik water region concern improving water supply for residential areas and protecting groundwater; managing loads on watercourses; reducing the drawbacks of watercourse development and regulation; preventing the spread of invasive species and fish diseases; and setting objectives for water management and flood risk management. More information on water management is available at: www.ymparisto.fi/lap/vesienhoito.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized, in the context of design space exploration, by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
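As a rough illustration of the execution model described above (a sketch, not code from the thesis), the following C++ snippet models a single dataflow actor connected to FIFO queues. The actor fires only when its firing rule is satisfied, and a firing consumes input tokens and produces an output token with no other communication; the Queue and AddActor names are hypothetical.

#include <deque>
#include <iostream>
#include <optional>

// A FIFO queue modelling a dataflow edge (the only allowed communication).
template <typename T>
struct Queue {
    std::deque<T> tokens;
    void push(T v) { tokens.push_back(v); }
    std::optional<T> pop() {
        if (tokens.empty()) return std::nullopt;
        T v = tokens.front();
        tokens.pop_front();
        return v;
    }
    std::size_t size() const { return tokens.size(); }
};

// A node (actor) that adds two input tokens; its firing rule requires
// at least one token on each input edge.
struct AddActor {
    Queue<int>& in_a;
    Queue<int>& in_b;
    Queue<int>& out;

    bool can_fire() const { return in_a.size() >= 1 && in_b.size() >= 1; }

    // Firing: consume inputs, produce an output, no other side effects.
    void fire() {
        int a = *in_a.pop();
        int b = *in_b.pop();
        out.push(a + b);
    }
};

int main() {
    Queue<int> a, b, result;
    AddActor add{a, b, result};

    a.push(1); a.push(2);
    b.push(10);

    // A trivial dynamic scheduler: fire whenever the firing rule holds.
    while (add.can_fire()) add.fire();

    while (auto v = result.pop()) std::cout << *v << '\n';  // prints 11
}

The dynamic scheduler here re-evaluates the firing rule before every firing; that repeated evaluation is exactly the overhead that quasi-static scheduling tries to pre-compute for as many firings as possible.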
Abstract:
This thesis will introduce a new strongly typed programming language utilizing Self types, named Win--*Foy, along with a suitable user interface designed specifically to highlight language features. The need for such a programming language is based on deficiencies found in programming languages that support both Self types and subtyping. Subtyping is a concept that is taken for granted by most software engineers programming in object-oriented languages. Subtyping supports subsumption, but it does not support the inheritance of binary methods. Binary methods contain an argument of type Self, the same type as the object itself, in a contravariant position, i.e. as a parameter. There are several arguments in favour of introducing Self types into a programming language [1]. This rationale led to the development of a relation that has become known as matching [4, 5]. The matching relation does not support subsumption; however, it does support the inheritance of binary methods. Two forms of matching have been proposed [1]. Specifically, these relations are known as higher-order matching and F-bound matching. Previous research on these relations indicates that the higher-order matching relation is both reflexive and transitive, whereas F-bound matching is reflexive but not transitive [7]. The higher-order matching relation provides significant flexibility regarding inheritance of methods that utilize or return values of the same type. This flexibility, in certain situations, can restrict the programmer from defining specific classes and methods which are based on constant values [21]. For this reason, the type This is used as a second reference to the type of the object that cannot, contrary to Self, be specialized in subclasses. F-bound matching allows a programmer to define a function that will work for all types A', a subtype of an upper bound function of type A, with the result type being dependent on A'. The use of parametric polymorphism in F-bound matching provides a connection to subtyping in object-oriented languages. This thesis will contain two main sections. Firstly, significant details concerning deficiencies of the subtype relation and the need to introduce higher-order and F-bound matching relations into programming languages will be explored. Secondly, a new programming language named Win--*Foy Functional Object-Oriented Programming Language has been created, along with a suitable user interface, in order to facilitate experimentation by programmers regarding the matching relation. The construction of the programming language and the user interface will be explained in detail.
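As an informal illustration (not taken from the thesis, and using C++ rather than Win--*Foy), the following sketch emulates a Self type with the curiously recurring template pattern, so that the binary method equal takes an argument of the receiver's own type; the Point and ColorPoint names are hypothetical.

#include <iostream>

// `Self` plays the role of the receiver's own type, so the binary method
// `equal` takes an argument of the same type as the object itself.
template <typename Self>
struct Point {
    int x = 0, y = 0;
    bool equal(const Self& other) const {        // binary method: Self in parameter position
        return x == other.x && y == other.y;
    }
};

struct ColorPoint : Point<ColorPoint> {
    int color = 0;
    bool equal(const ColorPoint& other) const {  // inherited binary method, specialized to compare color too
        return Point<ColorPoint>::equal(other) && color == other.color;
    }
};

int main() {
    ColorPoint a, b;
    a.color = 1;
    std::cout << std::boolalpha << a.equal(b) << '\n';  // false: colors differ
}

The price is visible in the sketch: there is no single Point type under which all point classes can be subsumed (Point<ColorPoint> and other instantiations are unrelated types), which mirrors how the matching relation supports inheritance of binary methods but gives up subsumption.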
Abstract:
Distributed under permissive licences that grant licensees rights of use, modification, and redistribution, free software is built on a decentralized development model. These characteristics raise numerous challenges for the legal community, particularly with respect to civil liability. Developers therefore wonder under what circumstances their civil liability may be engaged following a failure of their free software. Likewise, they question whether such liability can be applied to a large number of developers scattered around the globe. The analysis presented shows that the law, as it currently stands, is able to resolve most of the problems relating to the determination and application of civil liability in the area of free software. The rules of civil liability therefore represent a potential risk for free software developers, even though they are relatively well protected by the legal and factual contexts.
Abstract:
A principal objective of software engineering is to be able to produce complex, large, and reliable software within a reasonable time. Object-oriented (OO) technology has provided good concepts and modelling and programming techniques that have made it possible to develop complex applications both in academia and in industry. This experience has, however, revealed weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming offers a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. These crosscutting concerns manifest themselves as the same code being scattered across several modules of the system, or as several pieces of code being tangled within the same module. This new way of programming makes it possible to implement each concern independently of the others and then to assemble them according to well-defined rules. AO programming thus promises better productivity, better code reuse, and better adaptability of code to change. Very quickly, this new approach spread across the entire software development process, with the goal of preserving modularity and traceability, two important properties of good-quality software. However, AO technology presents many challenges. Reasoning about, specifying, and verifying AO programs is difficult, all the more so because these programs evolve over time. Consequently, modular reasoning about these programs is required; otherwise they would have to be re-examined in their entirety every time a component is changed or added. It is, however, well known in the literature that modular reasoning about AO programs is difficult, since the applied aspects often change the behaviour of their base components [47]. The same difficulties are present in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are poorly covered and constitute a very interesting research area. Likewise, interactions between aspects are a serious problem in the aspect community. To address these problems, we have chosen to use category theory and algebraic specification techniques. To provide a solution to the problems cited above, we have used the work of Wiels [110] and other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of our thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we have defined a logic, LA, which is used in the body of specifications to describe the behaviour of these components. The third contribution consists in the definition of the weaving operator, which corresponds to the interconnection relation between aspect modules and class modules. The fourth contribution concerns the development of a prevention mechanism that makes it possible to prevent undesirable interactions in aspect-oriented systems.
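As a rough, hedged illustration of the crosscutting-concern problem and of weaving (this is not the algebraic weaving operator defined in the thesis), the following C++ sketch keeps a logging concern in one module and connects it to the base component from the outside instead of scattering it through every method; the Account and with_logging names are hypothetical.

#include <functional>
#include <iostream>
#include <string>

// Base component: a class module with no knowledge of the logging concern.
struct Account {
    double balance = 0.0;
    void deposit(double amount) { balance += amount; }
};

// "Aspect": advice that can be woven around any action, keeping the
// crosscutting logging concern in one module instead of scattered copies.
template <typename Action>
auto with_logging(const std::string& name, Action action) {
    return [name, action](auto&&... args) {
        std::cout << "[log] entering " << name << '\n';
        action(std::forward<decltype(args)>(args)...);
        std::cout << "[log] leaving " << name << '\n';
    };
}

int main() {
    Account acc;

    // A very rough stand-in for a weaving operator: the aspect module and
    // the class module are interconnected here, not inside Account itself.
    auto deposit = with_logging("Account::deposit",
                                [&acc](double amount) { acc.deposit(amount); });

    deposit(100.0);
    std::cout << "balance = " << acc.balance << '\n';
}

In an aspect language the interconnection would be expressed declaratively (for example through pointcuts) rather than by wrapping each call by hand; the sketch only shows why keeping the concern in one place aids modularity and traceability.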
Abstract:
This thesis argues that functions should be transparent during the metaprogramming phase. Metaprogramming is intended as a way for the programmer to extend the compiler. In a functional programming style, however, the program's logic resides in the definitions of the various functions that compose it. Since functions are generally opaque, the impossibility of accessing this logic limits the possible applications of the metaprogramming phase. We illustrate the advantages that transparent functions bring to metaprogramming, in particular with the example of symbolic computation and an example of new optimizations that become possible. We also show that the transparency of functions makes it possible to bridge the gap between the program's datatypes and its functions. We then study what the presence of transparent functions implies for a language, concentrating on aspects related to their implementation, to performance, and to ease of use. We illustrate our points with the language Abitbol, a language created specifically for metaprogramming.
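The following C++ sketch (an analogy under stated assumptions, not Abitbol code) shows the kind of application that becomes possible once a function's body is available as data rather than being opaque: the function f is represented as an expression tree, so a metaprogram can traverse it and, here, perform symbolic differentiation; Expr, d, and eval are hypothetical names.

#include <iostream>
#include <memory>

// A function body represented as data, so a "metaprogram" can inspect it.
struct Expr {
    enum Kind { Const, Var, Add, Mul } kind;
    double value = 0.0;              // used when kind == Const
    std::shared_ptr<Expr> lhs, rhs;  // used when kind == Add or Mul
};
using ExprPtr = std::shared_ptr<Expr>;

ExprPtr cst(double v)             { return std::make_shared<Expr>(Expr{Expr::Const, v, nullptr, nullptr}); }
ExprPtr var()                     { return std::make_shared<Expr>(Expr{Expr::Var, 0, nullptr, nullptr}); }
ExprPtr add(ExprPtr a, ExprPtr b) { return std::make_shared<Expr>(Expr{Expr::Add, 0, a, b}); }
ExprPtr mul(ExprPtr a, ExprPtr b) { return std::make_shared<Expr>(Expr{Expr::Mul, 0, a, b}); }

// Symbolic differentiation with respect to the single variable: only
// possible because the function body is visible.
ExprPtr d(const ExprPtr& e) {
    switch (e->kind) {
        case Expr::Const: return cst(0);
        case Expr::Var:   return cst(1);
        case Expr::Add:   return add(d(e->lhs), d(e->rhs));
        case Expr::Mul:   return add(mul(d(e->lhs), e->rhs), mul(e->lhs, d(e->rhs)));  // product rule
    }
    return cst(0);
}

double eval(const ExprPtr& e, double x) {
    switch (e->kind) {
        case Expr::Const: return e->value;
        case Expr::Var:   return x;
        case Expr::Add:   return eval(e->lhs, x) + eval(e->rhs, x);
        case Expr::Mul:   return eval(e->lhs, x) * eval(e->rhs, x);
    }
    return 0;
}

int main() {
    ExprPtr f  = add(mul(var(), var()), cst(3));  // f(x) = x*x + 3
    ExprPtr df = d(f);                            // f'(x) = 2x
    std::cout << eval(df, 5.0) << '\n';           // prints 10
}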
Abstract:
The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming has been a problem for a small niche only: engineers working on parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Parallel programming nowadays becomes a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach the objective: research the state of the art of parallel programming today, improve the education of software developers about the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to avoid them in the future. For programmers' education, an online resource was set up to collect experiences and knowledge in the field of parallel programming, called the Parawiki. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision. Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well documented and can be used directly in programs, it enables developers to study the source code and learn from it, and it is possible for compiler writers to use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
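A minimal sketch of the data-parallel style mentioned above, using plain OpenMP rather than the AthenaMP components (whose interfaces are not given here):

// Compile with an OpenMP-capable compiler, e.g. g++ -fopenmp example.cpp
#include <omp.h>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> data(1'000'000, 1.0);
    double sum = 0.0;

    // A data-parallel operation over a large amount of data: iterations are
    // distributed across the available threads, and the partial sums are
    // combined by the reduction clause instead of a hand-written lock.
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < static_cast<long>(data.size()); ++i) {
        sum += data[i];
    }

    std::cout << "sum = " << sum << " using up to "
              << omp_get_max_threads() << " threads\n";
}

The reduction clause removes the need for explicit locking around the shared accumulator, which is representative of the kind of detail the generic components described above aim to hide from the programmer.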
Abstract:
The dataflow model of computation exposes and exploits parallelism in programs without requiring programmer annotation; however, instruction-level dataflow is too fine-grained to be efficient on general-purpose processors. A popular solution is to develop a "hybrid" model of computation where regions of dataflow graphs are combined into sequential blocks of code. I have implemented such a system to allow the J-Machine to run Id programs, leaving exposed a high amount of parallelism, such as among loop iterations. I describe this system and provide an analysis of its strengths and weaknesses and those of the J-Machine, along with ideas for improvement.
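The following C++ fragment is only a sketch of the coarsening idea (it does not involve the J-Machine or Id): a region of a dataflow graph is collapsed into one sequential block of code, while parallelism is retained at a coarser level such as among loop iterations; sequential_block is a hypothetical name.

#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// A region of a dataflow graph collapsed into one sequential block: the
// three "nodes" (scale, offset, square) run in a fixed order inside the
// block, so no fine-grained scheduling is needed between them.
double sequential_block(double x) {
    double scaled  = x * 2.0;       // node 1
    double shifted = scaled + 1.0;  // node 2
    return shifted * shifted;       // node 3
}

int main() {
    std::vector<double> input(8);
    std::iota(input.begin(), input.end(), 0.0);

    // Parallelism is kept at the coarser level, e.g. among loop iterations:
    // each iteration runs one sequential block, independently of the others.
    std::vector<std::future<double>> results;
    for (double x : input)
        results.push_back(std::async(std::launch::async, sequential_block, x));

    for (auto& r : results) std::cout << r.get() << ' ';
    std::cout << '\n';
}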
Abstract:
Linear graph reduction is a simple computational model in which the cost of naming things is explicitly represented. The key idea is the notion of "linearity". A name is linear if it is only used once, so with linear naming you cannot create more than one outstanding reference to an entity. As a result, linear naming is cheap to support and easy to reason about. Programs can be translated into the linear graph reduction model such that linear names in the program are implemented directly as linear names in the model. Nonlinear names are supported by constructing them out of linear names. The translation thus exposes those places where the program uses names in expensive, nonlinear ways. Two applications demonstrate the utility of using linear graph reduction. First, in the area of distributed computing, linear naming makes it easy to support cheap cross-network references and highly portable data structures. Linear naming also facilitates demand-driven migration of tasks and data around the network without requiring explicit guidance from the programmer. Second, linear graph reduction reveals a new characterization of the phenomenon of state. Systems in which state appears are those which depend on certain global system properties. State is not a localizable phenomenon, which suggests that our usual object-oriented metaphor for state is flawed.
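As a loose analogy in C++ (not the linear graph reduction model itself), move-only ownership behaves like a linear name that is consumed when used, while shared ownership shows the extra machinery a nonlinear name needs.

#include <iostream>
#include <memory>
#include <string>

int main() {
    // A linear name: exactly one outstanding reference; using it consumes it.
    std::unique_ptr<std::string> linear = std::make_unique<std::string>("payload");
    std::unique_ptr<std::string> moved  = std::move(linear);   // the old name is now empty
    std::cout << (linear == nullptr) << ' ' << *moved << '\n';

    // A nonlinear name must be built out of extra machinery (here, a
    // reference count), which is the kind of cost that linear graph
    // reduction makes explicit.
    std::shared_ptr<std::string> nonlinear = std::make_shared<std::string>("shared payload");
    std::shared_ptr<std::string> alias = nonlinear;             // second outstanding reference
    std::cout << nonlinear.use_count() << '\n';                 // prints 2
}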
Abstract:
The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative solution is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration an unserviceable solution. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size. Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
Abstract:
This text presents some concepts and theoretical frameworks useful for the analysis of work in ergonomics. The objective is to present the basic concepts for the study of work in the tradition of activity ergonomics, and to analyze in a general way some of the models used for the analysis of a work activity. It begins by addressing the theoretical principles of ergonomics and the principles drawn from physiology, biomechanics, psychology, and sociology; it also presents the methodological approaches used within this same perspective for the analysis of work activities. The starting point is the principle that an ergonomic study of work can be carried out from a dual perspective: the analytical perspective and the comprehensive perspective.
Abstract:
This article describes a case study involving information technology managers and their new programmer recruitment policy, but the primary interest is methodological. The processes of issue generation and selection and of model conceptualization are described. Early use of “magnetic hexagons” allowed the generation of a range of issues, most of which would not have emerged if system dynamics elicitation techniques had been employed. With the selection of a specific issue, flow diagramming was used to conceptualize a model, with computer implementation and scenario generation following naturally. Observations are made on the processes of system dynamics modeling, particularly on the need to employ general techniques of knowledge elicitation in the early stages of interventions. It is proposed that flexible approaches should be used to generate, select, and study the issues, since these reduce any biasing of the elicitation toward system dynamics problems and also allow the participants to take up the most appropriate problem-structuring approach.
Abstract:
The present study highlights the practice of sport as a basic element for reaching the educational goals understood by the system. Points such as motivation, social interaction, and the individualization of learning are addressed, with sports practice as the guiding thread of these questions. We also discuss physical capabilities acting towards making better use of all the activities performed by the subject. Since humans have a special attraction to play, the main point of our study lies in the attempt to use this natural tendency to better direct their everyday activities. At a certain point in the work, we assess the depth with which the subject is treated by educational legislation and even by the 1988 Constitution. Finally, it intends to alert society to the pertinence and relevance of a subject of such importance for the sound development of the learner.
Abstract:
Subjective well-being (SWB) is formed by global judgments of satisfaction with life, or with particular domains, and by positive and negative emotional experiences. Perception, in turn, is the interpretive process by which sensory data acquire cognitive or informative meaning, absorbed as a function of a context. From this perspective, the research aimed to evaluate the SWB and the perception of pregnancy of pregnant women of advanced age. Eighty pregnant women aged 35 or older (Group A, advanced age) and 80 pregnant women aged between 20 and 34 (Group B, young adults) participated in the survey. The instruments used were the subjective well-being scale and a questionnaire that included sociodemographic information, items about the pregnancy, and a prompt based on the Free Association of Words Technique (FAWT) to approach the perception of pregnancy. The data from the questionnaire and the scale underwent descriptive and inferential statistical analyses in order to compare the groups. In the chi-square analyses between groups, the values that were statistically significant involved the sociodemographic variables, the type of contraceptive, and health problems. The SWB indicators were further compared by means between the groups; the results of the Wilcoxon test showed no differences between the groups. In relating the well-being indicators to the variables age, education, and income, some associations were significant. In addition, the words elicited by the FAWT were analyzed using the software Programmes Permettant l'Analyse des Évocations (EVOC2000) and categorized according to Bardin's content analysis into three thematic categories (positive and negative affects, perception of gestation, and implications of pregnancy), discussed for the sample as a whole, since most of the words were common to both groups. The study highlights how similar the data presented by the pregnant women surveyed were; this similarity is presumably related to the shared social context. The relevance of this study for the health care network is to support proposals aimed at improvements specific to this public and sector, besides demonstrating that, in the researched context, pregnant women of advanced age and young adult pregnant women showed no differences in most of the characteristics studied.
Abstract:
Graduate Program in Research and Development (Medical Biotechnology) - FMB