919 results for object-oriented
Abstract:
This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies on models and simulations in philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. The study maintains a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed are: 1) How are models constructed and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool, and why is this process problematic? The core argument is that mediating models as investigative instruments (cf. Morgan and Morrison 1999) take questions as a starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account further develops the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain.
The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework to study changes in the degree of interdisciplinarity, the tools and research practices developed to support collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in the models as interdisciplinary objects, the third research problem asks how these objects might be characterised, what is typical of them, and what kinds of changes occur in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in opposition to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.
Abstract:
With Java, an intermediate language suitable for special processing, bytecode, has been reintroduced into the compilation process of programming languages. Typically, when a Java program is executed, a dedicated virtual machine loads the bytecode representation of the program, which is then executed either by interpretation or by compiling it at run time into a language understood by the execution platform. This thesis investigates optimisation opportunities at the intermediate-language level. Because of the dynamic nature of object-oriented languages, purely static optimisation is difficult and therefore often unfruitful. In the course of this work, however, a closed-world assumption suitable for mobile programming was identified, within which a program can be safely improved at the bytecode level. As an example, an optimisation that removes redundant interface classes is implemented. Since optimisation algorithms are often complex and hard to follow, the thesis also explores ways to express them more simply. The precondition check of a class-hierarchy-reordering algorithm originally implemented in Java is successfully expressed as a first-order logic formula, so that the preconditions can be checked by a logic engine implemented as part of this thesis. The propositions of the logic formulas are described for the logic engine in Java, but the propositions are combined in a logic language using AND connectives. In terms of performance, the logic engine is in some cases faster than the Java implementation.
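The abstract above describes checking an optimisation's preconditions by combining Java-implemented propositions with AND connectives in a logic engine. A minimal sketch of that combination style, with all class and proposition names hypothetical (not taken from the thesis):

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch: propositions are implemented in Java as predicates
// over some subject (here, a class name in a closed world), and the "logic
// engine" only needs to combine them with AND connectives.
public class PreconditionEngine {
    // Conjunction of propositions: all must hold for the optimisation to fire.
    public static <T> Predicate<T> and(List<Predicate<T>> propositions) {
        return subject -> propositions.stream().allMatch(p -> p.test(subject));
    }

    public static void main(String[] args) {
        // Illustrative propositions, not the thesis's actual preconditions.
        Predicate<String> isInterfaceStub = name -> name.endsWith("Stub");
        Predicate<String> notEntryPoint   = name -> !name.equals("Main");
        Predicate<String> check = and(List.of(isInterfaceStub, notEntryPoint));

        System.out.println(check.test("FooStub")); // true
        System.out.println(check.test("Main"));    // false
    }
}
```

The appeal of the approach is that each proposition stays a small, testable Java unit, while the combination logic is declarative.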
Abstract:
The loss and degradation of forest cover is a globally recognised problem, and forest fragmentation is affecting biodiversity and ecosystem well-being in Kenya as well. This study focuses on two indigenous tropical montane forests, Ngangao and Chawia, in the Taita Hills in southeastern Kenya, and is part of the TAITA project at the Department of Geography, University of Helsinki. The forests are studied with remote sensing and GIS methods. The main data comprise black-and-white aerial photography from 1955 and true-colour digital camera data from 2004, which are used to produce aerial mosaics of the study areas. The land cover of the study areas is studied by visual interpretation, pixel-based supervised classification and object-oriented supervised classification. Forest-cover change is studied with GIS methods using the visual interpretations from 1955 and 2004. Furthermore, the present state of the study forests is assessed with leaf area index and canopy closure parameters retrieved from hemispherical photographs, as well as with additional, previously collected forest health monitoring data. The canopy parameters are also compared with textural parameters from the digital aerial mosaics. This study concludes that classifying forest areas from true-colour data is not an easy task, even though the digital aerial mosaics proved very accurate. The best classifications are still achieved with visual interpretation methods, as the accuracies of the pixel-based and object-oriented supervised classification methods are not satisfactory. According to the change detection of the land cover, the area of indigenous woodland in both forests decreased between 1955 and 2004. In Ngangao, however, the overall woodland area grew, mainly because of plantations of exotic species. In general, the land cover of both study areas is more fragmented in 2004 than in 1955.
Although the forest area has decreased, the forests seem to have a brighter future than before, owing to the increasing appreciation of the forest areas.
Abstract:
Road transport and infrastructure are of fundamental importance to the developing world. Poor quality and inadequate coverage of roads, lack of maintenance and outdated road maps continue to hinder economic and social development in developing countries. This thesis studies the present state of road infrastructure and its mapping in the Taita Hills, south-east Kenya, and is part of the TAITA project of the Department of Geography, University of Helsinki. The road infrastructure of the study area is studied with remote sensing and GIS-based methods. As the principal dataset, true-colour airborne digital camera data from 2004 were used to generate an aerial image mosaic of the study area. Auxiliary data include SPOT satellite imagery from 2003, field spectrometry data of road surfaces and relevant literature. Road infrastructure characteristics are interpreted from three test sites using pixel-based supervised classification, object-oriented supervised classification and visual interpretation; the road infrastructure of the test sites is also interpreted visually from a SPOT image. Road centrelines are then extracted from the object-oriented classification results with an automatic vectorisation process. The road infrastructure of the entire image mosaic is mapped by applying the data and techniques judged most appropriate. The spectral characteristics and reflectance of various road surfaces are examined using the acquired field spectra and relevant literature, and the findings are compared with the tested road-mapping methods. This study concludes that classification and extraction of roads remains a difficult task, and that the accuracy of the results is inadequate despite the high spatial resolution of the image mosaic used in this thesis. Of all the methods tested, visual interpretation is the most straightforward, accurate and valid technique for road mapping.
Certain road surfaces share similar spectral characteristics and reflectance values with other land cover and land use, which particularly affects digital analysis techniques. Road mapping is further complicated by rich vegetation and tree canopy, clouds, shadows, low contrast between roads and their surroundings, and the width of narrow roads relative to the spatial resolution of the imagery used. The results of this thesis may be applied to road infrastructure mapping in developing countries in a more general context, although with certain limits. In particular, unclassified rural roads require updated road-mapping schemes to improve road transport possibilities and to assist the development of the developing world.
Abstract:
Most Java programmers would agree that Java is a language that promotes a philosophy of “create and go forth”. By design, temporary objects are meant to be created on the heap, possibly used, and then abandoned to be collected by the garbage collector. Excessive generation of temporary objects is termed “object churn” and is a form of software bloat that often leads to performance and memory problems. To mitigate this problem, many compiler optimizations aim at identifying objects that may be allocated on the stack. However, most such optimizations miss large opportunities for memory reuse when dealing with objects inside loops or with container objects. In this paper, we describe a novel algorithm that detects bloat caused by the creation of temporary container and String objects within a loop. Our analysis determines which objects created within a loop can be reused. We then describe a source-to-source transformation that efficiently reuses such objects. Empirical evaluation indicates that our solution can eliminate up to 40% of temporary object allocations in large programs, yielding a performance improvement that can be as high as a 20% reduction in run time, particularly when a program has a high churn rate or when the program is memory-intensive and needs to run the GC often.
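The kind of reuse such a transformation targets can be illustrated with a simplified, hand-written example (our own sketch, not the paper's actual algorithm): a temporary container allocated inside a loop is hoisted out and cleared instead of being reallocated on every iteration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of loop object churn and its manual repair.
public class ChurnDemo {
    // Churn-heavy version: a fresh ArrayList is allocated on every iteration
    // and immediately becomes garbage after the iteration ends.
    public static String joinChurny(List<String> rows) {
        StringBuilder out = new StringBuilder();
        for (String row : rows) {
            List<String> parts = new ArrayList<>();   // temporary per iteration
            for (String p : row.split(",")) parts.add(p.trim());
            out.append(String.join("|", parts)).append(';');
        }
        return out.toString();
    }

    // Reuse version: the temporary list is allocated once, hoisted out of the
    // loop, and reset with clear() instead of being reallocated.
    public static String joinReusing(List<String> rows) {
        StringBuilder out = new StringBuilder();
        List<String> parts = new ArrayList<>();       // hoisted out of the loop
        for (String row : rows) {
            parts.clear();                            // reset, no new allocation
            for (String p : row.split(",")) parts.add(p.trim());
            out.append(String.join("|", parts)).append(';');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        List<String> rows = List.of("a, b", "c , d");
        // Both versions produce identical output; only allocation behaviour differs.
        System.out.println(joinChurny(rows).equals(joinReusing(rows))); // true
    }
}
```

The transformation is only safe when no reference to the temporary escapes the iteration, which is exactly what the paper's analysis must establish.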
Abstract:
Master's and Doctorate in Advanced Computer Systems, Informatika Fakultatea - Facultad de Informática
Abstract:
The paper traces the history of the different documentation media used for information dissemination. Early media included clay tablets, papyrus, and the vellum or parchment codex. The invention of printing, however, revolutionized the information industry by enabling the production of books in multiple copies. Photography entered documentation mainly to preserve rare materials and those that deteriorate easily. This paper reports the efforts of the National Institute for Freshwater Fisheries Research (NIFFR) and the Kainji Lake Fisheries Promotion Project (KLFPP), Nigeria, to develop an Object Oriented Database (OOD) using photographs. The photographs are stored in digitized form on commercial computers, using the program ACDSee 32 for classification, description and retrieval. Specifically, the paper focuses on photographs in fisheries as visual communication and expression. Presently, the database contains photo documents about the following aspects of Kainji Lake fisheries: fishing gears and crafts, fish preservation methods
Abstract:
The purpose of this thesis was to detect and characterise areas at high risk for visceral leishmaniasis (VL) and to describe the patterns of occurrence and diffusion of the disease, between 1993-1996 and 2001-2006, in Teresina, Piauí, using statistical methods for spatial data analysis, geographic information systems, and remote sensing imagery. The results of this study are presented as three manuscripts. The first used spatial data analysis to identify the areas at highest risk of VL in the urban area of Teresina between 2001 and 2006. The results, based on kernel ratios, showed that the peripheral regions of the city were the most heavily affected throughout the period analysed. The analysis with local indicators of spatial autocorrelation showed that, at the beginning of the study period, clusters of high VL incidence were located mainly in the southern and northeastern regions of the city, but in subsequent years they also appeared in the northern region, suggesting that the pattern of VL occurrence is not static and that the disease may occasionally spread to other areas of the municipality. The second study aimed to characterise and predict high-risk territories for VL occurrence in Teresina, based on socioeconomic indicators and environmental data obtained by remote sensing. The results of the object-oriented classification indicate the expansion of the urban area towards the periphery of the city, where there had previously been greater vegetation cover. The model developed was able to discriminate 15 sets of census tracts (CTs) with different probabilities of containing CTs at high risk of VL occurrence. The subset with the highest probability of containing high-risk CTs (92%) comprised CTs with a percentage of literate household heads below the median (≤64.2%), a larger area covered by dense vegetation, and a percentage of households with up to 3 residents above the third quartile (>31.6%).
In the training and validation samples, respectively, the model showed a sensitivity of 79% and 54%, a specificity of 74% and 71%, an overall accuracy of 75% and 67%, and an area under the ROC curve of 83% and 66%. The third manuscript aimed to assess the applicability of the object-oriented classification strategy in the search for possible land-cover indicators related to the occurrence of VL in urban settings. Accuracy indices were high for both images (>90%). Correlating VL incidence with the environmental indicators revealed positive correlations with the indicators dense vegetation, low vegetation and exposed soil, and negative correlations with the indicators water, dense urban and green urban, all statistically significant. The results of this thesis show that the occurrence of VL on the periphery of Teresina is strongly related to inadequate socioeconomic conditions and to the environmental transformations arising from urban expansion, which favour the occurrence of the vector (Lutzomyia longipalpis) in these regions.
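The sensitivity, specificity, and overall accuracy figures quoted in the abstract follow from a 2x2 confusion matrix. A sketch of the standard definitions, using illustrative counts chosen to reproduce the 79%/74% training figures (not the thesis's actual data):

```java
// Standard diagnostic metrics from a 2x2 confusion matrix:
// tp = true positives, tn = true negatives, fp = false positives, fn = false negatives.
public class DiagnosticMetrics {
    public static double sensitivity(int tp, int fn) { return (double) tp / (tp + fn); }
    public static double specificity(int tn, int fp) { return (double) tn / (tn + fp); }
    public static double accuracy(int tp, int tn, int fp, int fn) {
        return (double) (tp + tn) / (tp + tn + fp + fn);
    }

    public static void main(String[] args) {
        // Made-up counts (100 positives, 100 negatives) giving sens=0.79, spec=0.74.
        int tp = 79, fn = 21, tn = 74, fp = 26;
        System.out.printf("sens=%.2f spec=%.2f acc=%.3f%n",
                sensitivity(tp, fn), specificity(tn, fp), accuracy(tp, tn, fp, fn));
    }
}
```

Note that overall accuracy also depends on the prevalence of positives in the sample, which is why it need not sit midway between sensitivity and specificity.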
Abstract:
This work reports the strategies and activities carried out in blended-learning courses developed during the academic terms from 2008 to 2012 with students of Business Administration and Accounting, in the Moodle virtual learning environment. The courses were designed according to the principles of collaborative learning approaches, with the aim of examining the possibilities of using the uncanny (the "insólito") as a reading and writing strategy. The work also sought to demonstrate the applicability of the didactic strategies described as a way of improving the reading and writing skills of students entering higher education, and to provide groundwork for continuing the research. The work maintains an interdiscursive relationship with Italo Calvino's If on a Winter's Night a Traveler (1982), which provides not only the titles of its chapters but also the subtle construction of a presence that runs through them. From the same author it also borrows six proposals (lightness, quickness, exactitude, visibility, multiplicity, and consistency) capable of improving the quality of communication in computing environments. It further draws on the theoretical work of Michel Serres for a singular concept of communication, one capable of transcending substantiality and of understanding and stimulating the construction of presence through exchanges and relationships in virtual learning environments. Accordingly, the research rests on the construction-reflection-reconstruction of online workshops that use the uncanny, conceived as something surprising and destabilising, to build strategies for improving the reading and textual production of university students. It also draws on the research of Mikhail Bakhtin, Ângela Kleiman, Carla Coscarelli, and Vilson Leffa, decisive contributions both to the design of the online workshops and to the reflection woven throughout the research.
Finally, it draws on studies of Learning Styles and on the application of the Cloze Test to refine the reflections developed.
Abstract:
In the 1980s, the emergence of more user-friendly computer programs for users and producers of information, together with technological evolution, led public and private institutions to deepen their studies of computer-assisted cartographic production systems, aiming at the implementation of Geographic Information Systems (GIS). The limited coordination among the interested agencies resulted in a large quantity of digital files in need of standardisation. In 2007, the National Cartography Commission (CONCAR) approved the Vector Geospatial Data Structure (EDGV) in order to mitigate the lack of standardisation of cartographic databases. This dissertation focuses on developing a working methodology for converting existing digital cartographic databases from the Digital Topographic Map Library (MTD) standard of the Brazilian Institute of Geography and Statistics (IBGE) to the EDGV standard, and on assessing its potential and limitations for the integration and standardisation of digital cartographic databases. The methodology is applied to the topographic map of Saquarema, at the scale of 1:50,000, vectorised at IBGE's Cartography Coordination (CCAR) and available on the Internet. Since the EDGV was designed using object-oriented modelling techniques, a mapping to a relational database was necessary, as the relational model is still used by most users and producers of geographic information. One of the specific objectives is to design a database schema, that is, an empty database containing all the object classes, attributes, and respective domains defined in the EDGV, so that it can be used in IBGE's cartographic production process.
This schema will contain descriptions of all the objects and their attributes, and will already allow the user to select the domain of a given attribute from a predefined list, preventing data-entry errors. This working methodology will be of great importance for the conversion of IBGE's existing cartographic databases, thereby making it possible to generate and publish cartographic databases in the EDGV standard.
Abstract:
An approach to reconfiguring control systems in the event of major failures is advocated. The approach relies on the convergence of several currently emerging technologies: constrained predictive control, high-fidelity modelling of complex systems, fault detection and identification, and model approximation and simplification. Much work, both theoretical and algorithmic, is needed to make this approach practical, but we believe there is enough evidence, especially from existing industrial practice, for the scheme to be considered realistic. After outlining the problem and the proposed solution, the paper briefly reviews constrained predictive control and object-oriented modelling, which are the essential ingredients for practical implementation. The prospects for automatic model simplification are also reviewed briefly. The paper emphasizes some emerging trends in industrial practice, especially regarding the modelling and control of complex systems. Examples from process control and flight control illustrate some of the ideas.
Abstract:
Computer Aided Control Engineering involves three parallel streams: simulation and modelling, control system design (off-line), and controller implementation. In industry the bottleneck has always been modelling, and this remains the case - that is where control (and other) engineers put most of their technical effort. Although great advances in software tools have been made, the cost of modelling remains very high - too high for some sectors. Object-oriented modelling, enabling truly re-usable models, seems to be the key enabling technology here. Software tools to support control system design have two aspects: aiding and managing the workflow of particular projects (whether of a single engineer or of a team), and providing numerical algorithms to support control-theoretic and systems-theoretic analysis and design. The numerical problems associated with linear systems have been largely overcome, so that most problems can be tackled routinely without difficulty - though problems remain with (some) systems of extremely large dimensions. Recent emphasis on control of hybrid and/or constrained systems is making geometric algorithms (ellipsoidal approximation, polytope projection, etc.) increasingly important. Constantly increasing computational power is leading to renewed interest in design by optimisation, an example of which is MPC. The explosion of embedded control systems has highlighted the importance of autocode generation, directly from modelling/simulation products to target processors. This is the 'new kid on the block', and again much of the focus of commercial tools is on this part of the control engineer's job. Here the control engineer can no longer ignore computer science (at least for the time being). © 2006 IEEE.
Abstract:
CAD software can be structured as a set of modular 'software tools' only if there is some agreement on the data structures to be passed between tools. Beyond this basic requirement, it is desirable to give the agreed structures the status of 'data types' in the language used for interactive design. The ultimate refinement is to have a data management capability which 'understands' how to manipulate such data types. In this paper the requirements of CACSD are formulated from the point of view of Database Management Systems. Progress towards meeting these requirements in both the DBMS and the CACSD communities is reviewed. The conclusion reached is that there has been considerable movement towards the realisation of software tools for CACSD, but that this owes more to modern ideas about programming languages than to DBMS developments. The DBMS field has identified some useful concepts, but further significant progress is expected to come from the exploitation of concepts such as object-oriented programming, logic programming, or functional programming.
Abstract:
Automating the model generation process of infrastructure can substantially reduce the modeling time and cost. This paper presents a method to generate a sparse point cloud of an infrastructure scene using a single video camera under practical constraints. It is the first step towards establishing an automatic framework for object-oriented as-built modeling. Motion blur and key frame selection criteria are considered. Structure from motion and bundle adjustment are explored. The method is demonstrated in a case study where the scene of a reinforced concrete bridge is videotaped, reconstructed, and metrically validated. The result indicates the applicability, efficiency, and accuracy of the proposed method.