930 results for object-oriented software framework
Abstract:
An object-oriented model of the cardiovascular control system under dialysis conditions has been developed, applying an electrical analogy in which components connected through interconnections are used. The model represents the differential equations of the cardiovascular system and of the baroreceptor control system, as well as the dynamic equations governing the exchange of fluids and solutes in the haemodialyser. Based on this model, simulation experiments were carried out under normal conditions and in situations of haemorrhage, blood transfusion, and ultrafiltration and fluid infusion during haemodialysis treatment. The results show, first, the effectiveness of the baroreceptor system in compensating for the arterial hypotension induced by haemorrhage and blood transfusion episodes. Second, they show the response of the control system to different ultrafiltration rates during haemodialysis, and optimal values for adequate operation are suggested.
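As a minimal sketch of the electrical analogy named in the abstract (not the authors' model), the fragment below represents vascular compartments as compliances joined by a resistance, with a crude baroreceptor loop adjusting peripheral resistance and an optional bleed term for haemorrhage. All parameter values and names are illustrative assumptions.

import numpy as np

C_a, C_v = 1.5, 80.0          # arterial / venous compliance (ml/mmHg), assumed
R_p = 1.0                     # nominal peripheral resistance (mmHg*s/ml), assumed
P_set, gain = 95.0, 0.02      # baroreceptor set point and gain, assumed

def heart_flow(t):
    """Very crude pulsatile inflow into the arterial compartment (ml/s)."""
    return 90.0 * max(np.sin(2 * np.pi * t), 0.0)

def simulate(T=60.0, dt=1e-3, bleed=0.0):
    P_a, P_v = 95.0, 5.0      # initial arterial / venous pressures (mmHg)
    for step in range(int(T / dt)):
        t = step * dt
        # baroreceptor analogy: low pressure raises peripheral resistance
        R_eff = max(R_p * (1 + gain * (P_set - P_a)), 0.1)
        q_out = (P_a - P_v) / R_eff                 # flow through the periphery
        P_a += dt * (heart_flow(t) - q_out) / C_a
        P_v += dt * (q_out - heart_flow(t) - bleed) / C_v   # bleed models haemorrhage
    return P_a, P_v

print(simulate(bleed=0.0), simulate(bleed=2.0))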
Abstract:
136 p.
Abstract:
Abstract: Texture has good discriminating potential that complements that of radiometric parameters in the image classification process. The multiband Compact Texture Unit (CTU) index, recently developed by Safia and He (2014), extracts texture from several bands at once and thus exploits additional information ignored so far in traditional texture analyses: the interdependence between bands. However, this new tool has not yet been tested on multisource images, a use that can be of great interest when one considers, for example, the textural richness that radar can add to optical data when the two are combined. This study therefore completes the validation initiated by Safia (2014) by applying the CTU to an optical-radar image pair. Texture analysis of this data set produced a "colour texture" image. The texture bands thus created were combined again with the initial optical bands before being integrated into a land-cover classification process in eCognition. The same classification procedure (but without the CTU) was applied to the optical data alone, then to the radar data, and finally to the optical-radar combination. In addition, the CTU generated from the optical data alone (monosource) was compared with the one derived from the optical-radar pair (multisource). Analysing the separating power of these different bands with histograms, together with the confusion matrix tool, made it possible to compare the performance of the different configurations and parameters used. These comparisons show the CTU, and in particular the multisource CTU, to be the most discriminating criterion; its presence adds variability to the image, allowing a sharper segmentation and a classification that is both more detailed and more accurate. Indeed, accuracy rises from 0.5 with the optical image to 0.74 for the CTU image, while confusion decreases from 0.30 (optical) to 0.02 (CTU).
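For orientation only, the sketch below encodes the classical single-band texture-unit idea on which such indices build: each pixel's eight neighbours are compared with the centre and the comparison pattern is encoded as one number, which then feeds texture statistics. The multiband, compact variant of Safia and He (2014) differs in detail; this is not their algorithm.

import numpy as np

def texture_unit_image(band):
    """Encode the 8-neighbour comparison pattern of every interior pixel in base 3."""
    h, w = band.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros((h - 2, w - 2), dtype=np.int32)
    centre = band[1:-1, 1:-1]
    for k, (dy, dx) in enumerate(offsets):
        neigh = band[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code = np.where(neigh > centre, 2, np.where(neigh == centre, 1, 0))
        out += code * 3 ** k
    return out  # values in [0, 3**8 - 1]

img = np.random.randint(0, 255, (64, 64))
print(texture_unit_image(img).max())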
Abstract:
Above ground biomass is frequently estimated with forest inventory data and an extrapolation method for the per unit area evaluations. This procedure is labour demanding and costly. In this study, above ground biomass functions whose independent variable is crown horizontal projection were developed. A multi-resolution segmentation method and object-oriented classification, based on very high spatial resolution satellite images, were used to obtain the area of tree crown horizontal projection for umbrella pine (Pinus pinea L.). A set of inventory plots was measured, and above ground biomass per tree and per plot was calculated with existing allometric functions for this species. The two data sets were used to fit linear functions both for individual plots and for their cumulative values. The results show a good performance of the models: errors smaller than 10% are obtained for stand areas greater than 1.4 ha. These functions have the advantage of estimating above ground biomass for the whole area under study or surveillance without requiring a forest inventory; they allow monitoring over short time periods and are easily implemented in a geographical information system environment.
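A hedged sketch of the kind of linear fit described above: above ground biomass per plot regressed on crown horizontal projection area by ordinary least squares. The numbers are made-up placeholders, not data from the study.

import numpy as np

crown_area = np.array([120.0, 340.0, 510.0, 760.0, 990.0])   # m2 per plot, hypothetical
agb = np.array([1.1, 3.0, 4.4, 6.8, 8.9])                     # Mg per plot, hypothetical

# Ordinary least squares for agb = b0 + b1 * crown_area
X = np.column_stack([np.ones_like(crown_area), crown_area])
b0, b1 = np.linalg.lstsq(X, agb, rcond=None)[0]
pred = b0 + b1 * crown_area
rel_error = np.abs(pred - agb) / agb
print(f"agb ~ {b0:.3f} + {b1:.5f} * crown_area, max relative error {rel_error.max():.1%}")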
Abstract:
Forest biomass has gained increasing importance in the world economy and in the evaluation of forest development and monitoring. It has been identified as a global strategic reserve, due to its applications in bioenergy, bioproduct development and issues related to reducing greenhouse gas emissions. Above ground biomass is frequently estimated with species-specific allometric functions applied to plot inventory data; an adequate sampling design and intensity for a given error threshold are required, and the estimation per unit area is done with an extrapolation method. This procedure is labour demanding and costly. The main goal of this study is the development of allometric functions for the estimation of above ground biomass with ground cover as the independent variable, for forest areas of holm oak (Quercus rotundifolia), cork oak (Quercus suber) and umbrella pine (Pinus pinea) in multiple use systems. Ground cover per species was derived from crown horizontal projection obtained by processing high resolution satellite images, orthorectified and geometrically and atmospherically corrected, with a multi-resolution segmentation method and object-oriented classification. Forest inventory data were used to estimate plot above ground biomass with published allometric functions at tree level. The functions were fitted for monospecies stands and for multispecies stands of Quercus rotundifolia and Quercus suber, and of Quercus suber and Pinus pinea. Stand composition was accounted for by adding dummy variables to distinguish monospecies from multispecies stands. The models showed a good performance. Notably, the dummy variables, reflecting the differences between species, improved the models, and significant differences were found between the above ground biomass estimates obtained with and without them. An error threshold of 10% corresponds to stand areas of about 40 ha. This method enables evaluation of the whole area, without extrapolation procedures, for the three species, which frequently occur in multispecies stands.
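The fragment below illustrates, with invented numbers, how a stand-composition dummy variable can enter such a regression: above ground biomass as a function of ground cover, with a 0/1 indicator separating monospecies from multispecies stands shifting both intercept and slope. It is a generic sketch, not the fitted functions of the study.

import numpy as np

cover = np.array([10.0, 25.0, 40.0, 15.0, 30.0, 55.0])   # ground cover %, hypothetical
multi = np.array([0, 0, 0, 1, 1, 1])                      # 1 = multispecies stand
agb = np.array([8.0, 19.0, 31.0, 14.0, 27.0, 48.0])       # Mg/ha, hypothetical

# agb = b0 + b1*cover + b2*multi + b3*(cover*multi)
X = np.column_stack([np.ones_like(cover), cover, multi, cover * multi])
coef = np.linalg.lstsq(X, agb, rcond=None)[0]
print("intercept, slope, dummy shift, dummy-slope shift:", np.round(coef, 3))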
Abstract:
Cancer is a challenging disease that involves multiple types of biological interactions on different time and space scales. Computational modelling often faces problems that, at the current technology level, are impracticable to represent in a single space-time continuum. To handle this sort of problem, complex orchestrations of multiscale models are frequently used. PRIMAGE is a large EU project that aims to support personalized childhood cancer diagnosis and prognosis by predicting the growth of the solid tumour using multiscale in-silico technologies. The project proposes an open cloud-based platform to support decision making in the clinical management of paediatric cancers. The orchestration of predictive models is in general complex and requires a software framework that supports and facilitates this task. The present work proposes the development of an updated framework, referred to herein as VPH-HFv3, as part of the PRIMAGE project. This framework, a complete rewrite with respect to the previous versions, aims to orchestrate several models, which are in concurrent development, using an architecture that is as simple as possible, easy to maintain and highly reusable. This sort of problem generally involves unfeasible execution times. To overcome this, a strategy of particularisation, which maps the upper-scale model results onto a smaller number of representative instances, and homogenisation, which performs the inverse mapping, was developed, and the accuracy of this approach was analysed.
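Purely as a conceptual sketch of the particularisation/homogenisation idea (the clustering rule and all names are assumptions, not the PRIMAGE or VPH-HFv3 implementation): instead of running the lower-scale model on every cell, run it on a few representative classes and map the results back.

import numpy as np

def particularise(upper_scale_field, n_classes=4):
    """Group upper-scale cells into a small number of representative values."""
    edges = np.quantile(upper_scale_field, np.linspace(0, 1, n_classes + 1))
    labels = np.clip(np.digitize(upper_scale_field, edges[1:-1]), 0, n_classes - 1)
    representatives = np.array([upper_scale_field[labels == k].mean()
                                for k in range(n_classes)])
    return labels, representatives

def homogenise(labels, lower_scale_outputs):
    """Map the few lower-scale results back onto every upper-scale cell."""
    return lower_scale_outputs[labels]

field = np.random.rand(10_000)                 # e.g. an oxygen level per tissue cell
labels, reps = particularise(field)
cheap_results = np.sqrt(reps)                  # stand-in for an expensive cell-scale model
full_field = homogenise(labels, cheap_results)
print(full_field.shape, len(reps), "lower-scale runs instead of", field.size)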
Abstract:
The dissertation addresses the still unsolved challenges of source-based digital 3D reconstruction, visualisation and documentation in the domain of archaeology, art and architecture history. The emerging BIM methodology and the IFC exchange data format are changing the way of collaboration, visualisation and documentation in the planning, construction and facility management process. The introduction and development of the Semantic Web (Web 3.0), spreading the idea of structured, formalised and linked data, offers semantically enriched human- and machine-readable data. In contrast to civil engineering and cultural heritage, academic object-oriented disciplines such as archaeology, art and architecture history are acting as outside spectators. Since the 1990s it has been argued that a 3D model is not likely to be considered a scientific reconstruction unless it is grounded on accurate documentation and visualisation. However, these standards are still missing and validation of the outcomes is not fulfilled. Meanwhile, the digital research data remain ephemeral and continue to fill the growing digital cemeteries. This study therefore focuses on the evaluation of source-based digital 3D reconstructions and, especially, on uncertainty assessment in the case of hypothetical reconstructions of destroyed or never built artefacts according to scientific principles, making the models shareable and reusable by a potentially wide audience. The work initially focuses on terminology and on the definition of a workflow, especially related to the classification and visualisation of uncertainty. The workflow is then applied to specific cases of 3D models uploaded to the DFG repository of the AI Mainz. In this way, the available methods of documenting, visualising and communicating uncertainty are analysed. In the end, this process leads to a validation or a correction of the workflow and the initial assumptions, but also (dealing with different hypotheses) to a better definition of the levels of uncertainty.
Abstract:
LHC experiments produce an enormous amount of data, estimated to be of the order of a few petabytes per year. Data management takes place on the Worldwide LHC Computing Grid (WLCG) infrastructure, both for storage and for processing operations. In recent years, however, many more resources have become available on High Performance Computing (HPC) farms, which generally have many computing nodes with a high number of processors. Large collaborations are working to use these resources in the most efficient way, compatibly with the constraints imposed by the computing models (data distributed on the Grid, authentication, software dependencies, etc.). The aim of this thesis project is to develop a software framework that allows users to run a typical data analysis workflow of the ATLAS experiment on HPC systems. The developed analysis framework will be deployed on the computing resources of the Open Physics Hub project and on the CINECA Marconi100 cluster, in view of the switch-on of the Leonardo supercomputer foreseen in 2023.
Abstract:
Product lifecycle management (PLM) innovates in that it defines both the product as a central element for aggregating enterprise information and the lifecycle as a new time dimension for information integration and analysis. Because of its potential to shorten innovation lead times and to reduce costs, PLM has attracted a lot of attention in industry and in research. However, the current PLM implementation stage at most organisations still does not apply the lifecycle management concepts thoroughly. In order to close this realisation gap, this article presents a process-oriented framework to support effective PLM implementation. The central point of the framework is a set of lifecycle-oriented business process reference models which link the necessary fundamental concepts, enterprise knowledge and software solutions to effectively deploy PLM. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
This master's thesis creates a framework for the preliminary design of a product data management (PDM) system. The framework has three dimensions: value creation, functionality and software. It helps to identify the value-creation components that can be influenced by the product data management functionalities offered by particular software categories. The system design perspective of the framework is exploited in the studied company cases, based on the relationships between the dimensions modelled in the form of a calculation matrix. The matrix is fed with the importance ratings that the value-creation and functionality components received in an interview study carried out in the target company. The output of the matrix is the suitability of a particular software product for that company's case. Suitability is expressed as a set of indicators that are analysed in the result-processing phase. The suitability results assist the target company in choosing its approach to product data management and describe the preliminarily designed PDM system. Building the framework requires a thorough approach to defining the relevant value-creation and functionality components as well as the software categories. This definition work is based on the methods and component groupings elaborated in detail in the thesis. Analysing each area makes it possible to build the framework and the calculation matrix on the basis of consistent definitions. A characteristic of the framework is its adaptability. In its current form it is suited to electronics and high-tech companies, but it can also be used in other industries by adapting the value-creation components to the interests of each industry. Correspondingly, the software to be analysed can be chosen case by case. The calculation matrix must, however, first be updated with the capabilities of the chosen software, after which the framework can produce suitability results for that company case.
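A simplified sketch of the calculation-matrix idea described above: interview-derived importance of functionality components is combined with a software category's capability scores and mapped onto value-creation components to give suitability figures. All component names and numbers are invented placeholders, not the thesis's actual matrix.

import numpy as np

functionality = ["document management", "workflow", "change management"]
importance = np.array([0.9, 0.6, 0.8])          # from the interview study, hypothetical

# How strongly each functionality contributes to two value-creation components
value_map = np.array([[0.7, 0.2],               # rows: functionalities
                      [0.4, 0.8],               # cols: e.g. lead time, quality
                      [0.5, 0.6]])

capability = np.array([0.8, 0.5, 0.9])          # capability scores of the analysed software

weighted = importance * capability              # realised value of each functionality
suitability = weighted @ value_map              # suitability indicator per value component
print(dict(zip(functionality, np.round(weighted, 2))))
print(dict(zip(["lead time", "quality"], np.round(suitability, 2))))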
Abstract:
Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and resulting land use patterns. An essential methodology to study and quantify such interactions is provided by the adoption of land-use models. By the application of land-use models, it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition. In particular on the regional scale, a variety of models has been created for different regions and research questions. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and the strict separation of application and implementation. These features enable efficient development, testing and usage of integrated land-use models. On its system side, SITE provides generic data structures (grid, grid cells, attributes etc.) and takes responsibility for their administration. By means of a scripting language (Python) that has been extended with language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting language interpreter is embedded in SITE. The integration of sub-models can be achieved via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model testing and analysis of simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was laid on expandability, maintainability and usability. Along with the modeling framework, a land-use model for the analysis of the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period of 1981 to 2002. In addition, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal or at least adequate solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure of merit map comparison measure as the objective function. The calibration period ranged from 1981 to 2002, for which respective reference land-use maps were compiled. It could be shown that efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge about the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination and the resulting coffee fruit set on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
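A hedged sketch of the calibration loop described above, in the framework's own scripting language: a genetic-algorithm-style search over model parameters with the figure of merit (hits / (hits + misses + false alarms) between simulated and reference change maps) as the objective. The toy "land-use model" and all parameter ranges are assumptions for illustration, not the SITE or STORMA implementation.

import numpy as np

rng = np.random.default_rng(0)
suitability = rng.random((50, 50))                        # fixed toy "suitability" surface
reference = suitability < 0.3                             # reference change map (bool)

def run_model(threshold):
    """Stand-in for a SITE application: returns a simulated change map."""
    return suitability < threshold

def figure_of_merit(sim, ref):
    hits = np.sum(sim & ref)
    misses = np.sum(~sim & ref)
    false_alarms = np.sum(sim & ~ref)
    return hits / (hits + misses + false_alarms)

# Minimal genetic algorithm: keep the fittest candidates and mutate them each generation
population = rng.uniform(0.0, 1.0, size=10)
for generation in range(20):
    scores = np.array([figure_of_merit(run_model(p), reference) for p in population])
    elite = population[np.argsort(scores)[-3:]]
    children = np.clip(rng.choice(elite, 7) + rng.normal(0, 0.05, 7), 0, 1)
    population = np.concatenate([elite, children])

scores = np.array([figure_of_merit(run_model(p), reference) for p in population])
print("calibrated parameter ~", round(float(population[np.argmax(scores)]), 3),
      "figure of merit ~", round(float(scores.max()), 3))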
Abstract:
It is difficult, if not impossible, to find something that is not changing in computer technology: circuits, architectures, languages, methods, fields of application... The "central object" itself of this branch of engineering, software, represents such a diverse reality (many objects) that the fact that it has only one name gives rise to considerable confusion. This issue, among others, was taken up by Fox (1) and, at this point, I would like to underline that it is more of a pragmatic issue than an academic one. Thus, Software Engineering Education moves in an unstable, undefined world. This axiom governs and limits the validity of all educational proposals in the area of Software Engineering and, therefore, all the ideas presented in this paper.
Abstract:
A comprehensive user model, built by monitoring a user's current use of applications, can be an excellent starting point for building adaptive user-centred applications. The BaranC framework monitors all user interaction with a digital device (e.g. a smartphone) and also collects all available context data (such as from sensors in the digital device itself, in a smart watch, or in smart appliances) in order to build a full model of user application behaviour. The model built from the collected data, called the UDI (User Digital Imprint), is further augmented by analysis services, for example a service that produces activity profiles from smartphone sensor data. The enhanced UDI model can then be the basis for building an appropriate adaptive application that is user-centred, as it is based on an individual user model. As BaranC supports continuous user monitoring, an application can be dynamically adaptive in real time to the current context (e.g. time, location or activity). Furthermore, since BaranC continuously augments the user model with more monitored data, the user model changes over time, and the adaptive application can adapt gradually to changing user behaviour patterns. BaranC has been implemented as a service-oriented framework in which the collection of data for the UDI and all sharing of the UDI data are kept strictly under the user's control. In addition, being service-oriented allows (with the user's permission) its monitoring and analysis services to be easily used by third parties in order to provide third-party adaptive assistant services. An example third-party service demonstrator, built on top of BaranC, proactively assists a user by dynamic prediction, based on the current context, of which apps and contacts the user is likely to need. BaranC introduces an innovative user-controlled unified service model for monitoring and using personal digital activity data in order to provide adaptive user-centred applications. This aims to improve on the current situation, where the diversity of adaptive applications results in a proliferation of applications monitoring and using personal data, leading to a lack of clarity, a dispersal of data, and a diminution of user control.
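Purely illustrative and not the BaranC API: the sketch below shows a minimal User Digital Imprint (UDI) structure that accumulates interaction and sensor events, plus a toy "analysis service" deriving an hourly app-usage profile from it. All field and function names are assumptions.

from collections import Counter, defaultdict
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UDI:
    user_id: str
    events: list = field(default_factory=list)   # raw monitored events

    def record(self, source, kind, payload, when=None):
        self.events.append({"source": source, "kind": kind,
                            "payload": payload, "time": when or datetime.now()})

def app_usage_profile(udi):
    """Analysis service: which apps are used in which hour of the day."""
    profile = defaultdict(Counter)
    for e in udi.events:
        if e["kind"] == "app_launch":
            profile[e["time"].hour][e["payload"]] += 1
    return profile

udi = UDI("user-42")
udi.record("smartphone", "app_launch", "mail", datetime(2023, 5, 1, 9, 5))
udi.record("smartphone", "app_launch", "maps", datetime(2023, 5, 1, 18, 30))
print(dict(app_usage_profile(udi)))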
Abstract:
The XSophe-Sophe-XeprView® computer simulation software suite enables scientists to easily determine spin Hamiltonian parameters from isotropic, randomly oriented and single crystal continuous wave electron paramagnetic resonance (CW EPR) spectra from radicals and isolated paramagnetic metal ion centers or clusters found in metalloproteins, chemical systems and materials science. XSophe provides an X-windows graphical user interface to the Sophe programme and allows creation of multiple input files, local and remote execution of Sophe, and the display of the sophelog (output from Sophe) and of input parameters/files. Sophe is a sophisticated computer simulation software programme employing a number of innovative technologies, including the Sydney OPera HousE (SOPHE) partition and interpolation schemes, a field segmentation algorithm, the mosaic misorientation linewidth model, parallelization and spectral optimisation. In conjunction with the SOPHE partition scheme and the field segmentation algorithm, the SOPHE interpolation scheme and the mosaic misorientation linewidth model greatly increase the speed of simulations for most spin systems. Employing brute force matrix diagonalization in the simulation of an EPR spectrum from a high spin Cr(III) complex with the spin Hamiltonian parameters g_e = 2.00, D = 0.10 cm^-1, E/D = 0.25, A_x = 120.0, A_y = 120.0, A_z = 240.0 × 10^-4 cm^-1 requires a SOPHE grid size of N = 400 (to produce a good signal to noise ratio) and takes 229.47 s. In contrast, the use of either the SOPHE interpolation scheme or the mosaic misorientation linewidth model requires a SOPHE grid size of only N = 18 and takes 44.08 and 0.79 s, respectively. Results from Sophe are transferred via the Common Object Request Broker Architecture (CORBA) to XSophe and subsequently to XeprView®, where the simulated CW EPR spectra (1D and 2D) can be compared to the experimental spectra. Energy level diagrams, transition roadmaps and transition surfaces aid the interpretation of complicated randomly oriented CW EPR spectra and can be viewed with a web browser and an OpenInventor scene graph viewer.
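As a hedged illustration of the kind of spin Hamiltonian such a simulation diagonalises (not the Sophe code, and omitting the hyperfine terms), the sketch below builds H = g*muB*B*Sz + D*(Sz^2 - S(S+1)/3) + E*(Sx^2 - Sy^2) for an S = 3/2 ion such as Cr(III), using the D and E/D values quoted above, and computes the energy levels at one assumed field orientation and magnitude.

import numpy as np

S = 1.5
m = np.arange(S, -S - 1, -1)                     # 3/2, 1/2, -1/2, -3/2
dim = m.size
Sz = np.diag(m)
Sp = np.zeros((dim, dim))
for i in range(1, dim):                          # <m+1| S+ |m> = sqrt(S(S+1) - m(m+1))
    Sp[i - 1, i] = np.sqrt(S * (S + 1) - m[i] * (m[i] + 1))
Sm = Sp.T
Sx, Sy = (Sp + Sm) / 2, (Sp - Sm) / (2j)

g, D, E = 2.00, 0.10, 0.025                      # D, E in cm^-1 (E/D = 0.25)
muB = 4.66864e-5                                 # Bohr magneton in cm^-1 per Gauss
B = 3400.0                                       # field in Gauss, illustrative assumption

H = (g * muB * B * Sz
     + D * (Sz @ Sz - S * (S + 1) / 3 * np.eye(dim))
     + E * (Sx @ Sx - Sy @ Sy))
energies = np.linalg.eigvalsh(H)
print("energy levels (cm^-1):", np.round(energies, 4))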
Abstract:
GUIsurfer: A Reverse Engineering Framework for User Interface Software