899 results for "free software environment for statistical computing and graphics R"


Relevance: 100.00%

Abstract:

The Trepca Pb-Zn-Ag skarn deposit (29 Mt of ore at 3.45% Pb, 2.30% Zn, and 80 g/t Ag) is located in the Kopaonik block of the western Vardar zone, Kosovo. The mineralization, hosted by recrystallized limestone of Upper Triassic age, was structurally and lithologically controlled. Ore deposition is spatially and temporally related to the postcollisional magmatism of Oligocene age (23-26 Ma). The deposit formed during two distinct mineralization stages: an early prograde closed-system stage and a later retrograde open-system stage. The prograde mineralization, consisting mainly of pyroxenes (Hd(54-100)Jo(0-45)Di(0-45)), resulted from the interaction of magmatic fluids associated with the Oligocene (23-26 Ma) postcollisional magmatism. Although there is no direct contact between magmatic rocks and the mineralization, the deposit is classified as a distal Pb-Zn-Ag skarn. Abundant pyroxene reflects low oxygen fugacity (<10^-31 bar) and an anhydrous environment. Fluid inclusion data and the mineral assemblage constrain the prograde stage to a temperature range between 390° and 475°C. Formation pressure is estimated below 900 bars. The isotopic composition of aqueous fluid inclusions hosted by hedenbergite (δD = -108 to -130‰; δ18O = 7.5-8.0‰), the Mn-enriched mineralogy, and the high REE content of the host carbonates at the contact with the skarn mineralization suggest that a magmatic fluid was modified during its infiltration through the country rocks. The retrograde mineral assemblage comprises ilvaite, magnetite, arsenopyrite, pyrrhotite, marcasite, pyrite, quartz, and various carbonates. Increases in oxygen and sulfur fugacities, as well as the hydrous character of the mineralization, require an open-system model. The opening of the system is related to a phreatomagmatic explosion and formation of the breccia. The arsenopyrite geothermometer constrains the retrograde stage to a temperature range between 350° and 380°C and a sulfur fugacity between 10^-8.8 and 10^-7.2 bars. The principal ore minerals, galena, sphalerite, pyrite, and minor chalcopyrite, were deposited from a moderately saline Ca-Na chloride fluid at around 350°C. According to the isotopic composition of fluid inclusions hosted by sphalerite (δD = -55 to -74‰; δ18O = -9.6 to -13.6‰), the fluid responsible for ore deposition was dominantly meteoric in origin. The δ34S values of the sulfides, spanning -5.5 to +10‰, point to a magmatic origin of sulfur. Ore deposition appears to have been largely contemporaneous with the retrograde stage of skarn development. The post-ore stage was accompanied by the precipitation of significant amounts of carbonates, including the travertine deposits at the surface of the deposit. The mineralogical composition of the travertine varies from calcite to siderite, and all carbonates contain significant amounts of Mn. The decreased formation temperature and the depletion in REE content point to the influence of pH-neutralized cold groundwater and a dying magmatic system.

Relevance: 100.00%

Abstract:

Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on the measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.¦Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.¦Results: Twelve software tools were identified, tested, and ranked, providing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs offer users the ability to add their own drug models. Ten programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender, and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g., in terms of storage or report generation) or less user-friendly.¦Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be considered with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and report generation.
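As a minimal illustration of the weighted scoring grid described above, the sketch below combines per-criterion scores into a single weighted total. The weights and the example scores are invented placeholders, not the values actually used in the benchmark.

```python
# Minimal sketch of a weighted scoring grid. Criteria follow the five
# categories named in the abstract; weights and scores are illustrative.
CRITERIA_WEIGHTS = {
    "pharmacokinetic_relevance": 0.35,
    "user_friendliness": 0.25,
    "computing_aspects": 0.15,
    "interfacing": 0.15,
    "storage": 0.10,
}

def weighted_score(raw_scores):
    """Combine per-criterion scores (0-10) into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * raw_scores[c] for c in CRITERIA_WEIGHTS)

programs = {  # per-program scores: placeholders, not the study's ratings
    "MwPharm":  {"pharmacokinetic_relevance": 9, "user_friendliness": 8,
                 "computing_aspects": 8, "interfacing": 6, "storage": 7},
    "TCIWorks": {"pharmacokinetic_relevance": 8, "user_friendliness": 9,
                 "computing_aspects": 7, "interfacing": 7, "storage": 6},
}

# Rank programs by weighted total, best first.
for name, scores in sorted(programs.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```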

Relevance: 100.00%

Abstract:

Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing dosage regimens based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.¦Methods: The literature and Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.¦Results: Twelve software tools were identified, tested, and ranked, providing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. In addition, 8 programs offer the ability to add new drug models based on population PK data. Ten computer tools incorporate Bayesian computation to predict dosage regimens (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly.¦Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Although interest in TDM tools is growing and efforts have been made in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage, and automated report generation.
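The Bayesian a posteriori adjustment these tools perform can be sketched as maximum a posteriori (MAP) estimation: given population PK parameters as priors and one measured concentration, find the most probable individual parameters. The one-compartment model and every number below are illustrative assumptions, not taken from any benchmarked program.

```python
# Hedged sketch of Bayesian (MAP) individualization of PK parameters
# from a single measured concentration. All values are hypothetical.
import numpy as np
from scipy.optimize import minimize

dose, t_obs, c_obs = 500.0, 6.0, 9.2       # mg, h, mg/L (hypothetical)
pop_cl, pop_v = 4.0, 35.0                  # population means: CL (L/h), V (L)
omega_cl, omega_v, sigma = 0.3, 0.2, 0.15  # prior and residual SDs (log scale)

def neg_log_posterior(log_params):
    log_cl, log_v = log_params
    cl, v = np.exp(log_cl), np.exp(log_v)
    c_pred = dose / v * np.exp(-cl / v * t_obs)  # one-compartment IV bolus
    # Lognormal residual error plus lognormal priors on CL and V.
    nll = ((np.log(c_obs) - np.log(c_pred)) / sigma) ** 2
    nll += ((log_cl - np.log(pop_cl)) / omega_cl) ** 2
    nll += ((log_v - np.log(pop_v)) / omega_v) ** 2
    return 0.5 * nll

res = minimize(neg_log_posterior, x0=[np.log(pop_cl), np.log(pop_v)])
cl_map, v_map = np.exp(res.x)
print(f"MAP estimates: CL = {cl_map:.2f} L/h, V = {v_map:.1f} L")
```

A dosing suggestion then follows by simulating the model forward with the individualized parameters until the predicted trough falls inside the target range.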

Relevance: 100.00%

Abstract:

Centrifuge is a user-friendly system for simultaneously accessing Arabidopsis gene annotations and intra- and inter-organism sequence comparison data. The tool allows rapid retrieval of user-selected data for each annotated Arabidopsis gene, providing, in any combination, data on the following features: predicted protein properties such as mass, pI, cellular location, and transmembrane domains; SWISS-PROT annotations; InterPro domains; Gene Ontology records; verified transcription; and BLAST matches to the proteomes of A. thaliana, Oryza sativa (rice), Caenorhabditis elegans, Drosophila melanogaster, and Homo sapiens. The tool lends itself particularly well to the rapid analysis of contigs or of tens or hundreds of genes identified by high-throughput gene expression experiments. In these cases, a summary table of the principal predicted protein features for all genes is given, followed by more detailed reports for each individual gene. Centrifuge can also be used for single-gene analysis or in a word search mode. AVAILABILITY: http://centrifuge.unil.ch/ CONTACT: edward.farmer@unil.ch.
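Two of the predicted protein properties Centrifuge reports, mass and pI, can be computed locally as a rough stand-in using Biopython's ProtParam module. The gene identifiers and sequences below are placeholders for illustration only, not Centrifuge's actual data or API.

```python
# Sketch of a per-gene summary table of two predicted protein properties
# (mass and pI). Sequences are invented placeholders, not real gene products.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

proteins = {  # gene id -> protein sequence (placeholders)
    "At1g01010": "MEDQVGFGFRPNDEELVGHYLRNKIEGNTSRDVEVAISEVNICSY",
    "At1g01020": "MAASEHRCVGCGFRVKSLFIQYSPGNIRLMKCGNCKEVADEYIECER",
}

print(f"{'gene':<12}{'mass (Da)':>12}{'pI':>8}")
for gene, seq in proteins.items():
    pa = ProteinAnalysis(seq)
    print(f"{gene:<12}{pa.molecular_weight():>12.1f}{pa.isoelectric_point():>8.2f}")
```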

Relevance: 100.00%

Abstract:

ABSTRACT: A firm's competitive advantage can arise from internal resources as well as from an interfirm network. This dissertation investigates the competitive advantage of a firm involved in an innovation network by integrating strategic management theory and social network theory. It develops theory and provides empirical evidence illustrating how a networked firm enables network value and appropriates this value in an optimal way according to its strategic purpose. The four inter-related essays in this dissertation provide a framework that sheds light on the extraction of value from an innovation network by managing and designing the network in a proactive manner. The first essay reviews research in social network theory and knowledge transfer management, and identifies the crucial factors of innovation network configuration for a firm's learning performance or innovation output. The findings suggest that network structure, network relationships, and network position all impact a firm's performance. Although the previous literature shows disagreement about the impact of dense versus sparse structures, and of strong versus weak ties, case evidence from Chinese software companies reveals that dense and strong connections with partners are positively associated with firms' performance. The second essay is a theoretical essay that illustrates the limitations of social network theory for explaining the source of network value and offers a new theoretical model that applies the resource-based view to network environments. It suggests that network configurations, such as network structure, network relationships, and network position, can be considered important network resources. In addition, this essay introduces the concept of network capability and suggests that four types of network capabilities play an important role in unlocking the potential value of network resources and determining the distribution of network rents between partners. This essay also highlights the contingent effects of network capability on a firm's innovation output and explains how the different impacts of network capability depend on a firm's strategic choices. This new theoretical model has been pre-tested with a case study of the Chinese software industry, which enhances the internal validity of the theory. The third essay addresses the questions of what impact network capability has on firm innovation performance and what the antecedent factors of network capability are. It employs a structural equation modelling methodology on a sample of 211 Chinese high-tech firms. It develops a measurement of network capability and reveals that networked firms deal with cooperation and coordination with partners on different levels according to their levels of network capability. The empirical results also suggest that IT maturity, openness of culture, the management system involved, and experience with network activities are antecedents of network capability. Furthermore, a two-group analysis of the role of international partners shows that when there is a culture and norm gap with foreign partners, a firm must mobilize more resources and effort to improve its performance with respect to its innovation network. The fourth essay addresses the way in which network capabilities influence firm innovation performance. Using hierarchical multiple regression with data from Chinese high-tech firms, the findings suggest that knowledge transfer partially mediates the relationship between network capabilities and innovation performance. The findings also reveal that the impact of network capabilities varies with the environment and the strategic decision the firm has made: exploration or exploitation. Network constructing capability has a greater positive impact on, and contributes more to, innovation performance than network operating capability in an exploration network, whereas network operating capability is more important than network constructing capability for innovative firms in an exploitation network. These findings therefore highlight that a firm can proactively shape its innovation network for greater benefit, but when it does so, it should adjust its focus and efforts in accordance with its innovation purposes or strategic orientation.
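The mediation test in the fourth essay can be illustrated with a hierarchical regression sketch on simulated data: the mediator is added in a second step, and a shrunken but still significant direct effect indicates partial mediation. Variable names and effect sizes below are invented for illustration, not the dissertation's measures.

```python
# Hedged sketch of a partial-mediation test via hierarchical regression.
# Data are simulated; only the analysis pattern mirrors the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 211  # same sample size as the survey; data entirely simulated
net_cap = rng.normal(size=n)
kt = 0.5 * net_cap + rng.normal(scale=0.8, size=n)           # knowledge transfer
innov = 0.3 * net_cap + 0.4 * kt + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"net_cap": net_cap, "kt": kt, "innov": innov})

# Step 1: total effect of network capability on innovation performance.
total = smf.ols("innov ~ net_cap", df).fit()
# Step 2: add the mediator; a smaller but still significant net_cap
# coefficient indicates partial mediation.
mediated = smf.ols("innov ~ net_cap + kt", df).fit()
print(f"total effect:  {total.params['net_cap']:.2f} (p={total.pvalues['net_cap']:.3f})")
print(f"direct effect: {mediated.params['net_cap']:.2f} (p={mediated.pvalues['net_cap']:.3f})")
print(f"mediator:      {mediated.params['kt']:.2f} (p={mediated.pvalues['kt']:.3f})")
```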

Relevance: 100.00%

Abstract:

Purpose: The accurate estimation of total energy expenditure (TEE) is essential for providing adequate nutrition to patients treated by maintenance hemodialysis (MHD). Measurement of TEE and resting energy expenditure (REE) by direct or indirect calorimetry or doubly labeled water is complicated, time-consuming, and cumbersome in this population. Recently, a new system called the SenseWear® armband (SWA) was developed to assess TEE, physical activity, and REE. The device works by measuring body acceleration in two axes, heat production, and step counts. REE measured by indirect calorimetry and by SWA are well correlated. The aim of this study was to determine TEE, physical activity, and REE in patients on MHD using this new device. Methods and materials: Daily TEE, REE, step count, activity time, intensity of activity, and lying time were determined for 7 consecutive days in unselected stable patients on MHD and in sex-, age-, and weight-matched healthy controls (HC). Patients with malnutrition, cancer, use of immunosuppressive drugs, hypoalbuminemia (<35 g/L), or hospitalization in the last 3 months were excluded. For MHD patients, separate analyses were conducted for dialysis and non-dialysis days. Relevant parameters known to affect REE, such as BMI, albumin, pre-albumin, hemoglobin, Kt/V, CRP, bicarbonate, PTH, and TSH, were recorded. Results: Thirty patients on MHD (20 men, 10 women) and 30 HC were included. In MHD patients, age was 60.13 ± 14.97 years (mean ± SD), BMI was 25.77 ± 4.73 kg/m², and body weight was 74.65 ± 16.16 kg. There were no significant differences between the two groups. TEE was lower in MHD patients compared to HC (28.79 ± 5.51 versus 32.91 ± 5.75 kcal/kg/day; p < 0.01). Activity time was significantly lower in patients on MHD (50.7 ± 9.4 versus 101.3 ± 12.6 min; p = 0.0021), as was energy expenditure during the time of activity. MHD patients walked 4543 ± 643 versus 8537 ± 744 steps per day (p < 0.0001). Age was negatively correlated with TEE (r = -0.70) and intensity of activity (r = -0.61) in HC, but not in patients on MHD. TEE showed no difference between dialysis and non-dialysis days (29.92 ± 2.03 versus 28.44 ± 1.90 kcal/kg/day; p = NS), reflecting a lack of difference in activity (number of steps, time of physical activity) and REE. This was observed in MHD patients both older and younger than 60 years. However, age stratification appeared to influence TEE regardless of dialysis day (29.92 ± 2.07 kcal/kg/day for those <60 years old versus 27.41 ± 1.04 kcal/kg/day for those ≥60 years old), although the difference failed to reach statistical significance. Conclusion: Using the SWA, we have shown that stable patients on MHD have a lower TEE than matched HC. On average, a TEE of 28.79 kcal/kg/day, partially affected by age, was measured. This finding supports the clinical impression that it is difficult, and probably unnecessary, to provide an energy amount of 30-35 kcal/kg/day as proposed by international guidelines for this population. In addition, we documented for the first time that MHD patients exhibit reduced physical activity compared to HC. Surprisingly, there were no differences in TEE, REE, or physical activity parameters between dialysis and non-dialysis days. This observation might be explained by the physical effort patients on MHD make to reach the dialysis centre. Age per se did not influence physical activity in MHD patients, contrary to HC, reflecting the impact of comorbidities on physical activity in this group of patients.

Relevance: 100.00%

Abstract:

Interdependence is the main feature of dyadic relationships, and in recent years various statistical procedures have been proposed for quantifying and testing this social attribute in different dyadic designs. The purpose of this paper is to implement several functions for this kind of statistical test in an R package, known as nonindependence, for use by applied social researchers. A Graphical User Interface (GUI) was also developed to facilitate the use of the functions included in the package. Examples drawn from psychological research and simulated data are used to illustrate how the software works.
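One common test for nonindependence in exchangeable dyads, of the kind such packages implement, is the double-entry (pairwise) intraclass correlation. The hedged sketch below runs it on simulated dyads; the package itself is an R tool with a GUI, so this is only an illustration of the statistic.

```python
# Sketch of a double-entry intraclass correlation for exchangeable dyads.
# Dyad data are simulated with a shared dyad-level component.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_dyads = 60
shared = rng.normal(size=n_dyads)                  # dyad-level component
a = shared + rng.normal(scale=0.7, size=n_dyads)   # member A scores
b = shared + rng.normal(scale=0.7, size=n_dyads)   # member B scores

# Double-entry: correlate (A,B) stacked with (B,A) so member order is irrelevant.
r, _ = stats.pearsonr(np.concatenate([a, b]), np.concatenate([b, a]))
# The doubled sample would inflate pearsonr's own p-value, so test r
# against the true number of dyads instead.
t = r * np.sqrt((n_dyads - 2) / (1 - r**2))
p = 2 * stats.t.sf(abs(t), df=n_dyads - 2)
print(f"double-entry correlation = {r:.2f}, p = {p:.4f}")
```

A significant correlation indicates that dyad members' scores are interdependent and should not be analyzed as independent observations.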

Relevance: 100.00%

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling, and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust, and efficient modeling tools. They can find solutions for classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems for the purposes of environmental data mining, including pattern recognition, modeling, and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest to geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to software implementation. The main algorithms and models considered are: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF), and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps detect the presence of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the application of the k-nearest neighbors (k-NN) method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with a current hot topic: the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools, and guided examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland, and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil, and water pollution by radionuclides and heavy metals; the classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to create a user-friendly and easy-to-use interface.
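The GRNN at the heart of the automatic mapping chapter is essentially Nadaraya-Watson Gaussian kernel regression over spatial coordinates. The sketch below is a minimal stand-in on synthetic data, not the thesis's Machine Learning Office implementation; the kernel width sigma is the single parameter to tune.

```python
# Minimal GRNN sketch: Gaussian-kernel-weighted averaging of training
# values at query locations. Data and kernel width are illustrative.
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma=0.05):
    """Predict at query points as kernel-weighted means of training data."""
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))                 # Gaussian kernel weights
    return (w @ train_z) / (w.sum(axis=1) + 1e-12)   # weighted average per point

rng = np.random.default_rng(2)
xy = rng.uniform(size=(200, 2))                      # measurement locations
z = np.sin(3 * xy[:, 0]) + np.cos(3 * xy[:, 1])      # synthetic environmental field
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
zmap = grnn_predict(xy, z, grid)                     # automatic map on a grid
print(zmap.shape)                                    # (2500,)
```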

Relevance: 100.00%

Abstract:

Free and open source software (FOSS) seems far removed from the military field, but some technologies normally used for civilian purposes may also have military applications. Such products and technologies are called dual-use. Can FOSS and dual-use products be combined? On one hand, this kind of association clearly exists: dual-use software can be FOSS, and many examples demonstrate this duality. On the other hand, dual-use software available under free licenses raises many questions. For example, dual-use export control laws aim at stemming the proliferation of weapons of mass destruction. Dual-use export regulation in the United States (ITAR) and in Europe (regulation 428/2009) implies, as a consequence, the prohibition or regulation of software exportation, which may involve closing the source code. The issue of exported software released under free licenses therefore arises. If software is a dual-use good and serves military purposes, it may represent a danger. Through the rights granted by free licenses to run, study, redistribute, and distribute modified versions of the software, anyone can access free dual-use software. So the licenses themselves are not the origin of the risk; the risk is actually linked to the facilitated access to source code. Seen from this point of view, this goes against dual-use regulation, which allows states to control the export of these technologies. In this analysis, we discuss various legal questions and draft answers drawn from either licenses or public policies in this respect.

Relevance: 100.00%

Abstract:

The general objective of this study was to conduct a statistical analysis of the variation of weld profiles and their influence on the fatigue strength of the joint. Weld quality with respect to fatigue strength is the central concept of this thesis. The intention of the study was to establish the influence of weld geometric parameters on weld quality and fatigue strength. The effect of local geometrical variations of a non-load-carrying cruciform fillet welded joint under tensile loading was studied. Linear elastic fracture mechanics (LEFM) was used to calculate the fatigue strength of the cruciform fillet welded joints in the as-welded condition under cyclic tensile loading, for a range of weld geometries. Using extreme value statistical analysis and LEFM, an attempt was made to relate the variation of cruciform weld profile parameters, such as weld angle and weld toe radius, to the respective FAT classes.
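A generic LEFM calculation in the spirit of this analysis integrates the Paris crack-growth law over crack depth to obtain a fatigue life. The material constants, geometry factor, and crack sizes below are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of a Paris-law fatigue life integration for a weld-toe crack:
# N = integral from a0 to af of da / (C * dK^m), dK = Y * dsigma * sqrt(pi*a).
import numpy as np
from scipy.integrate import trapezoid

C, m = 3e-13, 3.0     # Paris constants (da/dN in mm/cycle, dK in N/mm^1.5); assumed
Y = 1.12              # geometry factor, assumed constant over the crack depth
dsigma = 100.0        # applied stress range in MPa (N/mm^2); assumed
a0, af = 0.1, 10.0    # initial weld-toe and final crack depths in mm; assumed

a = np.linspace(a0, af, 10_000)
dK = Y * dsigma * np.sqrt(np.pi * a)        # stress intensity factor range
life = trapezoid(1.0 / (C * dK**m), a)      # cycles accumulated over crack growth
print(f"estimated fatigue life: {life:.3e} cycles")
```

Repeating such a calculation over a distribution of weld angles and toe radii is what links the geometric variation to a fatigue class.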

Relevance: 100.00%

Abstract:

This paper presents a programming environment for supporting learning in STEM, particularly mobile robotics. It was designed to support progressive learning for people with and without previous knowledge of programming and/or robotics. The environment is multi-platform and built with open source tools. Perception, mobility, communication, navigation, and collaborative behaviour functionalities can be programmed for different mobile robots. A learner can programme robots using different programming languages and editor interfaces: a graphical programming interface (basic level), an XML-based meta-language (intermediate level), or ANSI C (advanced level). The environment translates programmes into the different languages either transparently for learners or explicitly on demand, as sketched below. Learners can access proposed challenges and example-based learning interfaces. The environment was designed for extensibility, adaptive interfaces, persistence, and low software/hardware coupling. Functionality tests were performed to verify the programming environment's specifications. UV BOT mobile robots were used in these tests.
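The translation step can be illustrated with a toy example: a tiny XML-based robot program compiled into ANSI C source. Both the XML tags and the C robot API below are invented placeholders, not the environment's actual meta-language.

```python
# Sketch of translating an XML meta-language program into C source.
# Tags and the C API (move_forward, turn) are hypothetical.
import xml.etree.ElementTree as ET

program = """
<robot-program>
  <move distance="20"/>
  <turn angle="90"/>
  <move distance="10"/>
</robot-program>
"""

def translate_to_c(xml_src):
    lines = ['#include "robot.h"', "", "int main(void) {"]
    for node in ET.fromstring(xml_src):
        if node.tag == "move":
            lines.append(f"    move_forward({node.attrib['distance']});")
        elif node.tag == "turn":
            lines.append(f"    turn({node.attrib['angle']});")
    lines += ["    return 0;", "}"]
    return "\n".join(lines)

print(translate_to_c(program))  # emits a compilable C program
```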

Relevance: 100.00%

Abstract:

This thesis looks for a correlation between results obtained through software measurement and the defects found in the programs. Existing software products are used as the test group. The work investigates whether software metrics could have been used to locate the problem areas of the software, thereby providing valuable information for software development. Measurement could be used to better allocate resources in code reviews, code integration, system testing, and scheduling; with measurement, these tasks would have more information on which to base resource allocation. The test group consists of various software products, all of which have successive releases. When a new release is developed, the previous release is used as a base on top of which new source code is written. Software measurement must therefore be able to distinguish the previous release's source code from the new source code. The software metrics used in this work are common ones, widely used in software engineering to measure various source code properties that are believed to affect fault-proneness. The purpose of this work is to study the usability of these metrics in the software environments of the test group. The practical part of the work found a correlation between some of the metrics and defects, while other metrics did not give convincing results. Using software metrics, it appears to be possible to identify the fault-prone parts of a program and thus improve the efficiency of software development. The use of software metrics in product development is justifiable, and they could potentially be used to influence software quality in future releases.
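The core analysis, rank-correlating per-module source code metrics with defect counts, can be sketched as follows. The metric values and the relationship driving the simulated defect counts are illustrative assumptions, not the study's data.

```python
# Sketch of correlating code metrics with defect counts per module.
# All values are simulated placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_modules = 40
cyclomatic = rng.integers(1, 60, size=n_modules)   # McCabe complexity
loc = rng.integers(50, 2000, size=n_modules)       # lines of code
# Simulated defects loosely driven by complexity, as the study hypothesizes.
defects = rng.poisson(0.05 * cyclomatic + 0.001 * loc)

for name, metric in [("cyclomatic", cyclomatic), ("LOC", loc)]:
    rho, p = spearmanr(metric, defects)
    print(f"{name}: rho = {rho:.2f}, p = {p:.4f}")
```

Modules ranking high on metrics that correlate with defects are the candidates for extra review and testing effort.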

Relevance: 100.00%

Abstract:

Technological development brings more and more complex systems to the consumer markets. The time required to bring a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power, and distributed simulation can be used to meet these demands. Distributed simulation has its own problems, however. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, the communication protocols, the partitioning and distribution of the problem, the capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused capacity in the form of idle workstations. Using this computing power for distributed simulation requires the simulation to adapt to a changing load situation: all or part of the simulation work must be removed from a workstation when its owner wishes to use it again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied and shown to perform better than no load balancing, and different approaches to load balancing are discussed.
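The load-balancing decision discussed above can be sketched as a simple policy: when a workstation's external load rises past a threshold (its owner has returned), its simulation partitions migrate to the least-loaded remaining node. The data structures and threshold are invented illustrations; Diworse's actual mechanism is more involved.

```python
# Hedged sketch of a reclaim-and-migrate load-balancing policy.
LOAD_THRESHOLD = 0.5  # external CPU load above which a node is "reclaimed"

nodes = {  # node -> (external load, assigned simulation partitions)
    "ws1": (0.9, ["partA", "partB"]),   # owner came back: must be vacated
    "ws2": (0.1, ["partC"]),
    "ws3": (0.05, []),
}

def rebalance(nodes):
    for name, (load, parts) in list(nodes.items()):
        if load > LOAD_THRESHOLD and parts:
            # Move each partition to the currently least-loaded other node.
            for part in list(parts):
                target = min((n for n in nodes if n != name),
                             key=lambda n: (nodes[n][0], len(nodes[n][1])))
                nodes[target][1].append(part)
                parts.remove(part)
    return nodes

print(rebalance(nodes))  # ws1's partitions end up on the idle node ws3
```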

Relevance: 100.00%

Abstract:

This work reviews temporal and stochastic software reliability models and examines a few models in practice. The theoretical part of the work contains the key definitions and metrics used in describing and assessing software reliability, as well as descriptions of the models themselves. Two groups of software reliability models are presented. The first group consists of risk-based (hazard rate) models. The second group comprises models based on fault "seeding" and fault significance. The empirical part of the work contains descriptions of the experiments and their results. The experiments were carried out using three models belonging to the first group: the Jelinski-Moranda model, the first geometric model, and a simple exponential model. The purpose of the experiments was to study how the distribution of the input data affects the models' performance and how sensitive the models are to changes in the amount of input data. The Jelinski-Moranda model proved the most sensitive to the distribution, due to convergence problems, and the first geometric model the most sensitive to changes in the amount of data.
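The Jelinski-Moranda fit used in the experiments can be sketched by profiling out the per-fault hazard φ, which has a closed-form MLE given the total fault count N, and searching over N numerically. The failure data below are simulated placeholders; the convergence problems noted above appear when the data show little reliability growth (the estimate of N then diverges).

```python
# Sketch of a Jelinski-Moranda fit: hazard before failure i is phi*(N - i + 1).
import numpy as np
from scipy.optimize import minimize_scalar

t = np.array([7.0, 11.0, 8.0, 10.0, 15.0, 22.0, 20.0, 25.0, 30.0, 40.0])  # inter-failure times (simulated)
n = len(t)
i = np.arange(1, n + 1)

def profile_neg_loglik(N):
    """Negative JM log-likelihood with phi profiled out: phi_hat = n / sum(k*t)."""
    k = N - i + 1                      # faults remaining before failure i
    phi = n / np.sum(k * t)
    return -(n * np.log(phi) + np.log(k).sum() - phi * np.sum(k * t))

# Treat N as continuous for the search; it must exceed the n failures seen so far.
res = minimize_scalar(profile_neg_loglik, bounds=(n + 1e-6, 10 * n), method="bounded")
print(f"estimated total faults N = {res.x:.1f}, estimated remaining = {res.x - n:.1f}")
```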

Relevance: 100.00%

Abstract:

BACKGROUND: Blood pressure (BP) is known to aggregate in families. Yet heritability estimates are population-specific, and no Swiss data have been published so far. We estimated the heritability of ambulatory and office BP in a Swiss population-based sample. METHODS: The Swiss Kidney Project on Genes in Hypertension is a population-based family study focusing on BP genetics. Office and ambulatory BP were measured in 1009 individuals from 271 nuclear families. Heritability was estimated for SBP, DBP, and pulse pressure using a maximum likelihood method implemented in the Statistical Analysis in Genetic Epidemiology software. RESULTS: The 518 women and 491 men included in this analysis had a mean (±SD) age of 48.3 (±17.4) and 47.3 (±17.7) years, and a mean BMI of 23.8 (±4.2) and 25.9 (±4.1) kg/m², respectively. Narrow-sense heritability estimates (±standard error) for ambulatory SBP, DBP, and pulse pressure were 0.37 ± 0.07, 0.26 ± 0.07, and 0.29 ± 0.07 for 24-h BP; 0.39 ± 0.07, 0.28 ± 0.07, and 0.27 ± 0.07 for day BP; and 0.25 ± 0.07, 0.20 ± 0.07, and 0.30 ± 0.07 for night BP, respectively (all P < 0.001). Heritability estimates for office SBP, DBP, and pulse pressure were 0.21 ± 0.08, 0.25 ± 0.08, and 0.18 ± 0.07 (all P < 0.01). CONCLUSIONS: We found significant heritability estimates for both ambulatory and office BP in this Swiss population-based study. Our findings justify the ongoing search for the genetic determinants of BP.
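The study estimated heritability with maximum-likelihood variance components in dedicated genetic-epidemiology software. As a textbook-style stand-in, the sketch below recovers a narrow-sense heritability from simulated single-parent/offspring pairs, where the expected regression slope is h²/2; all data are simulated and the simplification ignores shared environment and dominance.

```python
# Hedged sketch of narrow-sense heritability from parent-offspring pairs:
# cov(parent, child) = (h2/2) * variance, so h2 ~= 2 * regression slope.
import numpy as np

rng = np.random.default_rng(4)
n_pairs = 1000
h2 = 0.37  # simulation target, cf. the 24-h SBP estimate above

# Parent and child share half of the additive genetic variance.
g_shared = rng.normal(scale=np.sqrt(h2 / 2), size=n_pairs)
parent = (g_shared + rng.normal(scale=np.sqrt(h2 / 2), size=n_pairs)
          + rng.normal(scale=np.sqrt(1 - h2), size=n_pairs))
child = (g_shared + rng.normal(scale=np.sqrt(h2 / 2), size=n_pairs)
         + rng.normal(scale=np.sqrt(1 - h2), size=n_pairs))

slope = np.cov(parent, child)[0, 1] / np.var(parent, ddof=1)
print(f"h2 estimate = {2 * slope:.2f}")  # approx 0.37 in expectation
```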