918 results for Open Source Software


Relevance:

90.00%

Publisher:

Abstract:

In recent years, systems engineering has become one of the major research domains. The complexity of systems has increased constantly, and Cyber-Physical Systems (CPS) are now a category of particular interest: these are systems composed of a cyber part (computer-based algorithms) that monitors and controls physical processes. Their development and simulation are both complex because of the importance of the interaction between the cyber and the physical entities: many models written in different languages must exchange information with one another. Normally an orchestrator takes care of simulating the models and exchanging information among them. This orchestrator is developed manually, which is tedious, lengthy work. Our proposal is to generate the orchestrator automatically through Co-Modeling, i.e. by modeling the coordination. Before achieving this ultimate goal, it is important to understand the mechanisms and de facto standards that could be used in a co-modeling framework. I therefore studied a technology used for co-simulation in industry: FMI. To better understand the FMI standard, I implemented an automatic export, in the FMI format, of models produced with an existing discrete-modeling tool: TimeSquare. I also developed a simple physical model in the existing open source OpenModelica tool. Finally, I studied how an orchestrator works by developing a simple one; this will be useful in the future for generating an orchestrator automatically.
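The role of such an orchestrator (the "master" in FMI terminology) can be illustrated with a minimal fixed-step co-simulation loop. The sketch below is a hypothetical simplification in Python, not the actual FMI or TimeSquare API: two toy components stand in for FMUs, and the master advances them in lock-step while exchanging their inputs and outputs.

```python
# Minimal sketch of a fixed-step co-simulation master (orchestrator).
# The FMU-like components here are hypothetical stand-ins, not the real FMI API.

class Plant:
    """Physical part: integrates velocity into position (explicit Euler)."""
    def __init__(self):
        self.position = 0.0
        self.velocity = 0.0  # input, set by the master

    def do_step(self, dt):
        self.position += self.velocity * dt

class Controller:
    """Cyber part: proportional controller driving position to a setpoint."""
    def __init__(self, setpoint, gain):
        self.setpoint = setpoint
        self.gain = gain
        self.position = 0.0  # input, set by the master
        self.velocity = 0.0  # output

    def do_step(self, dt):
        self.velocity = self.gain * (self.setpoint - self.position)

def orchestrate(plant, controller, dt, t_end):
    """Master loop: exchange values, then step every component by dt."""
    t = 0.0
    while t < t_end:
        # 1) propagate outputs to inputs
        controller.position = plant.position
        plant.velocity = controller.velocity
        # 2) advance all components by one communication step
        controller.do_step(dt)
        plant.do_step(dt)
        t += dt
    return plant.position

final = orchestrate(Plant(), Controller(setpoint=1.0, gain=2.0), dt=0.1, t_end=5.0)
```

The one-step delay in the value exchange is characteristic of simple co-simulation masters; a generated orchestrator would automate exactly this wiring and stepping for arbitrary model sets.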

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: Physiologic data display is essential to decision making in critical care. Current displays echo first-generation hemodynamic monitors dating to the 1970s and have not kept pace with new insights into physiology or the needs of clinicians who must make progressively more complex decisions about their patients. The effectiveness of any redesign must be tested before deployment. Tools that compare current displays with novel presentations of processed physiologic data are required. Regenerating conventional physiologic displays from archived physiologic data is an essential first step. OBJECTIVES: The purposes of the study were to (1) describe the SSSI (single sensor single indicator) paradigm that is currently used for physiologic signal displays, (2) identify and discuss possible extensions and enhancements of the SSSI paradigm, and (3) develop a general approach and a software prototype to construct such "extended SSSI displays" from raw data. RESULTS: We present the Multi Wave Animator (MWA) framework, a set of open source MATLAB (MathWorks, Inc., Natick, MA, USA) scripts aimed at creating dynamic visualizations (e.g., video files in AVI format) of patient vital signs recorded from bedside (intensive care unit or operating room) monitors. Multi Wave Animator creates animations in which vital signs are displayed to mimic their appearance on current bedside monitors. The source code of MWA is freely available online together with a detailed tutorial and sample data sets.
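Regenerating a monitor-style display from archived data amounts to redrawing a sliding window of recent samples frame by frame. The sketch below is a hypothetical simplification in Python (MWA itself is a set of MATLAB scripts): it splits a recorded signal into fixed-width windows, one per video frame.

```python
# Sketch (not MWA's actual MATLAB code): split an archived vital-sign
# recording into fixed-width windows, the way a monitor-style animation
# redraws a sweeping view of the most recent samples in each frame.

def monitor_frames(samples, window, step):
    """Return successive windows of `samples`; each window is one frame."""
    frames = []
    for start in range(0, len(samples) - window + 1, step):
        frames.append(samples[start:start + window])
    return frames

# 10 s of a 100 Hz signal, 2 s window, advancing 0.5 s per frame
signal = [i % 100 for i in range(1000)]   # placeholder waveform
frames = monitor_frames(signal, window=200, step=50)
```

Each frame would then be rendered (axes, traces, numeric indicators) and the rendered frames concatenated into a video file.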

Relevance:

90.00%

Publisher:

Abstract:

Genome-wide association studies (GWAS) are used to discover genes underlying complex, heritable disorders for which less powerful study designs have failed in the past. The number of GWAS has skyrocketed recently, with findings reported in top journals and the mainstream media. Microarrays are the genotype calling technology of choice in GWAS, as they permit exploration of more than a million single nucleotide polymorphisms (SNPs) simultaneously. The starting point for the statistical analyses used by GWAS to determine association between loci and disease are genotype calls (AA, AB, or BB). However, the raw data, microarray probe intensities, are heavily processed before arriving at these calls. Various sophisticated statistical procedures have been proposed for transforming raw data into genotype calls. We find that variability in microarray output quality across different SNPs, different arrays, and different sample batches has substantial influence on the accuracy of genotype calls made by existing algorithms. By failing to account for these sources of variability, GWAS run the risk of compromising the quality of reported findings. In this paper we present solutions based on a multi-level mixed model. A software implementation of the method described in this paper is available as free and open source code in the crlmm R/Bioconductor package.
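To make the genotype-calling step concrete, here is a deliberately simplified illustration, not crlmm's algorithm: it calls AA/AB/BB from the log-ratio of allele A and B probe intensities with fixed thresholds, whereas crlmm fits a multi-level mixed model that accounts for SNP-, array-, and batch-level variability.

```python
import math

# Toy genotype caller (NOT crlmm's algorithm): call AA/AB/BB from the
# log2 ratio of allele A vs. allele B probe intensities using fixed
# thresholds. Real callers model SNP-, array- and batch-level variability.

def call_genotype(a_intensity, b_intensity, threshold=1.0):
    m = math.log2(a_intensity / b_intensity)
    if m > threshold:
        return "AA"   # allele A dominates
    if m < -threshold:
        return "BB"   # allele B dominates
    return "AB"       # heterozygous: both alleles present

# illustrative probe intensities for three samples at one SNP
calls = [call_genotype(a, b) for a, b in [(4000, 300), (1500, 1400), (250, 3600)]]
```

The paper's point is precisely that such fixed decision boundaries drift with array and batch quality, which is why the thresholds must instead be modeled per SNP and per batch.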

Relevance:

90.00%

Publisher:

Abstract:

Submicroscopic changes in chromosomal DNA copy number are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of base pairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects, as the logistics of preparing DNA and processing thousands of arrays often involve multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal, without requiring training data. We illustrate these ideas using a study of bipolar disorder and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets where as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position.
The software is open source and implemented in the R package crlmm, available from Bioconductor (http://www.bioconductor.org).
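The core intuition behind batch adjustment can be shown with a deliberately simplified sketch, not the crlmm model itself: estimate a per-batch shift at a locus from each batch's median intensity and remove it before estimating copy number.

```python
from statistics import median

# Simplified illustration of batch-effect removal at a single locus.
# The crlmm model is far richer: it estimates batch-specific background
# and signal parameters and shrinks locus-level variance estimates.

def center_by_batch(intensities, batches):
    """Subtract each batch's median so batches share a common center."""
    per_batch = {}
    for x, b in zip(intensities, batches):
        per_batch.setdefault(b, []).append(x)
    batch_medians = {b: median(v) for b, v in per_batch.items()}
    return [x - batch_medians[b] for x, b in zip(intensities, batches)]

# Batch "p2" runs systematically ~100 units hotter than batch "p1"
values  = [500, 510, 490, 600, 610, 590]
batches = ["p1", "p1", "p1", "p2", "p2", "p2"]
adjusted = center_by_batch(values, batches)
```

Without such an adjustment, the 100-unit plate shift in this toy example would be indistinguishable from a genuine copy number change in the second batch.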

Relevance:

90.00%

Publisher:

Abstract:

The municipality of San Juan La Laguna, Guatemala is home to approximately 5,200 people and located on the western side of the Lake Atitlán caldera. Steep slopes surround all but the eastern side of San Juan. The Lake Atitlán watershed is susceptible to many natural hazards, but the most predictable are the landslides that can occur annually with each rainy season, especially during high-intensity events. Hurricane Stan hit Guatemala in October 2005; the resulting flooding and landslides devastated the Atitlán region. Locations of landslide and non-landslide points were obtained from field observations and orthophotos taken following Hurricane Stan. This study used data from multiple attributes, at every landslide and non-landslide point, and applied different multivariate analyses to optimize a model for landslide prediction during high-intensity precipitation events like Hurricane Stan. The attributes considered in this study are: geology, geomorphology, distance to faults and streams, land use, slope, aspect, curvature, plan curvature, profile curvature and topographic wetness index. The attributes were pre-evaluated for their ability to predict landslides using four different attribute evaluators, all available in the open source data mining software Weka: filtered subset, information gain, gain ratio and chi-squared. Three multivariate algorithms (decision tree J48, logistic regression and BayesNet) were optimized for landslide prediction using different attributes. The following statistical parameters were used to evaluate model accuracy: precision, recall, F-measure and area under the receiver operating characteristic (ROC) curve. The algorithm BayesNet yielded the most accurate model and was used to build a probability map of landslide initiation points. The probability map developed in this study was also compared to the results of a bivariate landslide susceptibility analysis conducted for the watershed, encompassing Lake Atitlán and San Juan.
Landslides from Tropical Storm Agatha 2010 were used to independently validate this study’s multivariate model and the bivariate model. The ultimate aim of this study is to share the methodology and results with municipal contacts from the author's time as a U.S. Peace Corps volunteer, to facilitate more effective future landslide hazard planning and mitigation.
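As a reminder of how the evaluation metrics named above relate to one another, here is a short sketch with made-up confusion counts, not this study's actual results:

```python
# Standard classifier metrics used to compare the landslide models.
# The counts below are illustrative, not values from the study.

def precision_recall_f(tp, fp, fn):
    """Compute precision, recall and F-measure from confusion counts."""
    precision = tp / (tp + fp)            # fraction of predicted slides that are real
    recall = tp / (tp + fn)               # fraction of real slides that were predicted
    f_measure = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f_measure

# e.g. 80 landslide points correctly predicted, 20 false alarms, 20 missed
p, r, f = precision_recall_f(tp=80, fp=20, fn=20)
```

The area under the ROC curve complements these by summarizing the trade-off between recall and false-alarm rate across all probability thresholds, which is why it is reported alongside the threshold-dependent metrics.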

Relevance:

90.00%

Publisher:

Abstract:

The achievable conveying speed of vibratory conveyors depends decisively on the motion function of the conveying element. For targeted simulation of such machines using the discrete element method (DEM), the geometrically meshed models of the conveying element must be driven with motion functions of practical relevance. This article describes the integration of these motion functions into the open source DEM software LIGGGHTS. During the simulation process, the motion of meshed CAD models is realized through trigonometric series.
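A motion function expressed as a trigonometric (Fourier) series can be sketched as follows; the coefficients and frequencies are illustrative, not values from the article:

```python
import math

# Sketch: evaluate a conveying-element displacement given as a truncated
# trigonometric series  s(t) = sum_k a_k * sin(2*pi*k*f*t + phi_k).
# Amplitudes, phases and frequency here are illustrative only.

def displacement(t, base_freq, harmonics):
    """harmonics: list of (amplitude, phase) pairs for k = 1, 2, ..."""
    return sum(
        a * math.sin(2 * math.pi * k * base_freq * t + phi)
        for k, (a, phi) in enumerate(harmonics, start=1)
    )

# fundamental at 50 Hz with a small second harmonic
s = displacement(t=0.005, base_freq=50.0, harmonics=[(1.0, 0.0), (0.2, 0.0)])
```

In a DEM run, such a function would be evaluated at every time step to update the position of the meshed CAD geometry before the particle contacts are resolved.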

Relevance:

90.00%

Publisher:

Abstract:

Content Distribution Networks are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite the area's maturity, new paradigms and architecture models are still being developed. Cloud Computing, on the other hand, is a more recent concept which has expanded extremely quickly, with new services being regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and the development of an open source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN as a Service paradigm. We describe our experience with integration of the CDNaaS framework in a cloud environment, as a service for enterprise users. We emphasize the flexibility and elasticity of such a model, with each CDN instance being delivered on demand and associated with personalized caching policies as well as an optimized choice of Points of Presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project, as it is tasked with delivering different types of multimedia content to many thousands of geographically distributed users. It integrates seamlessly into the MCN service life-cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.

Relevance:

90.00%

Publisher:

Abstract:

Detecting bugs as early as possible plays an important role in ensuring software quality before shipping. We argue that mining previous bug fixes can produce good knowledge about why bugs happen and how they are fixed. In this paper, we mine the change history of 717 open source projects to extract bug-fix patterns. We also manually inspect many of the bugs we found to get insights into the contexts and reasons behind those bugs. For instance, we found out that missing null checks and missing initializations are very recurrent and we believe that they can be automatically detected and fixed.
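How such patterns surface in fix commits can be sketched with a toy diff scanner (a hypothetical illustration, not the paper's actual mining pipeline): added lines that introduce a null check are counted as instances of the "missing null check" pattern.

```python
import re

# Toy illustration (not the paper's mining pipeline): count how often a
# fix commit's added lines introduce a null check, one of the recurrent
# bug-fix patterns mentioned above. Diff lines starting with "+" are
# additions; the regex matches Java-style null comparisons.

NULL_CHECK = re.compile(r"if\s*\(\s*\w+\s*[!=]=\s*null\s*\)")

def count_null_check_additions(diff_lines):
    return sum(
        1
        for line in diff_lines
        if line.startswith("+") and NULL_CHECK.search(line)
    )

fix_diff = [
    "+ if (user != null) {",
    "+     user.save();",
    "+ }",
    "- user.save();",
]
hits = count_null_check_additions(fix_diff)
```

Run over the change history of many projects, counts like these reveal which fix patterns recur often enough to justify automatic detection and repair.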

Relevance:

90.00%

Publisher:

Abstract:

Software developers are often unsure of the exact name of the method they need to use to invoke the desired behavior in a given context. This results in a process of searching for the correct method name in documentation, which can be lengthy and distracting to the developer. We can decrease the method search time by enhancing the documentation of a class with its most frequently used methods. Usage frequency data for methods is gathered by analyzing other projects from the same ecosystem, written in the same language and sharing dependencies. We implemented a proof of concept of the approach for Pharo Smalltalk and Java. In Pharo Smalltalk, methods are commonly searched for using a code browser tool called "Nautilus", and in Java using a web browser displaying HTML-based documentation (Javadoc). We developed plugins for both browsers and gathered method usage data from open source projects, in order to increase developer productivity by reducing method search time. A small initial evaluation has been conducted, showing promising results in improving developer productivity.
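The frequency-gathering step can be sketched as a simple count of method invocations across an ecosystem's source files. This is a hypothetical simplification of the approach, shown in Python rather than Pharo or Java, and the regex is a rough stand-in for real call-site parsing:

```python
import re
from collections import Counter

# Hypothetical sketch of the frequency-gathering step: count method
# calls by name across a corpus of source snippets, then rank them so
# documentation can surface the most-used methods of a class first.

CALL = re.compile(r"\.(\w+)\s*\(")   # crude matcher for ".methodName("

def method_usage(sources):
    counts = Counter()
    for src in sources:
        counts.update(CALL.findall(src))
    return counts

corpus = [
    "list.add(x); list.add(y); list.size();",
    "map.get(k); list.add(z);",
]
top = method_usage(corpus).most_common(2)
```

A documentation plugin would then reorder or annotate a class's method list using these counts, so the methods developers most often need appear first.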

Relevance:

90.00%

Publisher:

Abstract:

When firms contribute to open source projects, they in fact invest in public goods which may be used by everyone, even by their competitors. This seemingly paradoxical behavior can be explained by the model of private-collective innovation, in which private investors participate in collective action. Previous literature has shown that companies benefit through the production process, which provides them with unique incentives such as learning and reputation effects. By contributing to open source projects, firms are able to build a network of external individuals and organizations participating in the creation and development of the software. As will be shown in this doctoral dissertation, firm-sponsored communities involve the formation of interorganizational relationships which may eventually become a source of sustained competitive advantage. However, managing a largely independent open source community is a challenging balancing act between exerting control to appropriate the value created, and remaining open in order to gain and preserve credibility and motivate external contributions. Therefore, this dissertation, consisting of an introductory chapter and three separate research papers, analyzes characteristics of firm-driven open source communities, identifies the reasons why and the mechanisms by which companies facilitate the creation of such networks, and shows how firms can benefit most from their communities.

Relevance:

90.00%

Publisher:

Abstract:

This project continues genetic criticism projects that have been carried out, or are under way, at the Research Secretariat of the Faculty of Humanities of UNaM, whose object of study is manuscripts of the provincial literature. The work of this project involves a network of initial theoretical, critical and methodological agreements, the tracing and identification of documents in the region, and the arrangement of loans with the current holders of the manuscripts, to which is added an interdisciplinary dialogue between genetic criticism and computer science. In light of this dialogue, the project proposes in this first stage to promote the following actions: a) develop an institutional web site providing online access to the archives of regional writers being studied at UNaM; b) survey the manuscript archives that are currently scattered and invisible to research, in order to recover them and encourage their study; c) design and build a database and a digital repository of manuscripts, using Open Source software for this task; d) lay the groundwork for a feasibility study on implementing a Text Mining process that automates the retrieval of relevant information, categorizes the documents, and groups them according to common characteristics; e) strengthen institutional ties with other existing projects in Argentina (UNLP), France (CRLA-Archivos), Belgium (UCLouvain) and Spain (Universidad de Castilla-La Mancha), and with UNNE and UNLa, with whom we already have a collaboration agreement in data mining.

Relevance:

90.00%

Publisher:

Abstract:

This project continues genetic criticism projects that have been carried out, or are under way, at the Research Secretariat of the Faculty of Humanities of UNaM, whose object of study is manuscripts of the provincial literature. The work of this project involves a network of initial theoretical, critical and methodological agreements, the tracing and identification of documents in the region, and the arrangement of loans with the current holders of the manuscripts, to which is added an interdisciplinary dialogue between genetic criticism and computer science. The project proposes in this first stage to promote the following actions: a) develop an institutional web site providing online access to the archives of regional writers being studied at UNaM; b) survey the manuscript archives that are currently scattered and invisible to research, in order to recover them and encourage their study; c) design and build a database and a digital repository of manuscripts, using Open Source software for this task; d) lay the groundwork for a feasibility study on implementing a Text Mining process that automates the retrieval of relevant information, categorizes the documents, and groups them according to common characteristics; e) strengthen institutional ties with other existing projects in Argentina (UNLP), and with UNNE and UNLa, with whom we already have a collaboration agreement in data mining, as well as with France (CRLA-Archivos), Belgium (UCLouvain) and Spain (Universidad de Castilla-La Mancha).

Relevance:

90.00%

Publisher:

Abstract:

These data are provided to allow users to reproduce the results of an open source tool entitled 'automated Accumulation Threshold computation and RIparian Corridor delineation (ATRIC)'.

Relevance:

90.00%

Publisher:

Abstract:

The goal of the project is to design and deploy, on a server at the ETSITGC, a Spatial Data Infrastructure (SDI) that provides a single point of access to geospatial information associated with the development cooperation project "Comunidades Rurales del Milenio", so that the different researchers can access it publicly, sharing the geographic and technical information resources associated with the programme. This SDI, conforming to the OGC specifications, will be built exclusively with open source tools, and will be able to be updated and extended later without requiring any proprietary software. In addition, a large volume of information resources on Nicaragua is incorporated as a first installment of the data bank associated with the project, with metadata following the current ISO 19115 standard (NEM profile v1.1).

Relevance:

90.00%

Publisher:

Abstract:

Publishing and sharing geographic data produced by novice users (neogeography) under geographic standards (OGC, ISO TC211) faces a nearly insurmountable barrier: the need for infrastructure (hardware, software and communications). Initiatives such as OpenStreetMap (OSM), ikiMap, Tinymap, TargetMap and GeoNode make it possible for any user to share their data voluntarily. Each of them suffers from certain limitations that prevent it from covering the full spectrum of needs and from scaling as the number of users of the system grows. This document presents a solution under development, based on Open Source, that tries to overcome some of the limitations detected, specifically the scalability of solutions based on GeoServer and MapServer, as well as other limitations arising from the variety of data formats that can be shared.