37 results for Open Source (OS)
Abstract:
Seaside is the open source framework of choice for developing sophisticated and dynamic web applications. Seaside uses the power of objects to master the web. With Seaside, building web applications is as simple as building desktop applications. Seaside lets you build highly dynamic and interactive web applications. Seaside supports agile development through interactive debugging and unit testing. Seaside is based on Smalltalk, a proven and robust language implemented by different vendors. Seaside is now available for all the major Smalltalk dialects, including Pharo, Squeak, GNU Smalltalk, Cincom Smalltalk, GemStone Smalltalk, and VA Smalltalk.
Abstract:
Background: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions from simulations without computing likelihoods. Results: Here we present ABCtoolbox, a series of open source programs for performing Approximate Bayesian Computation (ABC). It implements various ABC algorithms, including rejection sampling, MCMC without likelihood, a particle-based sampler, and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can interact with most simulation and summary-statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates, and to find that males show smaller population sizes but much higher levels of migration than females. Conclusion: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from sampling parameters from prior distributions, through data simulation, computation of summary statistics, estimation of posterior distributions, model choice, and validation of the estimation procedure, to visualization of the results.
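For readers unfamiliar with the method, below is a minimal sketch of the simplest algorithm listed above, ABC rejection sampling, applied to a toy normal-mean problem. The prior, summary statistic, and tolerance are illustrative assumptions and do not reflect ABCtoolbox's actual interface.

```python
import numpy as np

def abc_rejection(observed_summary, prior_sampler, simulate, summarize,
                  n_draws=50_000, tolerance=0.1):
    """Keep prior draws whose simulated summary falls within `tolerance`
    of the observed summary; the kept draws approximate the posterior."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()          # draw a parameter from the prior
        data = simulate(theta)           # simulate data under the model
        if abs(summarize(data) - observed_summary) <= tolerance:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a normal distribution with known variance.
rng = np.random.default_rng(0)
posterior = abc_rejection(
    observed_summary=2.0,                               # observed sample mean
    prior_sampler=lambda: rng.uniform(-5, 5),           # uniform prior on the mean
    simulate=lambda mu: rng.normal(mu, 1.0, size=50),   # model simulator
    summarize=np.mean,                                  # summary statistic
)
print(posterior.mean(), posterior.std())
```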
Abstract:
We recently reported on the Multi Wave Animator (MWA), a novel open-source tool with the capability to recreate continuous physiologic signals from archived numerical data and present them as they appeared on the patient monitor. In this report, we demonstrate for the first time the power of this technology in a real clinical case, an intraoperative cardiopulmonary arrest following reperfusion of a liver transplant graft. Using the MWA, we animated hemodynamic and ventilator data acquired before, during, and after cardiac arrest and resuscitation. This report is accompanied by an online video that shows the most critical phases of the cardiac arrest and resuscitation and provides a basis for analysis and discussion. The video is extracted from a 33-min, uninterrupted recording of the cardiac arrest and resuscitation, which is available online. The unique strength of the MWA, its capability to accurately present discrete and continuous data in a format familiar to clinicians, allowed us this rare glimpse into the events leading to an intraoperative cardiac arrest. Because of its ability to recreate and replay clinical events, this tool should be of great interest to medical educators, researchers, and clinicians involved in quality assurance and patient safety.
Abstract:
BACKGROUND: Despite recent algorithmic and conceptual progress, the stoichiometric network analysis of large metabolic models remains a computationally challenging problem. RESULTS: SNA is an interactive, high-performance toolbox for analysing the possible steady-state behaviour of metabolic networks by computing the generating and elementary vectors of their flux and conversion cones. It also supports analysing the steady states by linear programming. The toolbox is implemented mainly in Mathematica and returns numerically exact results. It is available under an open source license from: http://bioinformatics.org/project/?group_id=546. CONCLUSION: Thanks to its performance and modular design, SNA is demonstrably useful in analysing genome-scale metabolic networks. Further, the integration into Mathematica provides a very flexible environment for the subsequent analysis and interpretation of the results.
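As an illustration of the steady-state constraint that such linear-programming analyses build on (SNA itself runs in Mathematica; the toy network, objective, and flux bounds below are assumptions made for the example), here is a minimal sketch using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions). Columns are the
# reactions A_in, A -> B, B_out; rows are the internal metabolites A and B.
S = np.array([
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
])

# A steady state requires S @ v = 0. Here we maximize flux through B_out
# subject to 0 <= v_i <= 10 (irreversible reactions with an arbitrary cap).
c = np.array([0, 0, -1])   # linprog minimizes, so negate the objective
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=[(0, 10)] * 3)
print("optimal steady-state flux vector:", res.x)   # [10, 10, 10]
```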
Abstract:
Recovering the architecture is the first step towards reengineering a software system. Many reverse engineering tools use top-down exploration as a way of providing a visual and interactive process for architecture recovery. During the exploration process, the user navigates through various views on the system by choosing from several exploration operations. Although some sequences of these operations lead to views which, from the architectural point of view, are more relevant than others, current tools do not provide a way of predicting which exploration paths are worth taking and which are not. In this article we propose a set of package patterns which are used for augmenting the exploration process with information about the worthiness of the various exploration paths. The patterns are defined based on the internal package structure and on the relationships between the package and the other packages in the system. To validate our approach, we verify the relevance of the proposed patterns for real-world systems by analyzing their frequency of occurrence in six open-source software projects.
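As a rough sketch of how dependency counts could feed such patterns (the thresholds and pattern names below are invented for the example and are not the paper's catalogue):

```python
from collections import Counter

def classify_package(internal_deps, incoming, outgoing):
    """Assign a crude pattern label from a package's dependency profile."""
    if incoming > 0 and outgoing == 0:
        return "provider"        # only used by other packages
    if outgoing > 0 and incoming == 0:
        return "consumer"        # only uses other packages
    if internal_deps > incoming + outgoing:
        return "self-contained"  # mostly internal structure
    return "connector"           # mediates between other packages

# Frequency of occurrence of each pattern across a system (toy data):
packages = {"core": (12, 9, 1), "ui": (3, 0, 7), "util": (1, 11, 0)}
print(Counter(classify_package(*p) for p in packages.values()))
```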
Abstract:
Radiation metabolomics employing mass spectral technologies represents a plausible means of high-throughput, minimally invasive radiation biodosimetry. A simplified metabolomics protocol is described that employs ubiquitous gas chromatography-mass spectrometry and open source software, including the random forests machine learning algorithm, to uncover latent biomarkers of 3 Gy gamma radiation in rats. Urine was collected from six male Wistar rats and six sham-irradiated controls for 7 days: 4 prior to irradiation and 3 after irradiation. Water and food consumption, urine volume, body weight, and sodium, potassium, calcium, chloride, phosphate and urea excretion showed major effects from exposure to gamma radiation. The metabolomics protocol uncovered several urinary metabolites that were significantly up-regulated (glyoxylate, threonate, thymine, uracil, p-cresol) and down-regulated (citrate, 2-oxoglutarate, adipate, pimelate, suberate, azelaate) as a result of radiation exposure. Thymine and uracil were shown to derive largely from thymidine and 2'-deoxyuridine, which are known radiation biomarkers in the mouse. The radiation metabolomic phenotype in rats appeared to derive from oxidative stress and effects on kidney function. Gas chromatography-mass spectrometry is a promising platform on which to develop the field of radiation metabolomics further and to assist in the design of instrumentation for use in detecting biological consequences of environmental radiation release.
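The biomarker-discovery step can be pictured with a small sketch: train a random forest to separate irradiated from control profiles and rank features by importance. The data below are synthetic stand-ins for GC-MS peak intensities, and the sketch is not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_rats, n_metabolites = 12, 60                 # 6 irradiated + 6 sham controls
X = rng.normal(size=(n_rats, n_metabolites))   # stand-in for GC-MS intensities
y = np.array([1] * 6 + [0] * 6)                # 1 = irradiated, 0 = sham
X[y == 1, 0] += 2.0                            # plant metabolite 0 as a biomarker

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Rank candidate biomarkers by their contribution to class separation.
ranking = np.argsort(forest.feature_importances_)[::-1]
print("top candidate metabolite indices:", ranking[:5])
```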
Abstract:
BACKGROUND: Gene expression analysis has emerged as a major biological research area, with real-time quantitative reverse transcription PCR (RT-QPCR) being one of the most accurate and widely used techniques for expression profiling of selected genes. In order to obtain results that are comparable across assays, a stable normalization strategy is required. In general, the normalization of PCR measurements between different samples uses one to several control genes (e.g. housekeeping genes), from which a baseline reference level is constructed. Thus, the choice of the control genes is of utmost importance, yet there is no generally accepted standard technique for screening a large number of candidates and identifying the best ones. RESULTS: We propose a novel approach for scoring and ranking candidate genes for their suitability as control genes. Our approach relies on publicly available microarray data and allows the combination of multiple data sets originating from different platforms and/or representing different pathologies. The use of microarray data allows the screening of tens of thousands of genes, producing very comprehensive lists of candidates. We also provide two lists of candidate control genes: one which is breast cancer-specific and one with more general applicability. Two genes from the breast cancer list which had not previously been used as control genes are identified and validated by RT-QPCR. Open source R functions are available at http://www.isrec.isb-sib.ch/~vpopovic/research/ CONCLUSION: We proposed a new method for identifying candidate control genes for RT-QPCR which was able to rank thousands of genes according to some predefined suitability criteria, and we applied it to the case of breast cancer. We also empirically showed that translating the results from the microarray to the PCR platform was achievable.
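One simple way to picture such a ranking is to score each gene by its expression stability across samples; the coefficient-of-variation criterion below is an illustrative stand-in for the paper's actual scoring scheme, and the data are synthetic.

```python
import numpy as np

def stability_scores(expr):
    """Coefficient of variation (std / mean) per gene across samples;
    lower means more stable, hence a better control-gene candidate."""
    return expr.std(axis=1) / np.abs(expr.mean(axis=1))

rng = np.random.default_rng(0)
expr = rng.lognormal(mean=5, sigma=1, size=(10_000, 80))  # genes x samples
expr[42] = rng.lognormal(mean=5, sigma=0.05, size=80)     # plant one stable gene

ranked = np.argsort(stability_scores(expr))
print("most stable candidate control genes:", ranked[:5])  # gene 42 should lead
```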
Abstract:
The usage of social media in leisure time settings has become a prominent research topic. However, less research has been done on the design of social media in collaboration settings. In this study, we investigate how social media can support asynchronous collaboration in virtual teams and, specifically, how they can increase activity awareness. On the basis of an open source social networking platform, we present two prototype designs: a standard platform with basic support for information processing, communication, and process, as suggested by Zigurs and Buckland (1998), and an advanced platform with additional support for activity awareness via special feed functions. We argue that the standard platform already conveys activity awareness to a certain extent, but that this awareness can be increased further by the feeds in the advanced platform. Both prototypes are tested in a field experiment and evaluated with respect to their impact on perceived activity awareness, coordination, and satisfaction. We show that the advanced design increases coordination and satisfaction through increased perceived activity awareness.
Abstract:
The intention of an authentication and authorization infrastructure (AAI) is to simplify and unify access to different web resources. With a single login, a user can access web applications at multiple organizations. The Shibboleth authentication and authorization infrastructure is a standards-based, open source software package for web single sign-on (SSO) across or within organizational boundaries. It allows service providers to make fine-grained authorization decisions for individual access of protected online resources. The Shibboleth system is a widely used AAI, but only supports protection of browser-based web resources. We have implemented a Shibboleth AAI extension to protect web services using Simple Object Access Protocol (SOAP). Besides user authentication for browser-based web resources, this extension also provides user and machine authentication for web service-based resources. Although implemented for a Shibboleth AAI, the architecture can be easily adapted to other AAIs.
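In outline, the extension's job on the service side is to check the authentication assertion carried in the SOAP header before the body is processed. The sketch below assumes a made-up security namespace and header layout; it is not Shibboleth's actual wire format, and a real deployment would also verify the assertion's XML signature and validity period.

```python
import xml.etree.ElementTree as ET

NS = {"soap": "http://schemas.xmlsoap.org/soap/envelope/",
      "sec": "urn:example:security"}            # hypothetical security namespace

TRUSTED_ISSUERS = {"https://idp.example.org"}   # assumed trusted identity providers

def assertion_issuer(envelope_xml: str):
    """Extract the issuer of the authentication assertion from the SOAP
    header, where an AAI extension would place it."""
    root = ET.fromstring(envelope_xml)
    issuer = root.find("soap:Header/sec:Assertion/sec:Issuer", NS)
    return issuer.text if issuer is not None else None

def is_authorized(envelope_xml: str) -> bool:
    # Issuer checking alone is shown for brevity; signature and lifetime
    # checks are omitted.
    return assertion_issuer(envelope_xml) in TRUSTED_ISSUERS
```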
Abstract:
Using open source software can ease the IT budget if you go about it the right way. Far more important, however, are the strategic advantages that result from the consistent use of open source, such as digital sustainability and independence from vendors.
Abstract:
Competition is good for business? That's old hat. A new route to success has long been emerging. They do it at CERN, they do it at MIT, in the FabLab, at hackathons: open source is the foundation of many of the most innovative research projects and of many lucrative business models. And yet the concept is ancient. It is called collaboration.
Abstract:
In this paper, we describe agent-based content retrieval for opportunistic networks, where requesters can delegate content retrieval to agents, which retrieve the content on their behalf. The approach has been implemented in CCNx, the open source CCN framework, and evaluated on Android smartphones. Evaluations have shown that the overhead of agent delegation is only noticeable for very small content. For content larger than 4 MB, agent-based content retrieval can even result in a throughput increase of 20% compared to standard CCN download applications. At every probe interval, the requester asks whether any agents have retrieved the desired content. Evaluations have shown that a probe interval of 30 s delivers the best overall performance in our scenario, because the number of transmitted notification messages can be decreased by up to 80% without significantly increasing the download time.
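The probing scheme can be summarized in a few lines; `query_agents` and `fetch` below are hypothetical stand-ins for the corresponding CCNx operations, not its real API.

```python
import time

PROBE_INTERVAL_S = 30   # the 30 s interval that performed best in our scenario

def retrieve_via_agents(requester, content_name):
    """Periodically probe for agents that have already retrieved the
    delegated content, instead of having agents flood notifications."""
    while True:
        agents = requester.query_agents(content_name)  # one probe per interval
        if agents:
            return requester.fetch(content_name, from_agent=agents[0])
        time.sleep(PROBE_INTERVAL_S)
```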
Abstract:
Content Distribution Networks (CDNs) are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite the maturity of the area, new paradigms and architecture models are still being developed. Cloud Computing, on the other hand, is a more recent concept which has expanded extremely quickly, with new services being regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and the development of an open source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN-as-a-Service paradigm. We describe our experience with the integration of the CDNaaS framework in a cloud environment, as a service for enterprise users. We emphasize the flexibility and elasticity of such a model, with each CDN instance being delivered on demand and associated with personalized caching policies as well as an optimized choice of Points of Presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking (MCN) EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project and is tasked with delivering different types of multimedia content to several thousand geographically distributed users. It integrates seamlessly into the MCN service life-cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.
Abstract:
Is numerical mimicry a third way of establishing truth? Kevin Heng received his M.S. and Ph.D. in astrophysics from the Joint Institute for Laboratory Astrophysics (JILA) and the University of Colorado at Boulder. He joined the Institute for Advanced Study in Princeton from 2007 to 2010, first as a Member and later as the Frank & Peggy Taplin Member. From 2010 to 2012 he was a Zwicky Prize Fellow at ETH Zürich (the Swiss Federal Institute of Technology). In 2013, he joined the Center for Space and Habitability (CSH) at the University of Bern, Switzerland, as a tenure-track assistant professor, where he leads the Exoplanets and Exoclimes Group. He has worked on, and maintains, a broad range of interests in astrophysics: shocks, extrasolar asteroid belts, planet formation, fluid dynamics, brown dwarfs, and exoplanets. He coordinates the Exoclimes Simulation Platform (ESP), an open-source set of theoretical tools designed for studying the basic physics and chemistry of exoplanetary atmospheres and climates (www.exoclime.org). He is involved in the CHEOPS (Characterizing Exoplanet Satellite) space telescope, a mission approved by the European Space Agency (ESA) and led by Switzerland. He spends a fair amount of time humbly learning the lessons gleaned from studying the Earth and Solar System planets, as related to him by atmospheric, climate, and planetary scientists. He received a Sigma Xi Grant-in-Aid of Research in 2006.
Abstract:
Software developers often ask questions about software systems and software ecosystems that entail exploration and navigation, such as "who uses this component?" and "where is this feature implemented?". Software visualisation can be a great aid to understanding and exploring the answers to such questions, but visualisations require expertise to implement effectively, and they do not always scale well to large systems. We propose to automatically generate software visualisations based on software models derived from open source software corpora and from an analysis of the properties of typical developer queries and commonly used visualisations. The key challenges we see are (1) understanding how to match queries to suitable visualisations, and (2) scaling visualisations effectively to very large software systems and corpora. In the paper we motivate the idea of automatic software visualisation, we enumerate the challenges and our proposals to address them, and we describe some very initial results in our attempts to develop scalable visualisations of open source software corpora.