998 results for Software distributions


Relevance: 100.00%

Abstract:

A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To avoid the coarseness of high-level, top-down modeling efforts and to increase result accuracy, the model focuses on device details and data routes. To compare ESD with a relevant physical distribution alternative, physical model boundaries and variables were also defined. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included, to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used instead of physical distribution. The results also highlighted the importance of server efficiency and utilization methods.
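
A rough, back-of-the-envelope illustration of the bottom-up accounting style described above is sketched below: it sums data-center, network-transfer and client-side energy for a single download and converts the total to CO2e. Every coefficient here is an invented placeholder for illustration, not a value from the study.

```python
# Illustrative sketch of per-download ESD accounting. All coefficients
# are assumed placeholder values, NOT figures from the paper.
KG_CO2E_PER_KWH = 0.5  # assumed grid emission factor (kg CO2e per kWh)

def esd_footprint_kg(download_gb: float,
                     server_kwh_per_gb: float = 0.01,    # data-center share (assumed)
                     hops: int = 12,                      # network devices traversed (assumed)
                     kwh_per_gb_per_hop: float = 0.002,   # per-device transfer energy (assumed)
                     client_kw: float = 0.06,             # client PC draw while downloading (assumed)
                     download_hours: float = 0.25,
                     embedded_kg: float = 0.05) -> float: # apportioned embedded CO2e (assumed)
    """Sum the per-download energy components, then convert to CO2e."""
    energy_kwh = (download_gb * server_kwh_per_gb
                  + download_gb * hops * kwh_per_gb_per_hop
                  + client_kw * download_hours)
    return energy_kwh * KG_CO2E_PER_KWH + embedded_kg

print(f"{esd_footprint_kg(4.0):.3f} kg CO2e for a 4 GB download")
```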

Relevance: 70.00%

Abstract:

This paper presents work in progress on an on-demand software deployment system based on application virtualization concepts, which eliminates the need for software installation and configuration on each computer. Several mechanisms were created: mapping of the application's resource usage to improve software distribution and startup; a virtualization middleware which provides all the resources needed for software execution; an asynchronous P2P transport used to optimize distribution over the network; and off-line support, whereby the user can execute the application even when the server is unavailable or the machine is disconnected from the network. © Springer-Verlag Berlin Heidelberg 2010.
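
As a minimal sketch of the off-line support idea, assuming a hypothetical HTTP distribution server and a simple local cache directory (neither is specified in the paper), an application image could be fetched like this:

```python
# Hypothetical off-line fallback: prefer the server copy of an
# application image, but run from the local cache when unreachable.
import shutil
import urllib.request
from pathlib import Path

CACHE = Path.home() / ".appcache"  # assumed cache location

def fetch_app(name: str, server: str) -> Path:
    """Return a local path for `name`, refreshing the cache if online."""
    cached = CACHE / name
    try:
        with urllib.request.urlopen(f"{server}/{name}", timeout=5) as resp:
            CACHE.mkdir(exist_ok=True)
            with open(cached, "wb") as out:
                shutil.copyfileobj(resp, out)  # refresh cached copy
    except OSError:                            # server down or offline
        if not cached.exists():
            raise RuntimeError(f"{name} has never been cached; cannot run offline")
    return cached
```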

Relevance: 60.00%

Abstract:

We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in Ciao, ISO-Prolog, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. A fundamental advantage of using lpdoc is that it helps maintain a true correspondence between the program and its documentation, and also identify precisely to what version of the program a given printed manual corresponds. The quality of the documentation generated can be greatly enhanced by including within the program text assertions (declarations with types, modes, etc.) for the predicates in the program, and machine-readable comments. One of the main novelties of lpdoc is that these assertions and comments are written using the Ciao system assertion language, which is also the language of communication between the compiler and the user and between the components of the compiler. This allows a significant synergy among specification, debugging, documentation, optimization, etc. A simple compatibility library allows conventional (C)LP systems to ignore these assertions and comments and process programs documented in this way normally. The documentation can be generated interactively from emacs or from the command line, in many formats including texinfo, dvi, ps, pdf, info, ascii, html/css, Unix nroff/man, Windows help, etc., and can include bibliographic citations and images. lpdoc can also generate "man" pages (Unix man page format), nicely formatted plain ASCII "readme" files, installation scripts useful when the manuals are included in software distributions, brief descriptions in html/css or info formats suitable for inclusion in on-line indices of manuals, and even complete WWW and info sites containing on-line catalogs of documents and software distributions. The lpdoc manual, all other Ciao system manuals, and parts of this paper are generated by lpdoc.

Relevance: 60.00%

Abstract:

We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in ISO-Prolog, Ciao, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. A fundamental advantage of using lpdoc is that it helps maintain a true correspondence between the program and its documentation, and also identify precisely to what version of the program a given printed manual corresponds. The quality of the documentation generated can be greatly enhanced by including within the program text assertions (declarations with types, modes, etc.) for the predicates in the program, and machine-readable comments. One of the main novelties of lpdoc is that these assertions and comments are written using the Ciao system assertion language, which is also the language of communication between the compiler and the user and between the components of the compiler. This allows a significant synergy among specification, documentation, optimization, etc. A simple compatibility library allows conventional (C)LP systems to ignore these assertions and comments and process programs documented in this way normally. The documentation can be generated in many formats including texinfo, dvi, ps, pdf, info, html/css, Unix nroff/man, Windows help, etc., and can include bibliographic citations and images. lpdoc can also generate “man” pages (Unix man page format), nicely formatted plain ASCII “readme” files, installation scripts useful when the manuals are included in software distributions, brief descriptions in html/css or info formats suitable for inclusion in on-line indices of manuals, and even complete WWW and info sites containing on-line catalogs of documents and software distributions. The lpdoc manual, all other Ciao system manuals, and parts of this paper are generated by lpdoc.

Relevance: 60.00%

Abstract:

We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in ISO-Prolog, Ciao, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. The documentation can be generated in many formats including texinfo, dvi, ps, pdf, info, html/css, Unix nroff/man, Windows help, etc., and can include bibliographic citations and images. lpdoc can also generate "man" pages (Unix man page format), nicely formatted plain ASCII "readme" files, installation scripts useful when the manuals are included in software distributions, brief descriptions in html/css or info formats suitable for inclusion in on-line indices of manuals, and even complete WWW and info sites containing on-line catalogs of documents and software distributions. A fundamental advantage of using lpdoc is that it helps maintain a true correspondence between the program and its documentation, and also identify precisely to what version of the program a given printed manual corresponds. The quality of the documentation generated can be greatly enhanced by including within the program text assertions (declarations with types, modes, etc.) for the predicates in the program, and machine-readable comments. These assertions and comments are written using the Ciao system assertion language. A simple compatibility library allows conventional (C)LP systems to ignore these assertions and comments and process programs documented in this way normally. The lpdoc manual, all other Ciao system manuals, and most of this paper are generated by lpdoc.

Relevance: 60.00%

Abstract:

The aim of this report is to describe the use of WinBUGS for two datasets that arise from typical population pharmacokinetic studies. The first dataset relates to gentamicin concentration-time data that arose as part of the routine clinical care of 55 neonates. The second dataset incorporated data from 96 patients receiving enoxaparin. Both datasets were originally analyzed using NONMEM. In the first instance, although NONMEM provided reasonable estimates of the fixed-effects parameters, it was unable to provide satisfactory estimates of the between-subject variance. In the second instance, the use of NONMEM resulted in the development of a successful model, albeit with limited information available on the between-subject variability of the pharmacokinetic parameters. WinBUGS was used to develop a model for both of these datasets. Model comparison for the enoxaparin dataset was performed using the posterior distribution of the log-likelihood and a posterior predictive check. The use of WinBUGS supported the same structural models tried in NONMEM. For the gentamicin dataset a one-compartment model with intravenous infusion was developed, and the population parameters, including the full between-subject variance-covariance matrix, were available. Analysis of the enoxaparin dataset supported a two-compartment model as superior to the one-compartment model, based on the posterior predictive check. Again, the full between-subject variance-covariance matrix parameters were available. Fully Bayesian approaches using MCMC methods, via WinBUGS, can offer added value for the analysis of population pharmacokinetic data.
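
For concreteness, below is a minimal sketch of the structural model named for the gentamicin data: one-compartment kinetics with a constant-rate intravenous infusion. The parameter values are illustrative only; in the report the population parameters are estimated by MCMC in WinBUGS.

```python
# One-compartment model with intravenous infusion (illustrative values).
import math

def conc_1cpt_infusion(t: float, rate: float, t_inf: float,
                       cl: float, v: float) -> float:
    """Concentration at time t (h) for an infusion of `rate` (mg/h) over
    `t_inf` hours, with clearance `cl` (L/h) and volume `v` (L)."""
    ke = cl / v                                  # first-order elimination rate constant
    if t <= t_inf:                               # accumulation during the infusion
        return (rate / cl) * (1 - math.exp(-ke * t))
    c_end = (rate / cl) * (1 - math.exp(-ke * t_inf))
    return c_end * math.exp(-ke * (t - t_inf))   # mono-exponential decay afterwards

print(conc_1cpt_infusion(t=6.0, rate=10.0, t_inf=0.5, cl=0.05, v=1.5))
```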

Relevance: 30.00%

Abstract:

The literature contains many examples of digital procedures for the analytical treatment of electroencephalograms, but there is as yet no standard by which those techniques may be judged or compared. This paper proposes one method of generating an EEG, based on a computer program for Zetterberg's simulation. It is assumed that the statistical properties of an EEG may be represented by stationary processes having rational transfer functions, achieved by a system of software filters and random number generators. The model represents neither the neurological mechanism responsible for generating the EEG, nor any particular type of EEG record; transient phenomena such as spikes, sharp waves and alpha bursts are also excluded. The basis of the program is a valid ‘partial’ statistical description of the EEG; that description is then used to produce a digital representation of a signal which, if plotted sequentially, might or might not by chance resemble an EEG; that is unimportant. What is important is that the statistical properties of the series remain those of a real EEG; it is in this sense that the output is a simulation of the EEG. There is considerable flexibility in the form of the output, i.e. its alpha, beta and delta content, which may be selected by the user, the same selected parameters always producing the same statistical output. The filtered outputs from the random number sequences may be scaled to provide realistic power distributions in the accepted EEG frequency bands and then summed to create a digital output signal, the ‘stationary EEG’. It is suggested that the simulator might act as a test input to digital analytical techniques for the EEG, a simulator which would enable at least a substantial part of those techniques to be compared and assessed in an objective manner. The equations necessary to implement the model are given. The program has been run on a DEC1090 computer but is suitable for any microcomputer having more than 32 kBytes of memory; the execution time required to generate a 25 s simulated EEG is in the region of 15 s.
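
The core mechanism (filtered random numbers whose band powers are user-selected, then summed) can be sketched briefly. The sketch below substitutes off-the-shelf Butterworth band-pass filters and arbitrary band gains for Zetterberg's rational transfer functions, so it illustrates the structure of the simulator rather than reproducing the paper's equations.

```python
# Filtered-noise 'stationary EEG': band-limit white noise per EEG band,
# scale each band's power, and sum. Gains and filter orders are arbitrary.
import numpy as np
from scipy.signal import butter, lfilter

FS = 128  # sampling rate (Hz)
BANDS = {"delta": (0.5, 4.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}
GAINS = {"delta": 1.0, "alpha": 2.0, "beta": 0.5}  # user-selected band powers

def simulate_eeg(seconds: float, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)  # same seed -> same statistical output
    noise = rng.standard_normal(int(seconds * FS))
    out = np.zeros_like(noise)
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        out += GAINS[name] * lfilter(b, a, noise)
    return out

eeg = simulate_eeg(25)  # a 25 s simulated record, as in the paper's timing test
```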

Relevance: 30.00%

Abstract:

The primary objective of this study was to predict the distribution of mesophotic hard corals in the Au‘au Channel in the Main Hawaiian Islands (MHI). Mesophotic hard corals are light-dependent corals adapted to the low light conditions at approximately 30 to 150 m in depth. Several physical factors potentially influence their spatial distribution, including aragonite saturation, alkalinity, pH, currents, water temperature, hard substrate availability and the availability of light at depth. Mesophotic corals and mesophotic coral ecosystems (MCEs) have increasingly been the subject of scientific study because they are being threatened by a growing number of anthropogenic stressors. They are the focus of this spatial modeling effort because the Hawaiian Islands Humpback Whale National Marine Sanctuary (HIHWNMS) is exploring the expansion of its scope beyond the protection of the North Pacific Humpback Whale (Megaptera novaeangliae) to include the conservation and management of these ecosystem components. The present study helps to address this need by examining the distribution of mesophotic corals in the Au‘au Channel region. This area is located between the islands of Maui, Lanai, Molokai and Kahoolawe, and includes parts of the Kealaikahiki, Alalākeiki and Kalohi Channels. It is unique, not only in terms of its geology, but also in terms of its physical oceanography and local weather patterns. Several physical conditions make it an ideal place for mesophotic hard corals: it has consistently good water quality and clarity because it is flushed by tidal currents semi-diurnally; it has low amounts of rainfall and sediment run-off from the nearby land; and it is largely protected from seasonally strong wind and wave energy. Combined, these oceanographic and weather conditions create patches of comparatively warm, calm, clear waters that remain relatively stable through time. Freely available Maximum Entropy modeling software (MaxEnt 3.3.3e) was used to create four separate maps of predicted habitat suitability for: (1) all mesophotic hard corals combined, (2) Leptoseris, (3) Montipora and (4) Porites genera. MaxEnt works by analyzing the distribution of environmental variables where species are present, so that it can find other areas that meet all of the same environmental constraints. Several steps (Figure 0.1) were required to produce and validate four ensemble predictive models (i.e., models with 10 replicates each). Approximately 2,000 georeferenced records containing information about mesophotic coral occurrence and 34 environmental predictors describing the seafloor's depth, vertical structure, available light, surface temperature, currents and distance from shoreline at three spatial scales were used to train MaxEnt. Fifty percent of the 1,989 records were randomly chosen and set aside to assess each model replicate's performance using Receiver Operating Characteristic (ROC) Area Under the Curve (AUC) values. An additional 1,646 records were also randomly chosen and set aside to independently assess the predictive accuracy of the four ensemble models. Suitability thresholds for these models (denoting where corals were predicted to be present/absent) were chosen by finding where the maximum number of correctly predicted presence and absence records intersected on each ROC curve. Permutation importance and jackknife analysis were used to quantify the contribution of each environmental variable to the four ensemble models.
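
The threshold rule described above (the point on each ROC curve where correctly predicted presence and absence records are jointly maximised) corresponds to maximising Youden's J statistic. A sketch on random stand-in data follows; the labels and suitability scores are placeholders, not the coral records or MaxEnt outputs.

```python
# Choosing a presence/absence threshold by maximising Youden's J
# (tpr - fpr) on the ROC curve. Data below are random stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)                        # presence/absence labels
scores = np.clip(y_true * 0.3 + rng.random(1000), 0, 1)  # fake suitability scores

fpr, tpr, thresholds = roc_curve(y_true, scores)
best = np.argmax(tpr - fpr)  # Youden's J: most correct presences + absences
print(f"AUC = {roc_auc_score(y_true, scores):.2f}, "
      f"threshold = {thresholds[best]:.2f}")
```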

Relevance: 30.00%

Abstract:

This thesis investigates the optimisation of Coarse-Fine (CF) spectrum sensing architectures under a distribution of SNRs for Dynamic Spectrum Access (DSA). Three different detector architectures are investigated: the Coarse-Sorting Fine Detector (CSFD), the Coarse-Deciding Fine Detector (CDFD) and the Hybrid Coarse-Fine Detector (HCFD). To date, the majority of the work on coarse-fine spectrum sensing for cognitive radio has focused on a single value for the SNR. This approach overlooks the key advantage that CF sensing has to offer, namely that high powered signals can be easily detected without extra signal processing. By considering a range of SNR values, the detector can be optimised more effectively and greater performance gains realised. This work considers the optimisation of CF spectrum sensing schemes where the security and performance are treated separately. Instead of optimising system performance at a single, constant, low SNR value, the system instead is optimised for the average operating conditions. The security is still provided such that at the low SNR values the safety specifications are met. By decoupling the security and performance, the system’s average performance increases whilst maintaining the protection of licensed users from harmful interference. The different architectures considered in this thesis are investigated in theory, simulation and physical implementation to provide a complete overview of the performance of each system. This thesis provides a method for estimating SNR distributions which is quick, accurate and relatively low cost. The CSFD is modelled and the characteristic equations are found for the CDFD scheme. The HCFD is introduced and optimisation schemes for all three architectures are proposed. Finally, using the Implementing Radio In Software (IRIS) test-bed to confirm simulation results, CF spectrum sensing is shown to be significantly quicker than naive methods, whilst still meeting the required interference probability rates and not requiring substantial receiver complexity increases.
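
To illustrate the central point, optimising over a distribution of SNRs rather than a single low value, the sketch below Monte-Carlos the detection probability of a plain energy detector while the SNR is drawn from an assumed normal spread in dB. The window length, threshold and spread are illustrative, and none of the CSFD/CDFD/HCFD specifics are reproduced here.

```python
# Average detection probability of an energy detector over an SNR spread.
import numpy as np

rng = np.random.default_rng(2)
N = 256            # samples per sensing window (assumed)
THRESH = 1.3 * N   # detection threshold on collected energy (assumed)

def detect_once(snr_linear: float) -> bool:
    noise = rng.standard_normal(N)                        # unit-power noise
    signal = np.sqrt(snr_linear) * rng.standard_normal(N)
    return np.sum((signal + noise) ** 2) > THRESH

snr_db = rng.normal(loc=0.0, scale=4.0, size=5000)        # assumed SNR distribution
pd = np.mean([detect_once(10 ** (s / 10)) for s in snr_db])
print(f"average detection probability over the SNR spread: {pd:.2f}")
```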

Relevance: 30.00%

Abstract:

Coxian phase-type distributions are becoming a popular means of representing survival times within a health care environment. They are favoured because they represent a distribution as a system of phases and allow an easy visual representation of the rate of flow of patients through a system. Difficulties arise, however, in determining the parameter estimates of the Coxian phase-type distribution. This paper examines ways of making the fitting of the Coxian phase-type distribution less cumbersome by outlining the different software packages and algorithms available to perform the fit and assessing their capabilities through a number of performance measures. The performance measures rate each of the methods and help identify the most efficient. Conclusions drawn from these performance measures suggest SAS to be the most robust package: it has a high rate of convergence in each of the four example model fits considered, short computational times, detailed output, convergence criteria options, and the ability to switch easily between different algorithms.
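
Whichever package performs the fit, the object being fitted is the same: a Coxian phase-type density f(t) = alpha · exp(St) · s0, where S is the bidiagonal sub-generator of the sequential phases. The sketch below simply evaluates that density for an arbitrary three-phase example; the rates are made up, and estimating them from data is the difficult step on which the paper compares packages.

```python
# Evaluate a 3-phase Coxian phase-type density f(t) = alpha @ expm(S t) @ s0.
import numpy as np
from scipy.linalg import expm

lam = np.array([1.2, 0.8])      # phase 1->2 and 2->3 progression rates (made up)
mu = np.array([0.3, 0.5, 0.9])  # absorption (exit) rate from each phase (made up)

S = np.array([[-(lam[0] + mu[0]), lam[0],             0.0],
              [0.0,               -(lam[1] + mu[1]),  lam[1]],
              [0.0,               0.0,                -mu[2]]])
alpha = np.array([1.0, 0.0, 0.0])  # the process always starts in phase 1
s0 = -S @ np.ones(3)               # exit-rate vector into absorption

def density(t: float) -> float:
    return float(alpha @ expm(S * t) @ s0)

print(density(2.0))  # survival-time density at t = 2
```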

Relevance: 30.00%

Abstract:

Software metrics offer us the promise of distilling useful information from vast amounts of software in order to track development progress, to gain insights into the nature of the software, and to identify potential problems. Unfortunately, however, many software metrics exhibit highly skewed, non-Gaussian distributions. As a consequence, usual ways of interpreting these metrics (for example, in terms of "average" values) can be highly misleading. Many metrics, it turns out, are distributed like wealth, with high concentrations of values in selected locations. We propose to analyze software metrics using the Gini coefficient, a higher-order statistic widely used in economics to study the distribution of wealth. Our approach allows us not only to observe changes in software systems efficiently, but also to assess project risks and monitor the development process itself. We apply the Gini coefficient to numerous metrics over a range of software projects, and we show that many metrics not only display remarkably high Gini values, but that these values are remarkably consistent as a project evolves over time.
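
The Gini coefficient itself is inexpensive to compute: 0 means the metric is spread evenly across entities, while values near 1 mean a few entities concentrate most of it. A standard sorted-weights implementation, applied to made-up, wealth-like data:

```python
# Gini coefficient via the sorted-index formula
# G = sum((2i - n - 1) * x_i) / (n * sum(x)) for ascending x, i = 1..n.
import numpy as np

def gini(values) -> float:
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    return float((2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum()))

methods_per_class = [1, 1, 2, 3, 5, 8, 40, 90]  # skewed, wealth-like metric
print(f"Gini = {gini(methods_per_class):.2f}")
```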

Relevance: 30.00%

Abstract:

BACKGROUND: The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART with different monitoring strategies, focusing on POC-VL monitoring.

METHODS: We used a mathematical model to simulate cohorts of patients from the start of ART until death. We modeled 13 strategies (no 2nd-line, clinical, CD4 (with or without targeted VL), POC-VL, and laboratory-based VL monitoring, with different frequencies). We included a scenario with identical failure rates across strategies, and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs), and calculated incremental cost-effectiveness ratios (ICERs). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs.

RESULTS: Introducing 2nd-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-US$5488/DALY averted and that of VL monitoring US$951-US$5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except at the highest measurement frequency (every 6 months), where laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between 1st- and 2nd-line costs remained large, and if we assumed that routine VL monitoring does not prevent failure.

CONCLUSION: Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring essentially depends on the cost of 2nd-line ART. Our Excel tool is useful for determining optimal monitoring strategies for specific settings, with specific sex and age distributions and unit costs.
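
For readers unfamiliar with the ICER figures quoted above: an ICER is the incremental lifetime cost divided by the incremental DALYs averted, relative to the comparator strategy. A toy calculation with made-up numbers (not the model's outputs):

```python
# Incremental cost-effectiveness ratio (ICER), illustrative figures only.
def icer(cost_new: float, cost_old: float,
         dalys_new: float, dalys_old: float) -> float:
    """Extra cost per DALY averted by the new strategy."""
    return (cost_new - cost_old) / (dalys_old - dalys_new)

# e.g. adding 2nd-line ART: $9,000 extra lifetime cost, 5 DALYs averted
print(f"US${icer(24_000, 15_000, 20, 25):,.0f} per DALY averted")
```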

Relevance: 30.00%

Abstract:

The XSophe computer simulation software suite, consisting of a daemon, the XSophe interface and the computational program Sophe, is a state-of-the-art package for the simulation of electron paramagnetic resonance spectra. The Sophe program performs the computer simulation and includes a number of new technologies including: the SOPHE partition and interpolation schemes, a field segmentation algorithm, homotopy, parallelisation and spectral optimisation. The SOPHE partition and interpolation scheme, along with a field segmentation algorithm, greatly increases the speed of simulations for most systems. Multidimensional homotopy provides an efficient method for accurately tracing energy levels, and hence transitions, in the presence of energy level anticrossings and looping transitions, and allows computer simulations in frequency space. Recent enhancements to Sophe include the generalised treatment of distributions of orientational parameters, termed the mosaic misorientation linewidth model, and a faster, more efficient algorithm for the calculation of resonant field positions and transition probabilities. For complex systems the parallelisation enables the simulation of these systems on a parallel computer, and the optimisation algorithms in the suite provide the experimentalist with the possibility of finding the spin Hamiltonian parameters in a systematic manner rather than by a trial-and-error process. The XSophe software suite has been used to simulate multifrequency EPR spectra (200 MHz to 600 GHz) from isolated spin systems (S ≥ 1/2) and coupled centres (Si, Sj ≥ 1/2). Griffin, M.; Muys, A.; Noble, C.; Wang, D.; Eldershaw, C.; Gates, K.E.; Burrage, K.; Hanson, G.R. "XSophe, a Computer Simulation Software Suite for the Analysis of Electron Paramagnetic Resonance Spectra", 1999, Mol. Phys. Rep., 26, 60-84.

Relevance: 30.00%

Abstract:

In this paper we study some of the characteristics of the color semantics of art painting images. We analyze the color features of different artists and art movements. The analysis includes exploration of hue, saturation and luminance. We also use quartile analysis to obtain the distribution of the dispersion of defined groups of paintings and to measure the degree of purity for these groups. A special software system, "Art Painting Image Color Semantics" (APICSS), was created for image analysis and retrieval. The obtained result can be used for automatic classification of art paintings in image retrieval systems where the indexing is based on color characteristics.
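
A minimal sketch of the hue/saturation/luminance quartile step is given below, using Pillow's HSV conversion as a stand-in; the file name is a placeholder, and APICSS's actual feature pipeline is not specified in the abstract.

```python
# Quartiles of hue, saturation and value (a luminance proxy) for one image.
import numpy as np
from PIL import Image

img = Image.open("painting.jpg").convert("HSV")  # placeholder file name
h, s, v = (np.asarray(img)[..., i].ravel() / 255.0 for i in range(3))

for name, channel in (("hue", h), ("saturation", s), ("luminance", v)):
    q1, q2, q3 = np.percentile(channel, [25, 50, 75])
    print(f"{name:10s} Q1={q1:.2f} median={q2:.2f} Q3={q3:.2f}")
```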

Relevance: 30.00%

Abstract:

This research proposes a reflection on tutorial videos from YouTube, seen as a form of gift in modern society. Our reflection starts from a perspective of mutual exchange, one which avoids the patterns of trade with purely economic purposes. We present these video producers as craftsmen of cyberculture, owing to the skill and competence with which they transmit their knowledge. The research consists of the observation of YouTube video tutorials about the Linux operating system and its distributions, analyzing the interactions between video producers, users and the website. The analysis is based on the classic work of Mauss (2003) and its reinterpretations by Caille (1998, 2001, 2002, 2006) and Godbout (1992, 1998), assisted by Aime and Cossetta (2010) and Sennett (2009) to help understand the idea of the craftsman. The Internet, as an open and expanding territory, enables us to understand that relationships in this medium also constitute the reciprocal bonds pointed out by Mauss in the early twentieth century. The circulation of intangible goods, in this case knowledge, beyond establishing social bonds, fosters a collaborative environment that produces the common in cyberspace.