15 results for gzip


Relevance:

20.00%

Publisher:

Abstract:

Although the gzip lossless compression algorithm achieves a very good compression ratio, its analysis and encoding stages require a large amount of computation. To shorten the compression time, a parallel compression strategy based on shared memory is proposed, and a parallel version of gzip is implemented using the OpenMP standard and a producer/consumer model. Tests on an SMP node (dual CPU) of a Beowulf cluster and on a Dawning Tiankuo server (4-way dual-core) show that the parallelized gzip program achieves a substantial performance improvement, especially for the compression of large files.
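
The abstract describes an OpenMP producer/consumer parallelization of gzip. As a rough Python analogue of the same idea (a minimal sketch, not the authors' code; the chunk size and worker count are arbitrary choices), the reader acts as the producer of fixed-size blocks and a thread pool consumes them, compressing each block into an independent gzip member; concatenated members still form a valid gzip stream. In CPython the underlying zlib routines release the GIL during compression, so the threads can overlap work.

```python
import gzip
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4 * 1024 * 1024  # illustrative 4 MiB blocks


def compress_chunk(chunk: bytes) -> bytes:
    # Each block becomes an independent gzip member; gunzip decodes
    # concatenated members as a single file.
    return gzip.compress(chunk)


def parallel_gzip(src_path: str, dst_path: str, workers: int = 4) -> None:
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst, \
         ThreadPoolExecutor(max_workers=workers) as pool:

        def produce():
            # Producer: read fixed-size blocks from the input file.
            while True:
                block = src.read(CHUNK_SIZE)
                if not block:
                    return
                yield block

        # Consumers: the pool compresses blocks concurrently; map() preserves
        # order, so writing the results back reproduces the original stream.
        # (map() submits all blocks up front; a production version would
        # bound the number of in-flight blocks.)
        for member in pool.map(compress_chunk, produce()):
            dst.write(member)
```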

Relevance:

10.00%

Publisher:

Abstract:

Nucleic acids are a useful substrate for engineering at the molecular level. Designing the detailed energetics and kinetics of interactions between nucleic acid strands remains a challenge. Building on previous algorithms to characterize the ensemble of dilute solutions of nucleic acids, we present a design algorithm that allows optimization of structural features and binding energetics of a test tube of interacting nucleic acid strands. We extend this formulation to handle multiple thermodynamic states and combinatorial constraints to allow optimization of pathways of interacting nucleic acids. In both design strategies, low-cost estimates to thermodynamic properties are calculated using hierarchical ensemble decomposition and test tube ensemble focusing. These algorithms are tested on randomized test sets and on example pathways drawn from the molecular programming literature. To analyze the kinetic properties of designed sequences, we describe algorithms to identify dominant species and kinetic rates using coarse-graining at the scale of a small box containing several strands or a large box containing a dilute solution of strands.
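
For a flavour of sequence design by defect minimization, the sketch below is a deliberately simplified stand-in (not the authors' algorithm): it mutates a single RNA strand at random and keeps a mutation when it does not worsen a toy defect that merely counts non-complementary bases at paired positions of a target dot-bracket structure. The real method optimizes a test-tube ensemble defect computed from partition functions over many interacting strands.

```python
import random

BASES = "ACGU"
COMPLEMENT = {"A": "U", "U": "A", "C": "G", "G": "C"}


def toy_defect(sequence: str, target_structure: str) -> float:
    # Placeholder objective: penalize paired positions in the dot-bracket
    # target whose bases are not Watson-Crick complements.  The paper's
    # objective is a thermodynamic ensemble defect, not this count.
    defect, stack = 0.0, []
    for i, ch in enumerate(target_structure):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            j = stack.pop()
            if COMPLEMENT[sequence[j]] != sequence[i]:
                defect += 1.0
    return defect


def design(target_structure: str, steps: int = 10000, seed: int = 0) -> str:
    rng = random.Random(seed)
    seq = [rng.choice(BASES) for _ in target_structure]
    best = toy_defect("".join(seq), target_structure)
    for _ in range(steps):
        i = rng.randrange(len(seq))
        old = seq[i]
        seq[i] = rng.choice(BASES)
        cand = toy_defect("".join(seq), target_structure)
        if cand <= best:
            best = cand        # keep the mutation
        else:
            seq[i] = old       # revert
        if best == 0.0:
            break
    return "".join(seq)


print(design("(((...)))"))
```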

Relevance:

10.00%

Publisher:

Abstract:

A FORTRAN 90 program is presented which calculates the total cross sections, and the electron energy spectra of the singly and doubly differential cross sections, for single ionization of neutral target atoms ranging from hydrogen up to and including argon. The code is applicable to both high- and low-Z projectile impact in fast ion-atom collisions. The theoretical models provided for the program user are based on two quantum mechanical approximations which have proved very successful in the study of ionization in ion-atom collisions: the continuum-distorted-wave (CDW) and continuum-distorted-wave eikonal-initial-state (CDW-EIS) approximations. The code presented here extends previously published codes for single ionization of target hydrogen [Crothers and McCartney, Comput. Phys. Commun. 72 (1992) 288], target helium [Nesbitt, O'Rourke and Crothers, Comput. Phys. Commun. 114 (1998) 385] and target atoms ranging from lithium to neon [O'Rourke, McSherry and Crothers, Comput. Phys. Commun. 131 (2000) 129]. Cross sections for all of these target atoms may be obtained as limiting cases from the present code.
Title of program: ARGON
Catalogue identifier: ADSE
Program summary URL: http://cpc.cs.qub.ac.uk/cpc/summaries/ADSE
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computers for which the program is designed and others on which it is operable: four-way 200 MHz Pentium Pro Linux server, DEC Alpha 21164, four-way 400 MHz Pentium 2 Xeon 450 Linux server, IBM SP2 and SUN Enterprise 3500
Installations: Queen's University, Belfast
Operating systems under which the program has been tested: Red Hat Linux 5.2, Digital UNIX Version 4.0d, AIX, Solaris SunOS 5.7
Compilers: PGI workstations, DEC CAMPUS
Programming language used: FORTRAN 90 with MPI directives
No. of bits in a word: 64, except on Linux servers 32
Number of processors used: any number
Has the code been vectorized or parallelized? Parallelized using MPI
No. of bytes in distributed program, including test data, etc.: 32 189
Distribution format: tar gzip file
Keywords: single ionization, cross sections, continuum-distorted-wave model, continuum-distorted-wave eikonal-initial-state model, target atoms, wave treatment
Nature of physical problem: The code calculates total and differential cross sections for single ionization of target atoms ranging from hydrogen up to and including argon by both light and heavy ion impact.
Method of solution: ARGON allows the user to calculate the cross sections using either the CDW or CDW-EIS [J. Phys. B 16 (1983) 3229] models within the wave treatment.
Restrictions on the complexity of the program: Both the CDW and CDW-EIS models are two-state perturbative approximations.
Typical running time: Times vary according to input data and number of processors. For one processor, the test input data for double differential cross sections (40 points) took less than one second, whereas the test input for total cross sections (20 points) took 32 minutes.
Unusual features of the program: none
(C) 2003 Elsevier B.V. All rights reserved.
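
For orientation, the quantities ARGON tabulates are related by a simple reduction: the total cross section is the integral of the singly differential cross section over the ejected-electron energy, and each energy point is independent of the others, which is what makes the MPI parallelization straightforward. The sketch below illustrates only that reduction, with an invented placeholder for dσ/dE and an arbitrary energy grid; it is not extracted from ARGON.

```python
import numpy as np


def sdcs(energy_eV: np.ndarray) -> np.ndarray:
    # Placeholder singly differential cross section dσ/dE (arbitrary units);
    # in ARGON this quantity comes from CDW or CDW-EIS transition amplitudes.
    return np.exp(-energy_eV / 50.0) / (1.0 + energy_eV)


def total_cross_section(e_min: float = 0.0, e_max: float = 500.0,
                        n_points: int = 200) -> float:
    # Total cross section = integral of dσ/dE over ejected-electron energy.
    # Each quadrature point is independent, so the points can be distributed
    # over processes, as in the MPI-parallelized code described above.
    energies = np.linspace(e_min, e_max, n_points)
    values = sdcs(energies)
    # Composite trapezoidal rule.
    return float(np.sum(0.5 * (values[1:] + values[:-1]) * np.diff(energies)))


print(f"total cross section ≈ {total_cross_section():.4f} (arbitrary units)")
```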

Relevance:

10.00%

Publisher:

Abstract:

Data compression is the computing technique that aims to reduce the size of information in order to minimize the storage space required and to speed up data transmission over networks with limited bandwidth. Several compression techniques, such as LZ77 and its variants, suffer from a problem that we call the redundancy caused by the multiplicity of encodings. The multiplicity of encodings (ME) means that the source data can be encoded in different ways. In its simplest case, ME occurs when a compression technique has the possibility, during the encoding process, of coding a symbol in different ways. The bit-recycling compression technique was introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied to LZ77, and the experimental results lead to better compression (a reduction of about 9% in the size of files compressed by Gzip, obtained by exploiting ME). Dubé and Beaudoin pointed out that their technique may not perfectly minimize the redundancy caused by ME, because it is built on Huffman coding, which cannot handle codewords of fractional length; it can only generate codewords of integral length. Moreover, Huffman-based bit recycling (HuBR) imposes additional constraints to avoid certain situations that degrade its performance. Unlike Huffman codes, arithmetic coding (AC) can handle codewords of fractional length. In addition, over recent decades arithmetic codes have attracted many researchers because they are more powerful and more flexible than Huffman codes. Consequently, this work aims to adapt bit recycling to arithmetic codes in order to improve coding efficiency and flexibility. We addressed this problem through four (published) contributions, which are presented in this thesis and can be summarized as follows. First, we propose a new technique for adapting Huffman-based bit recycling (HuBR) to arithmetic coding, named arithmetic-coding-based bit recycling (ACBR); it describes the framework and the principles of the adaptation of HuBR to ACBR. We also present the theoretical analysis needed to estimate the redundancy that can be removed by HuBR and ACBR in applications that suffer from ME. This analysis shows that ACBR achieves perfect recycling in all cases, whereas HuBR achieves such performance only in very specific cases. Second, the problem with the aforementioned ACBR technique is that it requires arbitrary-precision arithmetic, which demands unlimited (or infinite) resources. To make the technique usable, we propose a new finite-precision version, which is efficient, applicable on computers with conventional fixed-size registers, and easily interfaced with applications that suffer from ME. Third, we propose the use of HuBR and ACBR as a means of reducing redundancy in order to obtain a variable-to-fixed binary code. We proved theoretically and experimentally that both techniques yield a significant improvement (less redundancy). In this respect, ACBR outperforms HuBR and provides a broader class of binary sources that can benefit from a plurally parsable dictionary. Furthermore, we show that ACBR is more flexible than HuBR in practice. Fourth, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm. To compare the performance of HuBR and ACBR, the corresponding theoretical results for both techniques are presented. The results show that the two techniques achieve nearly the same redundancy reduction on the balanced codes generated by Knuth's algorithm.
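
To make the fractional-codeword-length argument concrete, the small example below (the probabilities, the Huffman code and the number of alternative encodings are invented for illustration, not taken from the thesis) compares the expected Huffman code length with the entropy, and shows the log2(k) bits that bit recycling can reclaim when the encoder may choose among k interchangeable encodings.

```python
import math


def ideal_length(p: float) -> float:
    # An arithmetic coder can spend -log2(p) bits on a symbol of
    # probability p, including fractional amounts.
    return -math.log2(p)


# Toy source: one symbol dominates, so Huffman's integral lengths hurt.
probs = {"a": 0.9, "b": 0.05, "c": 0.05}
huffman_lengths = {"a": 1, "b": 2, "c": 2}   # the optimal Huffman code here

entropy = sum(p * ideal_length(p) for p in probs.values())
huffman = sum(probs[s] * huffman_lengths[s] for s in probs)

print(f"entropy            : {entropy:.3f} bits/symbol")
print(f"Huffman expectation: {huffman:.3f} bits/symbol")
print(f"redundancy         : {huffman - entropy:.3f} bits/symbol")

# Bit recycling: if the same data admits k interchangeable encodings, the
# decoder's knowledge of that choice is worth log2(k) bits, which a
# fractional-length coder can recover exactly even when k is not a power of 2.
k = 3
print(f"recyclable per choice among {k} encodings: {math.log2(k):.3f} bits")
```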

Relevance:

10.00%

Publisher:

Abstract:

Security and Privacy Online - the resource and flyer created for INFO2009, which provide an interactive web presentation to make the general public aware of the dangers of using the internet unsafely and of how they can protect themselves.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: The aim of this study was to develop a child-specific classification system for long bone fractures and to examine its reliability and validity on the basis of a prospective multicentre study. METHODS: Using the sequentially developed classification system, three samples of between 30 and 185 paediatric limb fractures from a pool of 2308 fractures documented in two multicentre studies were analysed in a blinded fashion by eight orthopaedic surgeons, on a total of five occasions. Intra- and interobserver reliability and accuracy were calculated. RESULTS: The reliability improved with successive simplification of the classification. The final version resulted in an overall interobserver agreement of κ = 0.71, with no significant difference between experienced and less experienced raters. CONCLUSIONS: The evaluation of the newly proposed classification system showed it to be reliable and routinely applicable, and training in its proper use may further improve its reliability. It can be recommended as a useful tool for clinical practice and offers the option of developing treatment recommendations and outcome predictions in the future.
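
For reference, the agreement statistic reported above is a kappa coefficient. The sketch below computes Cohen's kappa for the basic two-rater case on invented classifications; the study itself used eight raters, for which a multi-rater generalization such as Fleiss' kappa is the appropriate analogue.

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa for two raters classifying the same cases:
    # kappa = (p_observed - p_expected) / (1 - p_expected).
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1 - expected)


# Hypothetical fracture classifications from two raters (not study data).
a = ["A", "A", "B", "C", "B", "A", "C", "C", "B", "A"]
b = ["A", "B", "B", "C", "B", "A", "C", "A", "B", "A"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```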

Relevance:

10.00%

Publisher:

Abstract:

Systematic reviews and meta-analyses of randomized trials that include patient-reported outcomes (PROs) often provide crucial information for patients, clinicians and policy-makers facing challenging health care decisions. Based on emerging methods, guidance on improving the interpretability of meta-analyses of patient-reported outcomes, which are typically continuous in nature, is likely to enhance decision-making. The objective of this paper is to summarize approaches to enhancing the interpretability of pooled estimates of PROs in meta-analyses. When differences in PROs between groups are statistically significant, decision-makers must be able to interpret the magnitude of effect. This is challenging when, as is often the case, clinical trial investigators use different measurement instruments for the same construct within and between individual randomized trials. For such cases, in addition to pooling results as a standardized mean difference, we recommend that systematic review authors use other methods to present results, such as relative (relative risk, odds ratio) or absolute (risk difference) dichotomized treatment effects, complemented by presentation in either: natural units (e.g. overall depression reduced by 2.4 points when measured on a 50-point Hamilton Rating Scale for Depression); minimal important difference units (e.g. where 1.0 unit represents the smallest difference in depression that patients, on average, perceive as important, the depression score was 0.38 (95% CI 0.30 to 0.47) units lower than in the control group); or a ratio of means (e.g. where the mean in the treatment group is divided by the mean in the control group, the ratio of means is 1.27, representing a 27% relative reduction in the mean depression score).
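
The sketch below re-expresses one hypothetical trial contrast (all numbers, including the assumed minimal important difference, are invented) in the forms discussed above: a standardized mean difference, a difference in natural units, a difference in MID units, and a ratio of means.

```python
import math

# Hypothetical summary data for one treatment/control comparison
# on a depression score (lower is better); not from the paper.
mean_t, sd_t, n_t = 10.2, 6.0, 120   # treatment arm
mean_c, sd_c, n_c = 12.6, 6.4, 118   # control arm
mid = 6.3                            # assumed minimal important difference

# Pooled SD and standardized mean difference (Cohen's d form).
sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                      / (n_t + n_c - 2))
smd = (mean_t - mean_c) / sd_pooled

# The same contrast re-expressed the three recommended ways.
natural_units = mean_t - mean_c      # points on the original instrument
mid_units = natural_units / mid      # difference in MID units
ratio_of_means = mean_t / mean_c     # treatment mean relative to control mean

print(f"SMD            : {smd:.2f}")
print(f"natural units  : {natural_units:.1f} points")
print(f"MID units      : {mid_units:.2f}")
print(f"ratio of means : {ratio_of_means:.2f} "
      f"({(ratio_of_means - 1) * 100:+.0f}% change in the mean vs control)")
```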

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Robot-assisted therapy offers a promising approach to neurorehabilitation, particularly for severely to moderately impaired stroke patients. The objective of this study was to investigate the effects of intensive arm training on motor performance in four chronic stroke patients using the robot ARMin II. METHODS: ARMin II is an exoskeleton robot with six degrees of freedom (DOF) moving the shoulder, elbow and wrist joints. Four volunteers with chronic (≥ 12 months post-stroke) left-side hemiparesis and different levels of motor severity were enrolled in the study. They received robot-assisted therapy over a period of eight weeks, with three to four therapy sessions per week, each lasting one hour. Patients 1 and 4 had four one-hour training sessions per week, and patients 2 and 3 had three one-hour training sessions per week. The primary outcome variable was the upper-extremity Fugl-Meyer Assessment (FMA) score; secondary outcomes were the Wolf Motor Function Test (WMFT), the Catherine Bergego Scale (CBS), maximal voluntary torques (MVTs) and a questionnaire about ADL tasks, progress, changes, motivation, etc. RESULTS: Three out of four patients showed significant improvements (p < 0.05) in the main outcome. The improvements in the FMA scores were aligned with the objective results of the MVTs. Most improvements were maintained or even increased from discharge to the six-month follow-up. CONCLUSION: The data clearly indicate that intensive arm therapy with the robot ARMin II can significantly improve motor function of the paretic arm in some stroke patients, even those in a chronic state. The findings of the study provide a basis for a subsequent randomized controlled clinical trial.

Relevance:

10.00%

Publisher:

Relevance:

10.00%

Publisher:

Abstract:

Was the spread of agropastoralism from the Fertile Crescent throughout Europe influenced by rapid climatic shifts? We here generate idealized climate events using palaeoclimate records. In a mathematical model of regional sociocultural development, these events disturb the subsistence base of simulated forager and farmer societies. We evaluate the regional simulated transition timings and durations against a published large set of radiocarbon dates for western Eurasia; the model is able to realistically hindcast much of the inhomogeneous space-time evolution of regional Neolithic transitions. Our study shows that the inclusion of climate events improves the simulation of typical lags between cultural complexes, but that the overall difference to a model without climate events is not significant. Climate events may not have been as important for early sociocultural dynamics as endogenous factors.

Relevance:

10.00%

Publisher:

Abstract:

The Indus Valley Civilization (IVC) was one of the first great civilizations in prehistory. This bronze age civilization flourished from the end of the fourth millennium BC. It disintegrated during the second millennium BC; despite much research effort, this decline is not well understood. Less research has been devoted to the emergence of the IVC, which shows continuous cultural precursors since at least the seventh millennium BC. To understand the decline, we believe it is necessary to investigate the rise of the IVC, i.e., the establishment of agriculture and livestock, dense populations and technological developments 7000-3000 BC. Although much archaeologically typed information is available, our capability to investigate the system is hindered by poorly resolved chronology, and by a lack of field work in the intermediate areas between the Indus valley and Mesopotamia. We thus employ a complementary numerical simulation to develop a consistent picture of technology, agropastoralism and population developments in the IVC domain. Results from this Global Land Use and technological Evolution Simulator show that there is (1) fair agreement between the simulated timing of the agricultural transition and radiocarbon dates from early agricultural sites, but the transition is simulated first in India then Pakistan; (2) an independent agropastoralism developing on the Indian subcontinent; and (3) a positive relationship between archeological artifact richness and simulated population density which remains to be quantified.

Relevance:

10.00%

Publisher:

Abstract:

A simple method for efficient inversion of arbitrary radiative transfer models for image analysis is presented. The method operates by representing the shape of the function that maps model parameters to spectral reflectance by an adaptive look-up tree (ALUT) that evenly distributes the discretization error of tabulated reflectances in spectral space. A post-processing step organizes the data into a binary space partitioning tree that facilitates an efficient inversion search algorithm. In an example shallow water remote sensing application, the method performs faster than an implementation of previously published methodology and has the same accuracy in bathymetric retrievals. The method has no user configuration parameters requiring expert knowledge and minimizes the number of forward model runs required, making it highly suitable for routine operational implementation of image analysis methods. For the research community, straightforward and robust inversion allows research to focus on improving the radiative transfer models themselves without the added complication of devising an inversion strategy.
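
A stripped-down view of look-up-table inversion is sketched below; the forward model, parameter ranges and noise perturbation are invented placeholders, and the brute-force nearest-neighbour search stands in for the ALUT/BSP tree search, which accelerates exactly this step while controlling the discretization error of the tabulated reflectances.

```python
import numpy as np


def forward_model(depth, albedo, wavelengths):
    # Placeholder radiative transfer model mapping (depth, albedo) to a
    # reflectance spectrum; a real application would call the full model.
    return albedo * np.exp(-0.1 * depth * wavelengths / wavelengths.max())


wavelengths = np.linspace(400.0, 700.0, 8)    # nm
depths = np.linspace(0.5, 20.0, 60)           # m
albedos = np.linspace(0.05, 0.6, 40)

# Build the look-up table: every parameter combination and its spectrum.
params = np.array([(d, a) for d in depths for a in albedos])
table = np.array([forward_model(d, a, wavelengths) for d, a in params])


def invert(observed):
    # Return the parameters of the nearest tabulated spectrum in spectral
    # space; the ALUT/BSP tree replaces this brute-force argmin.
    idx = np.argmin(np.sum((table - observed) ** 2, axis=1))
    return params[idx]


truth = forward_model(7.3, 0.22, wavelengths)
print("retrieved (depth, albedo):", invert(truth + 0.002))
```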

Relevance:

10.00%

Publisher:

Abstract:

In this work, dynamically adaptive multigrid methods are parallelized. In such methods, a domain is covered with a grid, and the computation proceeds on this grid by using neighbouring grid points to determine the value at the next time step. Coarser and finer grids are then created and used, with the finer grids concentrating on subdomains that change over time. Using the additional grids improves the numerical properties. Such methods are usually parallelized by bisection. In the present work, the redistribution of the subdomains is instead realized by sending sets of individual grid points; this is a scheduling approach. The multigrid structures are built so that almost arbitrary distributions of grid points can occur on the grid levels. The structures are created once and changed only when needed, so that no memory allocations are necessary during the iterations. Besides the grid, additional structures such as boundary structures are required; a "colour field" structure records which core holds each outer boundary point. In the parallel adaptive refinement, 5 x 5 point coverings are created around individual grid points selected by a decision criterion, and the available decision information is used to determine more complex structures. The refinement grid therefore does not have to be completely torn down and rebuilt; only the changes to the grid need to be applied, which saves a great deal of computation time. The last step is load balancing. First, the load-transfer values are determined, which specify how many grid points are to be sent from where to where. This is done with a method called PLB, or a variant of it; PLB has so far been used mainly for combinatorial problems. Then the grid points to be sent are selected using a strategy that decides which points of a core are transferred to which neighbouring cores. Finally, the selected points are migrated: all grid-point structures are rebuilt, and the information must be packed in such a way that the receiver can rebuild its own grid-point structures. Besides the grid-point structures, the structures for parallel adaptive refinement must also be updated, and it must be possible to forward grid points when load is shipped over the load edges in several rounds. During load balancing, an additional "intermediate correction" structure is used to keep the colour field intact when neighbouring grid points are sent at the same time.
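
As a small illustration of what the load-transfer values express (how many grid points go from which core to which), the sketch below uses a toy prefix-sum scheme on a chain of cores with invented loads; it is not the PLB method used in the thesis.

```python
# Toy load balancing on a chain of cores: each core compares its load (plus
# whatever it has already received) with the global average and passes the
# surplus to its right-hand neighbour, so every core ends up with the average.

def transfer_values(loads):
    n = len(loads)
    avg = sum(loads) / n
    transfers = []          # (source core, destination core, grid points)
    carry = 0.0
    for core in range(n - 1):
        surplus = loads[core] + carry - avg
        transfers.append((core, core + 1, round(surplus)))
        carry = surplus
    return transfers


loads = [1200, 300, 900, 600]        # grid points per core (hypothetical)
for src, dst, amount in transfer_values(loads):
    direction = (f"core {src} -> core {dst}" if amount >= 0
                 else f"core {dst} -> core {src}")
    print(f"{direction}: {abs(amount)} grid points")
```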