936 results for "Thread safe parallel run-time"
Abstract:
A high-resolution mtDNA phylogenetic tree allowed us to look backward in time to investigate purifying selection. Purifying selection was very strong in the last 2,500 years, continuously eliminating pathogenic mutations back to the end of the Younger Dryas (∼11,000 years ago), when a large population expansion likely relaxed selection pressure. This was preceded by a phase of stable selection until another relaxation occurred during the out-of-Africa migration. Demography and selection are closely related: expansions led to relaxation of selection, and mutations of higher pathogenicity significantly decreased the growth of descendants. The only detectable positive selection was the recurrence of highly pathogenic nonsynonymous mutations (m.3394T>C-m.3397A>G-m.3398T>C) at interior branches of the tree, preventing the formation of a dinucleotide STR (TATATA) in the MT-ND1 gene. At the most recent time scale, in 124 mother-child transmissions, purifying selection was detectable through the loss of mtDNA variants with high predicted pathogenicity. A few haplogroup-defining sites were also heteroplasmic, in agreement with a significant propensity at 349 positions in the phylogenetic tree to revert to the ancestral variant. This nonrandom mutation property explains the observation of heteroplasmic mutations at some haplogroup-defining sites in sequencing datasets, which need not indicate poor quality as has been claimed.
Abstract:
Background: Despite the small size of the incision, the scar left by open repair of epigastric hernia in children is unaesthetic. Few laparoscopic approaches to epigastric hernia repair have been previously proposed, and none has gained wide acceptance from pediatric surgeons. In this study, we present our experience with a scarless laparoscopic approach using a percutaneous suturing technique for epigastric hernia repair in children. Methods: Ten consecutive patients presenting with an epigastric hernia 15 mm or further from the umbilicus underwent laparoscopic hernia repair. A 5-mm 30°-angle laparoscope is introduced through an umbilical trocar and a 3-mm laparoscopic dissector is introduced through a stab incision in the right flank. After opening and dissecting the parietal peritoneum, the fascial defect is identified and closed using 2-0 polyglactin thread with a percutaneous suturing technique. Intraoperative and postoperative clinical data were collected. Results: All patients successfully underwent laparoscopic epigastric hernia repair. Median age at surgery was 79 months and the median distance from the umbilicus to the epigastric defect was 4 cm. Operative time ranged from 35 to 75 min. Every hernia was closed without any incidents. The follow-up period ranged from 2 to 12 months. No postoperative complications or recurrences were registered. No scar was visible in these patients. Conclusion: This scarless laparoscopic technique for epigastric hernia repair is safe and reliable. We believe this technique might become the gold standard of care in the near future.
Abstract:
OBJECTIVE: To assess the long-term safety and efficacy of unsupervised rehabilitation (USR) in low-risk patients with coronary artery disease. METHODS: We carried out a retrospective study with 30 patients divided into two groups: group I (GI), 15 patients from private clinics undergoing unsupervised rehabilitation; and group II (GII), a control group of 15 patients followed on an outpatient basis, matched by age, sex, and clinical findings. GI was encouraged to exercise under indirect supervision (jogging, treadmill, and sports). GII received the usual clinical treatment. RESULTS: The pre- and postobservation values in GI were, respectively: VO2 peak (mL/kg/min), 24±5 and 31±9; VO2 peak/peak HR, 0.18±0.05 and 0.28±0.13; peak double product (DP peak), 26,800±7,000 and 29,000±6,500; % peak HR/predicted HRmax, 89.5±9 and 89.3±9. The pre- and postobservation values in GII were: VO2 peak (mL/kg/min), 27±7 and 28±5; VO2 peak/peak HR, 0.2±0.06 and 0.2±0.05; DP peak, 24,900±8,000 and 25,600±8,000; and % peak HR/predicted HRmax, 91.3±9 and 91.1±11. The following comparisons were significant: preobservation versus postobservation VO2 peak in GI (p=0.0063); postobservation VO2 peak in GI versus postobservation VO2 peak in GII (p=0.0045); and postobservation VO2 peak/peak HR in GI versus GII (p<0.0001). The follow-up periods in GI and GII were, respectively, 41.33±20.19 months and 20.60±8.16 months (p<0.05). No difference between the groups was observed in coronary risk factors, therapeutic management, or evolution of ischemia. No cardiovascular events secondary to USR were observed in 620 patient-months. CONCLUSION: USR was safe and efficient in low-risk patients with coronary artery disease and provided benefits at the peripheral level.
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current general-purpose processors integrate several cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit; nowadays even desktop computers use multicore processors, and the industry trend is to increase the number of integrated cores as technology matures. Graphics Processor Units (GPU), originally designed to handle only video processing, have in turn emerged as interesting alternatives for algorithm acceleration, with currently available GPUs able to run on the order of 200 to 400 parallel processing threads. Scientific computing can be implemented on this hardware thanks to the programmability of newer GPUs, which have come to be known as General Processing Graphics Processor Units (GPGPU). However, GPGPUs are not general-purpose processors and offer little memory compared with that available to general-purpose processors, so the implementation of algorithms must be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices that can implement hardware logic with low latency, high parallelism and deep pipelines; they can be used to implement specific algorithms that need to run at very high speeds, but programming them is harder than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. Starting from the characteristics of the hardware, we seek to determine the properties a parallel algorithm must have in order to be accelerated, and which of these architectures is most suitable for its implementation, taking into account in particular the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware. We also look at identifying those algorithms that may fit better on a given architecture, as well as at combining architectures so that they complement each other beneficially.
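To make the notion of SMP-friendly parallelism concrete, here is a minimal sketch (illustrative only, not code from the cited work) assuming a C compiler with OpenMP support: a loop whose iterations carry no data dependences can be split across the cores of a multicore processor with a single directive.

```c
#include <omp.h>
#include <stdio.h>

#define N 1000000

/* Element-wise vector operation: iterations are independent, so the
 * loop can be distributed over the cores of an SMP processor. */
int main(void) {
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    /* Each thread works on a disjoint chunk; no synchronization is
     * needed inside the loop because there are no data dependences. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f (threads available: %d)\n",
           c[N - 1], omp_get_max_threads());
    return 0;
}
```

A loop with a dependence between iterations (for example, c[i] = c[i-1] + a[i]) could not be distributed this way without restructuring, which is precisely the kind of algorithmic property the work proposes to analyse when matching algorithms to SMP, GPGPU or FPGA targets.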
Abstract:
Nowadays, considerable attention from academia and research teams is devoted to the potential of the 60 GHz frequency band in wireless communications. The use of the 60 GHz band offers great possibilities for a wide variety of applications that are yet to be implemented, and these applications also imply major implementation challenges. One such example is building a high-data-rate transceiver that at the same time has very low power consumption. In this paper we present a prototype of a Single Carrier (SC) transceiver system, giving a brief overview of the baseband design and emphasizing the most important decisions that need to be made. A brief overview of the possible approaches to implementing the equalizer, the most complex module in the SC transceiver, is also presented. The main focus of this paper is to propose a parallel architecture for the receiver in a Single Carrier communication system. This provides higher data rates than the communication system could otherwise achieve, at the price of higher power consumption. The proposed receiver architecture is illustrated in this paper, and the results of its implementation are given in comparison with the corresponding serial implementation.
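The paper targets a hardware realisation, but the throughput/power trade-off it describes can be sketched in software terms. The fragment below is an illustrative analogy only (the per-block equalizer is a hypothetical placeholder): it contrasts a serial receiver, which equalizes one received block at a time, with a parallel one that processes several independent blocks concurrently, gaining throughput at the cost of more concurrently active processing resources.

```c
#include <omp.h>
#include <stddef.h>

#define BLOCK_LEN 64

/* Hypothetical per-block equalizer: in a real SC receiver this would be
 * the actual (e.g. frequency-domain) equalization; here it is a stand-in. */
static void equalize_block(const float *in, float *out) {
    for (size_t i = 0; i < BLOCK_LEN; i++)
        out[i] = in[i];   /* placeholder for the real filtering */
}

/* Serial receiver: one block after another (lower throughput). */
void receive_serial(const float *in, float *out, size_t nblocks) {
    for (size_t b = 0; b < nblocks; b++)
        equalize_block(in + b * BLOCK_LEN, out + b * BLOCK_LEN);
}

/* Parallel receiver: several blocks equalized concurrently (higher
 * throughput, but more active processing resources and hence power). */
void receive_parallel(const float *in, float *out, size_t nblocks) {
    #pragma omp parallel for
    for (long b = 0; b < (long)nblocks; b++)
        equalize_block(in + b * BLOCK_LEN, out + b * BLOCK_LEN);
}
```

In a hardware receiver the same idea corresponds to replicating the equalizer datapath, which is where the additional power consumption comes from.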
Abstract:
This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. Parallelization is implemented in such a way that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. A detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
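The paper distributes the work from Octave; purely as a language-agnostic illustration of the same idea (this is not the authors' code), the C/MPI sketch below splits independent Monte Carlo replications across the processes of a cluster and combines their partial sums with a reduction.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* One Monte Carlo replication: here just an estimate of E[U^2] for
 * U ~ Uniform(0,1); a real study would simulate its own model instead. */
static double one_replication(void) {
    double u = (double)rand() / RAND_MAX;
    return u * u;
}

int main(int argc, char **argv) {
    int rank, size;
    const long total_reps = 1000000;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    srand(1234u + (unsigned)rank);       /* simplistic per-process seeding */
    long my_reps = total_reps / size;    /* each process takes a share */

    double local_sum = 0.0;
    for (long r = 0; r < my_reps; r++)
        local_sum += one_replication();

    /* Combine the partial sums on the root process. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("estimate: %f from %ld replications on %d processes\n",
               global_sum / (my_reps * size), my_reps * (long)size, size);

    MPI_Finalize();
    return 0;
}
```

Because the replications are independent, the only communication is a single reduction at the end, which is why this class of problem parallelizes so well and yields the reported reductions in computational time.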
Abstract:
This note describes ParallelKnoppix, a bootable CD that allows creation of a Linux cluster in very little time. An experienced user can create a cluster ready to execute MPI programs in less than 10 minutes. The computers used may be heterogeneous machines of the IA-32 architecture. When the cluster is shut down, all machines except one are in their original state, and the remaining machine can be returned to its original state by deleting a directory. The system thus provides a means of using non-dedicated computers to create a cluster. An example session is documented.
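For context, the sketch below is the kind of minimal MPI program such a freshly booted cluster can run (an illustrative example, not taken from the note): each process reports its rank and the node it is executing on, which is a quick way to verify that the non-dedicated machines have joined the cluster.

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal check that the cluster runs MPI jobs: every process prints
 * its rank and the name of the node it landed on. */
int main(int argc, char **argv) {
    int rank, size, namelen;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(node, &namelen);

    printf("process %d of %d running on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}
```

Built with mpicc and launched with mpirun -np <n>, the output lists one line per process spread over the cluster nodes.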
Abstract:
This paper explores real exchange rate behavior in Mexico from 1960 until 2005. Since the empirical analysis reveals that the real exchange rate is not mean reverting, we propose that economic fundamentals affect its evolution in the long run. Therefore, based on equilibrium exchange rate paradigms, we propose a simple model of real exchange rate determination which includes relative labor productivity, real interest rates and net foreign assets over a long period of time. Our analysis also considers the dynamic adjustment in response to shocks through impulse response functions derived from a multivariate VAR model.
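The abstract does not report the exact specification; as a hedged sketch of what such a long-run (cointegrating) relation typically looks like with the fundamentals listed, one may write

\[ q_t = \beta_0 + \beta_1\,\mathit{prod}_t + \beta_2\,r_t + \beta_3\,\mathit{nfa}_t + \varepsilon_t , \]

where \(q_t\) is the (log) real exchange rate, \(\mathit{prod}_t\) relative labor productivity, \(r_t\) the real interest rate (or the differential against the trading partner), \(\mathit{nfa}_t\) net foreign assets, and \(\varepsilon_t\) a stationary error term; the signs and magnitudes of the \(\beta\) coefficients are what the VAR-based analysis estimates.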
Abstract:
The emergence of new types of applications, such as video on demand, virtual reality and videoconferencing, among others, is characterized by the need to meet their deadlines. In the literature, this type of application has been called periodic soft real-time (SRT). This work focuses on the problem of scheduling this new type of application on non-dedicated clusters.
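To make the notion of a periodic soft real-time task concrete, here is a minimal sketch in C using POSIX timing calls (an illustration only, unrelated to the scheduling scheme of the cited work): every period the task performs its work and then sleeps until the next absolute release time; an occasional late wake-up is a deadline miss that degrades quality but is not fatal, which is what distinguishes soft from hard real time.

```c
#define _POSIX_C_SOURCE 200112L
#include <time.h>

#define PERIOD_NS 33333333L   /* ~30 frames per second, e.g. video on demand */

static void do_frame_work(long frame) {
    (void)frame;              /* placeholder for decoding/rendering one frame */
}

int main(void) {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (long frame = 0; frame < 300; frame++) {
        do_frame_work(frame);

        /* Compute the next absolute release time... */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* ...and sleep until it; waking up late here is a (soft) deadline miss. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}
```

On a non-dedicated cluster, the scheduling problem the work addresses is to place and run many such periodic tasks alongside ordinary workloads while keeping deadline misses acceptably rare.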
Abstract:
In recent years major technical advances have greatly supported the development of mass spectrometry (MS)-based proteomics. Currently, this technique can efficiently detect, identify and quantify thousands of proteins. However, it is not yet sufficiently powerful to provide a comprehensive analysis of the proteome changes correlated with biological phenomena. The aim of our project was the development of a new strategy for the specific detection and quantification of proteome variations, based on measurements of protein synthesis rather than total protein amounts. The rationale for this approach was that changes in protein synthesis more closely reflect dynamic cellular responses than changes in total protein concentrations. Our starting idea was to couple "pulsed" stable-isotope labeling of proteins with a specific MS acquisition method based on precursor ion scan (PIS), to specifically detect proteins that incorporated the label and to simultaneously estimate their abundance relative to the unlabeled protein isoform. Such an approach could highlight proteins with the highest synthesis rate in a given time frame, including proteins specifically up-regulated by a given biological stimulus. As a first step, we tested different isotope-labeled amino acids in combination with dedicated PIS methods and showed that this leads to specific detection of labeled proteins. Sensitivity, however, turned out to be lower than that of an untargeted analysis run on a more recent instrument, due to MS hardware limitations (Chapter 2.1). We next used metabolic labeling to distinguish proteins of cellular origin from a high background of unlabeled (serum) proteins, for the differential analysis of two serum-containing culture media conditioned by labeled human cancer cells (Chapter 2.2). As a parallel project we developed a new quantification method (named ISIS), which uses pairs of stable-isotope-labeled amino acids able to produce specific reporter ions that can be used for relative quantification. The ISIS method was applied to the analysis of two fully, yet differentially, labeled cancer cell lines, as described in Chapter 2.3. Next, in line with the original purpose of this thesis, we used a "pulsed" variant of ISIS to detect proteome changes in HeLa cells after infection with human Herpes Simplex Virus-1 (Chapter 2.4). This virus is known to repress the synthesis of host cell proteins in order to exploit the translation machinery for the massive production of virions. As expected, high synthesis rates were measured for the detected viral proteins, confirming their up-regulation. Moreover, we identified a number of human proteins whose synthesis/degradation ratio (S/D) was affected by the viral infection and which could provide clues on the strategies used by the virus to hijack the cellular machinery. Overall, in this work we showed that metabolic labeling can be employed in alternative ways to investigate poorly explored dimensions in proteomics.
Abstract:
Introduction: Renal transplantation is considered the treatment of choice for end-stage renal disease. However, the association of occlusive aorto-iliac disease and chronic renal failure is frequent, and aorto-iliac reconstruction may be necessary prior to renal transplantation. This retrospective study reviews the results of this operative strategy. Material and Methods: Between January 2001 and June 2010, 309 patients underwent renal transplantation at our institution and 8 patients had prior aorto-iliac reconstruction using prosthetic material. There were 6 men and 2 women with a median age of 62 years (range 51-70). Five aorto-bifemoral and 2 aorto-bi-iliac bypasses were performed for stage II disease (n=5), stage IV disease (n=1) and aortic aneurysm (n=1). In one patient, iliac kissing stents and an ilio-femoral bypass were implanted. Four cadaveric and 4 living-donor renal transplantations were performed, with an interval of 2 months to 10 years after revascularization. The results were analysed with respect to graft and patient survival. Differences between groups were tested by the log-rank method. Results: No complications and no deaths occurred in the postoperative period. All bypasses remained patent during follow-up. The median post-transplantation follow-up was 46 months for all patients and 27 months for patients with prior revascularization. In the revascularized group and the control group, graft and patient survival at 1 year were respectively 100%/96% and 100%/99%, and at 5 years 86%/86% and 86%/94%, without significant differences between the groups. Discussion: Our results suggest that renal transplantation following prior aorto-iliac revascularisation with prosthetic material is safe and effective. Patients with end-stage renal disease and concomitant aorto-iliac disease should therefore be considered for renal transplantation. However, caution in the interpretation of the results is indicated due to the small sample size of our study.
Abstract:
We examine the long run relationship between stock prices and goods prices to gauge whether stock market investment can hedge against inflation. Data from sixteen OECD countries over the period 1970-2006 are used. We account for different inflation regimes with the use of sub-sample regressions, whilst maintaining the power of tests in small sample sizes by combining time-series data across our sample countries in a panel unit root and panel cointegration econometric framework. The evidence supports a positive long-run relationship between goods prices and stock prices with the estimated goods price coefficient being in line with the generalized Fisher hypothesis.
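The exact regression is not reported in the abstract; as a hedged sketch, the long-run relation examined under the generalized Fisher hypothesis can be written as a panel cointegrating regression of log stock prices on log goods prices,

\[ \ln S_{it} = \alpha_i + \beta \,\ln P_{it} + \varepsilon_{it}, \]

where \(S_{it}\) and \(P_{it}\) are the stock price and goods price indices of country \(i\) at time \(t\), \(\alpha_i\) a country-specific intercept and \(\varepsilon_{it}\) a stationary error. A \(\beta\) close to (or above) one means that stock prices keep pace with goods prices in the long run, so that equities hedge inflation, which is how the statement that the estimated goods price coefficient is "in line with the generalized Fisher hypothesis" should be read.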
Abstract:
Research into the biomechanical manifestation of fatigue during exhaustive runs is increasingly popular, but a better understanding of how spring-mass behaviour adapts during strenuous, self-paced exercise is still needed in order to develop optimized training and injury-prevention programs. This study investigated continuous changes in running mechanics and spring-mass behaviour during a 5-km run. Twelve competitive triathletes performed a 5-km running time trial (mean performance: 17 min 30 s) on a 200-m indoor track. Vertical and anterior-posterior ground reaction forces were measured every 200 m by a 5-m long force platform system and used to determine spring-mass model characteristics. After a fast start, running velocity progressively decreased (-11.6%; P<0.001) in the middle part of the race before an end spurt in the final 400-600 m. Stride length (-7.4%; P<0.001) and stride frequency (-4.1%; P=0.001) decreased over the 25 laps, while contact time (+8.9%; P<0.001) and total stride duration (+4.1%; P<0.001) progressively lengthened. Peak vertical forces (-2.0%; P<0.01) and leg compression (-4.3%; P<0.05), but not centre-of-mass vertical displacement (+3.2%; P>0.05), decreased with time. As a result, vertical stiffness decreased (-6.0%; P<0.001) during the run, whereas changes in leg stiffness were not significant (+1.3%; P>0.05). Spring-mass behaviour thus progressively changes during a 5-km time trial towards a deteriorated vertical stiffness, which alters impact and force production characteristics.
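For reference, these are the standard spring-mass model definitions usually applied to force platform data (stated here as background, not as equations quoted from the paper):

\[ k_{\mathrm{vert}} = \frac{F_{\max}}{\Delta z}, \qquad k_{\mathrm{leg}} = \frac{F_{\max}}{\Delta L}, \]

where \(F_{\max}\) is the peak vertical ground reaction force, \(\Delta z\) the downward displacement of the centre of mass during contact and \(\Delta L\) the maximal leg compression. With these definitions, the reported drop in vertical stiffness follows from a lower peak force combined with an unchanged (slightly larger) centre-of-mass displacement, while the parallel decreases in peak force and leg compression leave leg stiffness roughly constant.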
Abstract:
This paper studies the aggregate and distributional implications of Markov-perfect tax-spending policy in a neoclassical growth model with capitalists and workers. Focusing on the long run, our main findings are: (i) it is optimal for a benevolent government, which cares equally about its citizens, to tax capital heavily and to subsidise labour; (ii) a Pareto-improving means to reduce inefficiently high capital taxation under discretion is for the government to place greater weight on the welfare of capitalists; (iii) capitalists' and workers' preferences regarding the optimal amount of "capitalist bias" are not aligned, implying a conflict of interests.
Abstract:
A major initiative of the Thatcher and Major Conservative administrations was that public sector ancillary and professional services provided by incumbent direct service organisations [DSOs] be put out to tender. Analyses of this initiative, in the UK and elsewhere, found costs were often reduced in the short run. However, few if any studies went beyond the first round of tendering. We analyze data collected over successive rounds of tendering for cleaning and catering services of Scottish hospitals in order to assess the long term consequences of this initiative. The experience of the two services was very different. Cost savings for cleaning services tended to increase with each additional round of tendering and became increasingly stable. In accordance with previous results in the literature, DSOs produced smaller cost reductions than private contractors: probably an inevitable consequence of the tendering process at the time. Cost savings from DSOs tended to disappear during the first round of tendering, but they appear to have been more permanent in successive rounds. Cost savings for catering, on the other hand, tended to be much smaller, and these were not sustained.