930 results for Nested Parallelism
Abstract:
HIV-infected women are at increased risk of cervical intra-epithelial neoplasia (CIN) and invasive cervical cancer (ICC), but it has been difficult to disentangle the influences of heavy exposure to HPV infection, inadequate screening, and immunodeficiency. A case-control study including 364 CIN2/3 and 20 ICC cases matched to 1,147 controls was nested in the Swiss HIV Cohort Study (1985-2013). CIN2/3 risk was significantly associated with low CD4+ cell counts, whether measured as nadir (odds ratio (OR) per 100-cell/μL decrease = 1.15, 95% CI: 1.08, 1.22) or at CIN2/3 diagnosis (1.10, 95% CI: 1.04, 1.16). An association was evident even for nadir CD4+ 200-349 versus ≥350 cells/μL (OR = 1.57, 95% CI: 1.09, 2.25). After adjustment for nadir CD4+, a protective effect of >2-year cART use was seen against CIN2/3 (OR versus never cART use = 0.64, 95% CI: 0.42, 0.98). Despite low study power, similar associations were seen for ICC, notably with nadir CD4+ (OR for 50 versus >350 cells/μL = 11.10, 95% CI: 1.24, 100). HPV16-L1 antibodies were significantly associated with CIN2/3, but HPV16-E6 antibodies were nearly exclusively detected in ICC. In conclusion, worsening immunodeficiency, even at only moderately decreased CD4+ cell counts (200-349 CD4+ cells/μL), is a significant risk factor for CIN2/3 and cervical cancer.
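The odds ratios and confidence intervals quoted above come from regression on the matched sets; as a minimal illustration of the underlying quantities only, the following Python sketch computes an unadjusted odds ratio with a Woolf (log-based) 95% CI from a hypothetical 2x2 table (the counts are invented, not taken from the study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table with a Woolf (log-based) 95% CI.

    a, b: exposed cases / exposed controls
    c, d: unexposed cases / unexposed controls
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, purely for illustration:
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
```

Matched designs such as this study's would use conditional logistic regression instead of a crude table, but the interpretation of the reported OR and CI is the same.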
Abstract:
INTRODUCTION Known genetic variants associated with preeclampsia explain only a proportion of the heritable contribution to the development of this condition. The association between preeclampsia and the risk of cardiovascular disease later in life has encouraged the study of genetic variants important in thrombosis and vascular inflammation in relation to preeclampsia as well. The von Willebrand factor-cleaving protease, ADAMTS13, plays an important role in microvascular thrombosis, and partial deficiencies of this enzyme have been observed in association with cardiovascular disease and preeclampsia. However, it remains unknown whether decreased ADAMTS13 levels are a cause or an effect of the event in placental and cardiovascular disease. METHODS We studied the distribution of three functional genetic variants of ADAMTS13, c.1852C>G (rs28647808), c.4143_4144dupA (rs387906343), and c.3178C>T (rs142572218), in women with preeclampsia and their controls in a nested case-control study from the second Nord-Trøndelag Health Study (HUNT2). We also studied the association between ADAMTS13 activity and preeclampsia in serum samples obtained at a time unrelated to the preeclamptic pregnancy. RESULTS No differences were observed in genotype, allele, or haplotype frequencies of the ADAMTS13 variants between cases and controls, and lower levels of ADAMTS13 activity were not associated with preeclampsia. CONCLUSION Our findings indicate that ADAMTS13 variants and ADAMTS13 activity do not contribute to an increased risk of preeclampsia in the general population.
Abstract:
A nested case-control study design was used to investigate the relationship between radiation exposure and brain cancer risk in the United States Air Force (USAF). The cohort consisted of approximately 880,000 men with at least 1 year of service between 1970 and 1989. Two hundred and thirty cases were identified from hospital discharge records with a diagnosis of primary malignant brain tumor (International Classification of Diseases, 9th revision, code 191). Four controls were exactly matched with each case on year of age and race using incidence density sampling. Potential career summary extremely low frequency (ELF) and microwave-radiofrequency (MWRF) radiation exposures were based upon the duration in each occupation and an intensity score assigned by an expert panel. Ionizing radiation (IR) exposures were obtained from personal dosimetry records. Relative to the unexposed, the overall age-race adjusted odds ratio (OR) for ELF exposure was 1.39, 95 percent confidence interval (CI) 1.03-1.88. A dose-response was not evident. The same was true for MWRF, although the OR = 1.59, with 95 percent CI 1.18-2.16. Excess risk was not found for IR exposure (OR = 0.66, 95 percent CI 0.26-1.72). Increasing socioeconomic status (SES), as identified by military pay grade, was associated with elevated brain tumor risk (officer vs. enlisted personnel age-race adjusted OR = 2.11, 95 percent CI 1.98-3.01, and senior officers vs. all others age-race adjusted OR = 3.30, 95 percent CI 2.0-5.46). SES proved to be an important confounder of the brain tumor risk associated with ELF and MWRF exposure. For ELF, the age-race-SES adjusted OR = 1.28, 95 percent CI 0.94-1.74, and for MWRF, the age-race-SES adjusted OR = 1.39, 95 percent CI 1.01-1.90. These results indicate that employment in Air Force occupations with potential electromagnetic field exposures is weakly, though not significantly, associated with increased risk for brain tumors.
SES appeared to be the most consistent brain tumor risk factor in the USAF cohort. Other investigators have suggested that an association between brain tumor risk and SES may arise from differential access to medical care. However, in the USAF cohort health care is universally available. This study suggests that some factor other than access to medical care must underlie the association between SES and brain tumor risk.
Abstract:
A nested ice flow model was developed for eastern Dronning Maud Land to assist with the dating and interpretation of the EDML deep ice core. The model consists of a high-resolution higher-order ice dynamic flow model that was nested into a comprehensive 3-D thermomechanical model of the whole Antarctic ice sheet. As the drill site is in a flank position, the calculations specifically take into account the effects of horizontal advection, as deeper ice in the core originated farther inland. First, the regional velocity field and ice sheet geometry are obtained from a forward experiment over the last 8 glacial cycles. The result is subsequently employed in a Lagrangian backtracing algorithm to provide particle paths back to their time and place of deposition. The procedure directly yields the depth-age distribution, surface conditions at particle origin, and a suite of relevant parameters such as initial annual layer thickness. This paper discusses the method and the main results of the experiment, including the ice core chronology, the non-climatic corrections needed to extract the climatic part of the signal, and the thinning function. The focus is on the upper 89% of the ice core (approximately 170 kyr), as the dating below that is increasingly less robust owing to the unknown value of the geothermal heat flux. It is found that the temperature biases resulting from variations of surface elevation are up to half of the magnitude of the climatic changes themselves.
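The Lagrangian backtracing step can be sketched in a few lines: starting from a point in the ice column, step backward in time through a steady velocity field until the particle reaches the surface, which gives its time and place of deposition. The Python sketch below is a toy reverse-Euler version of that idea, with an invented velocity field and flat surface; it is not the model used in the paper:

```python
def backtrace(x, z, u, w, surface, dt=1.0, max_steps=100_000):
    """Trace a particle backward in time through a steady velocity
    field until it reaches the surface, returning the elapsed time
    and the horizontal position of deposition.

    u(x, z), w(x, z): horizontal/vertical velocity components
    surface(x):       surface elevation (here: depth coordinate grows
                      toward the surface, purely for illustration)
    """
    t = 0.0
    for _ in range(max_steps):
        if z >= surface(x):
            return t, x              # deposited here, t time units ago
        x -= u(x, z) * dt            # reverse Euler step against the flow
        z -= w(x, z) * dt
        t += dt
    raise RuntimeError("particle did not reach the surface")

# Hypothetical steady flow: ice moves in +x and sinks 1 unit per step,
# below a flat surface at z = 100 (all values invented):
t, x0 = backtrace(x=0.0, z=0.0,
                  u=lambda x, z: 1.0,
                  w=lambda x, z: -1.0,
                  surface=lambda x: 100.0)
```

In this toy field the particle is found to have been deposited 100 time units earlier and 100 units upstream, which mirrors the paper's point that deeper ice originated farther inland.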
Abstract:
Several types of parallelism can be exploited in logic programs while preserving correctness and efficiency, i.e. ensuring that the parallel execution obtains the same results as the sequential one and the amount of work performed is not greater. However, such results do not take into account a number of overheads which appear in practice, such as process creation and scheduling, which can induce a slow-down, or, at least, limit speedup, if they are not controlled in some way. This paper describes a methodology whereby the granularity of parallel tasks, i.e. the work available under them, is efficiently estimated and used to limit parallelism so that the effect of such overheads is controlled. The run-time overhead associated with the approach is usually quite small, since as much work as possible is done at compile time. Also, a number of run-time optimizations are proposed. Moreover, a static analysis of the overhead associated with the granularity control process is performed in order to decide whether it is worthwhile. The performance improvements resulting from the incorporation of grain size control are shown to be quite good, especially for systems with medium to large parallel execution overheads.
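The core idea of granularity control, spawning a task only when its estimated work exceeds the spawning overhead, can be sketched outside Prolog as well. In the following Python sketch, the threshold, cost function, and thread-pool machinery are all illustrative assumptions, not the paper's system; each task runs in parallel only when a compile-time-style cost estimate exceeds a fixed threshold:

```python
from concurrent.futures import ThreadPoolExecutor

THRESHOLD = 1000  # hypothetical cost units; below this, spawning costs more than it saves

def run_tasks(tasks, cost_estimate, pool):
    """Run each (fn, arg) pair in parallel only when its estimated
    granularity exceeds the spawning overhead; otherwise run it inline."""
    results, futures = [], []
    for fn, arg in tasks:
        if cost_estimate(arg) > THRESHOLD:
            futures.append((len(results), pool.submit(fn, arg)))
            results.append(None)          # placeholder, filled in below
        else:
            results.append(fn(arg))       # too small: execute sequentially
    for i, fut in futures:
        results[i] = fut.result()
    return results

# Illustrative use: the input's magnitude stands in for the cost estimate.
with ThreadPoolExecutor() as pool:
    out = run_tasks([(abs, -3), (abs, -2000)], cost_estimate=abs, pool=pool)
```

The paper's point is that the cost estimate itself is derived largely at compile time, so the run-time test reduces to a cheap comparison like the one above.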
Abstract:
A framework for the automatic parallelization of (constraint) logic programs is proposed and proved correct. Intuitively, the parallelization process replaces conjunctions of literals with parallel expressions. Such expressions trigger at run-time the exploitation of restricted, goal-level, independent and-parallelism. The parallelization process performs two steps. The first one builds a conditional dependency graph (which can be simplified using compile-time analysis information), while the second transforms the resulting graph into linear conditional expressions, the parallel expressions of the &-Prolog language. Several heuristic algorithms for the latter ("annotation") process are proposed and proved correct. Algorithms are also given which determine if there is any loss of parallelism in the linearization process with respect to a proposed notion of maximal parallelism. Finally, a system is presented which implements the proposed approach. The performance of the different annotation algorithms is compared experimentally in this system by studying the time spent in parallelization and the effectiveness of the results in terms of speedups.
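The second ("annotation") step can be illustrated with a greedy left-to-right grouping: consecutive goals stay in the same parallel block while they share no variables with the goals already in it. The Python sketch below handles only the unconditional case (no run-time checks, no graph simplification) and is an illustration of the idea, not one of the paper's algorithms:

```python
def annotate(goals):
    """Greedy left-to-right annotation: group consecutive goals into a
    parallel block while they share no variables with the goals already
    in the block (an unconditional form of strict independence).

    goals: list of (goal_text, set_of_variable_names) pairs
    Returns the parallel expression as a list of parallel blocks.
    """
    blocks, current, seen = [], [], set()
    for name, vars_ in goals:
        if current and vars_ & seen:
            blocks.append(current)       # dependency found: close the block
            current, seen = [], set()
        current.append(name)
        seen |= vars_
    if current:
        blocks.append(current)
    return blocks

expr = annotate([("p(X)", {"X"}), ("q(Y)", {"Y"}), ("r(X)", {"X"})])
```

Here `p(X)` and `q(Y)` share no variables and land in one block, while `r(X)` depends on `X` and starts a new one; read the result as the &-Prolog-style expression `(p(X) & q(Y)), r(X)`.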
Abstract:
Much work has been done in the areas of and-parallelism and data parallelism in Logic Programs. Such work has proceeded to a certain extent in an independent fashion. Both types of parallelism offer advantages and disadvantages. Traditional (and-) parallel models offer generality, being able to exploit parallelism in a large class of programs (including that exploited by data parallelism techniques). Data parallelism techniques on the other hand offer increased performance for a restricted class of programs. The thesis of this paper is that these two forms of parallelism are not fundamentally different and that relating them opens the possibility of obtaining the advantages of both within the same system. Some relevant issues are discussed and solutions proposed. The discussion is illustrated through visualizations of actual parallel executions implementing the ideas proposed.
Abstract:
Although studies of a number of parallel implementations of logic programming languages are now available, their results are difficult to interpret due to the multiplicity of factors involved, the effect of each of which is difficult to separate. In this paper we present the results of a high-level simulation study of or- and independent and-parallelism with a wide selection of Prolog programs that aims to determine the intrinsic amount of parallelism, independently of implementation factors, thus facilitating this separation. We expect this study will be instrumental in better understanding and comparing results from actual implementations, as shown by some examples provided in the paper. In addition, the paper examines some of the issues and tradeoffs associated with the combination of and- and or-parallelism and proposes reasonable solutions based on the simulation data obtained.
Abstract:
The &-Prolog system, a practical implementation of a parallel execution model for Prolog exploiting strict and non-strict independent and-parallelism, is described. Both automatic and manual parallelization of programs are supported. This description includes a summary of the system's language and architecture, some details of its execution model (based on the RAP-WAM model), and data on its performance on sequential workstations and shared memory multiprocessors, which is compared to that of current Prolog systems. The results to date show significant speed advantages over state-of-the-art sequential systems.
Abstract:
This paper presents some fundamental properties of independent and-parallelism and extends its applicability by enlarging the class of goals eligible for parallel execution. A simple model of (independent) and-parallel execution is proposed and issues of correctness and efficiency discussed in the light of this model. Two conditions, "strict" and "non-strict" independence, are defined and then proved sufficient to ensure correctness and efficiency of parallel execution: if goals which meet these conditions are executed in parallel, the solutions obtained are the same as those produced by standard sequential execution. Also, in the absence of failure, the parallel proof procedure does not generate any additional work (with respect to standard SLD-resolution) while the actual execution time is reduced. Finally, in the case of failure of any of the goals no slow-down will occur. For strict independence the results are shown to hold independently of whether the parallel goals execute in the same environment or in separate environments. In addition, a formal basis is given for the automatic compile-time generation of independent and-parallelism: compile-time conditions to efficiently check goal independence at run-time are proposed and proved sufficient. Also, rules are given for constructing simpler conditions if information regarding the binding context of the goals to be executed in parallel is available to the compiler.
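A minimal sketch of the strict-independence condition: two goals may run in parallel when every variable they share is already bound to a ground term, so that neither execution can affect the other's bindings. The Python function below is a simplification for illustration (the paper's actual compile-time and run-time conditions are richer), operating on sets of variable names:

```python
def strictly_independent(goal1_vars, goal2_vars, ground_vars):
    """Two goals are strictly independent when every variable they
    share is already ground, so neither execution can influence the
    other's bindings (a simplified reading of the condition)."""
    shared = goal1_vars & goal2_vars
    return shared <= ground_vars

# p(X, Y) and q(Y, Z) may run in parallel if the shared Y is ground:
ok = strictly_independent({"X", "Y"}, {"Y", "Z"}, ground_vars={"Y"})
```

When the goals share no variables at all, the shared set is empty and the test succeeds trivially, which matches the intuition that variable-disjoint goals are always strictly independent.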
Abstract:
We present two new algorithms which perform automatic parallelization via source-to-source transformations. The objective is to exploit goal-level, unrestricted independent and-parallelism. The proposed algorithms use as targets new parallel execution primitives which are simpler and more flexible than the well-known &/2 parallel operator. This makes it possible to generate better parallel expressions by exposing more potential parallelism among the literals of a clause than is possible with &/2. The difference between the two algorithms stems from whether the order of the solutions obtained is preserved or not. We also report on a preliminary evaluation of an implementation of our approach. We compare the performance obtained to that of previous annotation algorithms and show that relevant improvements can be obtained.
Abstract:
In this paper we propose a complete scheme for automatic exploitation of independent and-parallelism in CLP programs. We first discuss the new problems involved because of the different properties of the independence notions applicable to CLP. We then show how independence can be derived from a number of standard analysis domains for CLP. Finally, we perform a preliminary evaluation of the efficiency, accuracy, and effectiveness of the approach by implementing a parallelizing compiler for CLP based on the proposed ideas and applying it on a number of CLP benchmarks.
Abstract:
Logic programming systems which exploit and-parallelism among non-deterministic goals rely on notions of independence among those goals in order to ensure certain efficiency properties. "Non-strict" independence (NSI) is a more relaxed notion than the traditional notion of "strict" independence (SI) which still ensures the relevant efficiency properties and can allow considerably more parallelism than SI. However, all compilation technology developed to date has been based on SI, because of the intrinsic complexity of exploiting NSI. This is related to the fact that NSI cannot be determined "a priori" as SI can. This paper fills this gap by developing a technique for compile-time detection and annotation of NSI. It also proposes algorithms for combined compile-time/run-time detection, presenting novel run-time checks for this type of parallelism. Also, a transformation procedure to eliminate shared variables among parallel goals is presented, aimed at performing as much work as possible at compile-time. The approach is based on the knowledge of certain properties regarding the run-time instantiations of program variables (sharing and freeness) for which compile-time technology is available, with new approaches being currently proposed. Thus, the paper does not deal with the analysis itself, but rather with how the analysis results can be used to parallelize programs.
Abstract:
Andorra-I is the first implementation of a language based on the Andorra Principle, which states that determinate goals can (and should) be run before other goals, and even in a parallel fashion. This principle has materialized in a framework called the Basic Andorra model, which allows or-parallelism as well as (dependent) and-parallelism for determinate goals. In this report we show that it is possible to further extend this model in order to allow general independent and-parallelism for nondeterminate goals, without greatly modifying the underlying implementation machinery. A simple and easy way to realize such an extension is to make each (nondeterminate) independent goal determinate by using a special "bagof" construct. We also show that this can be achieved automatically by compile-time translation from original Prolog programs. A transformation that fulfills this objective, and which can easily be automated, is presented in this report.
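The effect of the "bagof" construct can be mimicked in any language with generators: a nondeterminate goal produces a stream of solutions, and collecting the whole stream into a single bag yields a determinate call with exactly one answer. The following is a hypothetical Python analogy (the `member` and `bagof` functions here are illustrative, not the report's transformation):

```python
def member(xs):
    """A nondeterminate 'goal', modelled as a generator: each yielded
    value corresponds to one solution at a choice point."""
    yield from xs

def bagof(goal):
    """Collect all solutions of a nondeterminate goal into one bag.
    The wrapped call always succeeds exactly once, i.e. it is
    determinate, so a scheduler following the Andorra Principle
    could run it eagerly."""
    return list(goal)

solutions = bagof(member([1, 2, 3]))
```

The point of the transformation is exactly this change of status: the original goal has three answers, while the `bagof`-wrapped goal has one (the list of all three), making it eligible for the determinate-first scheduling of the Basic Andorra model.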