993 results for Grew, Nehemiah, 1641-1712


Relevance: 10.00%

Publisher:

Abstract:

A single thallus of the rare red seaweed Tsengia bairdii (Farlow) K. Fan et Y. Fan (= Platoma bairdii (Farlow) Kuckuck) (Nemastomataceae) was collected on a subtidal pebble on the west coast of Scotland. The terete gelatinous axes, which were only 7 mm high, were monoecious. They bore numerous cystocarps and a few spermatangia, which represent the first observation of male structures in this genus. Released carpospores grew into expanded basal discs that gave rise to erect axes bearing irregularly cruciate tetrasporangia. Irregularly cruciate to zonate tetrasporangia were also formed on these basal discs. Karyological studies on dividing tetrasporocytes showed about 25 bodies, identified as paired meiotic chromosomes on the basis of their size in comparison to mitotic and meiotic chromosomes in other red algal species. These observations confirm the isomorphic life history inferred from early field collections and show that this species is monoecious. Tsengia bairdii is an extremely rare seaweed in Europe; it seems to be confined to sublittoral cobbles and has a temporally patchy distribution.

Abstract:

A population of Gelidium latifolium (Greville) Bornet et Thuret (Rhodophyta) from Portstewart, County Antrim, Northern Ireland, was dominated by tetrasporophytes. When grown in culture, excised tips from 10 non-reproductive individuals all formed tetrasporangial branches. Chromosome counts in mitotic nuclei of vegetative cells from cultured tetrasporophytic apices were 58 +/- 4 chromosomes. In nuclei of dividing tetrasporocytes there were 29 +/- 2 larger bodies that were interpreted as paired meiotic chromosomes. Field-collected tetrasporophytes from Islandmagee, County Antrim, also showed approximately 29 pairs of chromosomes during meiosis in tetrasporocytes. This is the first report of meiosis in G. latifolium and the first direct demonstration of meiosis in this commercially important genus. In germinating tetraspores, the haploid nucleus initially divided prior to or during formation of the germination tube. The two daughter nuclei then underwent synchronous mitoses to form four haploid nuclei (n = 29 +/- 2), only one of which entered the germination tube. The sporeling survival rate was low, and few plants grew to maturity. The largest of these was diploid, with 55-58 chromosomes, and formed spermatangia after 14 months in culture. Other plants, which were abnormally bushy and densely branched, failed to reproduce. Since the most vigorous individual (and possibly also the other survivors) had apparently diploidized spontaneously during development, it is possible that the lack of gametophytes in the local G. latifolium population results from poor viability of haploid sporelings.

Abstract:

Simultaneous multithreading (SMT) processors dynamically share processor resources among multiple threads. In general, shared SMT resources may be managed explicitly, for instance by dynamically setting queue occupancy bounds for each thread, as in the DCRA and Hill-Climbing policies. Alternatively, resources may be managed implicitly; that is, resource usage is controlled by placing the desired instruction mix in the resources. In this case, the main resource management tool is the instruction fetch policy, which must predict the behavior of each thread (branch mispredictions, long-latency loads, etc.) as it fetches instructions.
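As a concrete illustration of an implicit, fetch-driven policy (the abstract above does not name one), the well-known ICOUNT heuristic fetches from the thread with the fewest instructions in the pipeline; the thread ids and counts in this sketch are made up:

```python
# Minimal sketch of the ICOUNT fetch heuristic: each cycle, fetch
# from the thread with the fewest instructions currently occupying
# the front-end queues (thread ids and counts are hypothetical).

def icount_pick(in_flight):
    """Return the id of the thread with the fewest in-flight instructions."""
    return min(in_flight, key=in_flight.get)

# Thread 1 has flooded the queues (e.g. it is stalled on a
# long-latency load), so fetch bandwidth is steered to thread 0:
print(icount_pick({0: 12, 1: 47}))  # -> 0
```

By steering fetch away from stalled threads, the instruction mix left in the shared queues is regulated without any explicit per-thread bounds.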

Abstract:

Traditional static analysis fails to auto-parallelize programs with complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer discover parallelism. The programmer hand-picks code transformations from among the proposed candidates, which are then applied by automatic code transformation techniques.

This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level rather than at the element level or byte level in order to limit the profiling overhead. We perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the belief that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in the program's outer loops. This observation validates our approach to whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data structure members, this observation also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for state-of-the-art compilers. In cases where traditional compilation techniques do find parallelism, our approach enables the discovery of higher degrees of parallelism, yielding a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
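The whole-data-structure dependence tracking described above can be sketched as follows; the stage names, structure ids, and helper function are hypothetical, not the paper's actual tool:

```python
# Sketch of dependence tracking at whole-data-structure granularity:
# each profiled access logs (pipeline stage, 'R' or 'W', structure id),
# and a dependence runs from stage A to stage B whenever A writes a
# structure that B reads. Element-level addresses are never recorded,
# which is what keeps the profiling overhead low.

from collections import defaultdict

def structure_deps(trace):
    writers = defaultdict(set)   # structure id -> stages writing it
    readers = defaultdict(set)   # structure id -> stages reading it
    for stage, op, struct in trace:
        (writers if op == 'W' else readers)[struct].add(stage)
    return {(w, r)
            for struct, ws in writers.items()
            for w in ws
            for r in readers[struct]
            if w != r}

# An outer-loop body split into three hypothetical pipeline stages:
trace = [
    ("parse",   "W", "token_list"),
    ("analyze", "R", "token_list"),
    ("analyze", "W", "ir_tree"),
    ("emit",    "R", "ir_tree"),
]
print(sorted(structure_deps(trace)))  # -> [('analyze', 'emit'), ('parse', 'analyze')]
```

The resulting chain parse -> analyze -> emit is exactly the coarse-grain pipeline shape the tool proposes to the programmer.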

Abstract:

Branch prediction feeds a speculative execution processor core with instructions. Branch mispredictions are inevitable and have negative effects on performance and energy consumption. With the advent of highly accurate conditional branch predictors, nonconditional branch instructions are gaining importance.

Abstract:

As a result of resource limitations, state in branch predictors is frequently shared between uncorrelated branches. This interference can significantly limit prediction accuracy. In current predictor designs, the branches sharing prediction information are determined by their branch addresses and thus branch groups are arbitrarily chosen during compilation. This feasibility study explores a more analytic and systematic approach to classify branches into clusters with similar behavioral characteristics. We present several ways to incorporate this cluster information as an additional information source in branch predictors.
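One minimal way to realise such behaviour-based clustering (illustrative only, not the paper's method) is to quantise each branch's behaviour vector, e.g. its taken rate and transition rate, so that behaviourally similar branches share a cluster id regardless of their addresses:

```python
# Toy behaviour-based branch clustering: branches with similar
# dynamic behaviour land in the same cluster, independent of the
# branch address that current predictors would use.

def cluster_key(taken_rate, transition_rate, bins=4):
    """Map a behaviour vector onto a coarse grid of cluster ids."""
    q = lambda x: min(int(x * bins), bins - 1)
    return (q(taken_rate), q(transition_rate))

# Two branches at unrelated addresses but with near-identical
# behaviour fall into the same cluster; a very different branch
# does not:
print(cluster_key(0.93, 0.10) == cluster_key(0.95, 0.12))  # -> True
print(cluster_key(0.93, 0.10) == cluster_key(0.50, 0.90))  # -> False
```

A predictor could then share state per cluster rather than per arbitrary address group, so that interference occurs only between branches that behave alike.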

Abstract:

Caches hide the growing latency of accesses to the main memory from the processor by storing the most recently used data on-chip. To limit the search time through the caches, they are organized in a direct-mapped or set-associative way. Such an organization introduces many conflict misses that hamper performance. This paper studies randomizing set index functions, a technique to place the data in the cache in such a way that conflict misses are avoided. The performance of such a randomized cache strongly depends on the randomization function. This paper discusses a methodology to generate randomization functions that perform well over a broad range of benchmarks. The methodology uses profiling information to predict the conflict miss rate of randomization functions. Then, using this information, a search algorithm finds the best randomization function. Due to implementation issues, it is preferable to use a randomization function that is extremely simple and can be evaluated in little time. For these reasons, we use randomization functions where each randomized address bit is computed as the XOR of a subset of the original address bits. These functions are chosen such that they operate on as few address bits as possible and have few inputs to each XOR. This paper shows that to index a 2^m-set cache, it suffices to randomize m+2 or m+3 address bits and to limit the number of inputs to each XOR to 2 bits to obtain the full potential of randomization. Furthermore, it is shown that the randomization function that we generate for one set of benchmarks also works well for an entirely different set of benchmarks. Using the described methodology, it is possible to reduce the implementation cost of randomization functions with only an insignificant loss in conflict reduction.
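The XOR-based index functions described above can be sketched directly; the particular bit subsets below are invented for illustration, not the profiled ones the paper generates:

```python
# Sketch of an XOR-based randomising set-index function for a
# 2^m-set cache: each randomised index bit is the XOR of a small
# subset (here at most two) of the block-address bits.

def xor_index(addr, subsets):
    """subsets[i] lists the address-bit positions whose XOR gives
    bit i of the set index."""
    index = 0
    for i, bits in enumerate(subsets):
        bit = 0
        for b in bits:
            bit ^= (addr >> b) & 1
        index |= bit << i
    return index

# m = 3 (an 8-set cache) using m + 2 = 5 address bits, with at most
# two inputs per XOR gate.
subsets = [(0, 3), (1, 4), (2,)]

# These two addresses collide under a conventional modulo-8 index
# (both map to set 1) but are separated by the randomised function:
print(xor_index(0b01001, subsets), xor_index(0b00001, subsets))  # -> 0 1
```

In hardware each index bit costs only a 2-input XOR gate, which is the implementation-cost argument the paper makes.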

Abstract:

MinneSPEC proposes reduced input sets that microprocessor designers can use to model representative short-running workloads. A four-step methodology verifies the program behavior similarity of these input sets to reference sets.

Abstract:

Randomising set index functions can reduce the number of conflict misses in data caches by spreading the cache blocks uniformly over all sets. Typically, the randomisation functions compute the exclusive-OR of several address bits. Not all randomising set index functions perform equally well, which calls for the evaluation of many set index functions. This paper discusses and improves a technique that tackles this problem by predicting the miss rate incurred by a randomisation function, based on profiling information. A new way of looking at randomisation functions is used, namely the null space of the randomisation function. The members of the null space describe pairs of cache blocks that are mapped to the same set. This paper presents an analytical model of the error made by the technique and uses this to propose several optimisations to the technique. The technique is then applied to generate a conflict-free randomisation function for the SPEC benchmarks. (C) 2003 Elsevier Science B.V. All rights reserved.
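The null-space view can be made concrete with a toy example. For a linear (XOR-based) index function f, f(a) equals f(b) exactly when the XOR of a and b lies in the null space of f; the 5-bit function below uses made-up bit subsets:

```python
# Toy illustration of the null-space view: for a linear (XOR-based)
# set-index function f, f(a) == f(b) exactly when a XOR b lies in
# the null space of f.

def f(addr):
    bit = lambda i: (addr >> i) & 1
    return (bit(0) ^ bit(3)) | ((bit(1) ^ bit(4)) << 1) | (bit(2) << 2)

# Brute-force the null space: all address differences that f maps
# to set 0.
null_space = [d for d in range(32) if f(d) == 0]
print(null_space)  # -> [0, 9, 18, 27]

# Each null-space member pairs up cache blocks that land in the
# same set; by linearity this holds for every base address:
for a in range(32):
    for d in null_space:
        assert f(a) == f(a ^ d)
```

Predicting the miss rate then reduces to asking how often the profiled access stream contains address pairs whose difference is a null-space member.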

Abstract:

Changes to software requirements not only pose a risk to the successful delivery of software applications but also provide opportunity for improved usability and value. Increased understanding of the causes and consequences of change can support requirements management and also make progress towards the goal of change anticipation. This paper presents the results of two case studies that address objectives arising from that ultimate goal. The first case study evaluated the potential of a change source taxonomy containing the elements ‘market’, ‘organisation’, ‘vision’, ‘specification’, and ‘solution’ to provide a meaningful basis for change classification and measurement. The second case study investigated whether the requirements attributes of novelty, complexity, and dependency correlated with requirements volatility. While insufficiency of data in the first case study precluded an investigation of changes arising due to the change source of ‘market’, for the remainder of the change sources, results indicate a significant difference in cost, value to the customer and management considerations. Findings show that higher cost and value changes arose more often from ‘organisation’ and ‘vision’ sources; these changes also generally involved the co-operation of more stakeholder groups and were considered to be less controllable than changes arising from the ‘specification’ or ‘solution’ sources. Results from the second case study indicate that only ‘requirements dependency’ is consistently correlated with volatility and that changes coming from each change source affect different groups of requirements. We conclude that the taxonomy can provide a meaningful means of change classification, but that a single requirement attribute is insufficient for change prediction. A theoretical causal account of requirements change is drawn from the implications of the combined results of the two case studies.

Abstract:

As a promising method for pattern recognition and function estimation, least squares support vector machines (LS-SVM) express the training in terms of solving a linear system instead of a quadratic programming problem as for conventional support vector machines (SVM). In this paper, by using the information provided by the equality constraint, we transform the minimization problem with a single equality constraint in LS-SVM into an unconstrained minimization problem, and then propose reduced formulations for LS-SVM. With this transformation, the number of invocations of the conjugate gradient (CG) method, a greatly time-consuming step in obtaining the numerical solution, is reduced from two (as proposed by Suykens et al. (1999)) to one. A comparison of the computational speed of our method with the CG method proposed by Suykens et al. and with the first-order and second-order SMO methods on several benchmark data sets shows a reduction of training time by up to 44%. (C) 2011 Elsevier B.V. All rights reserved.
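The linear system mentioned above can be written out for a toy two-point problem; this is a minimal sketch of standard LS-SVM training with a linear kernel and invented data, not the authors' reduced formulation:

```python
# Toy LS-SVM training as one linear system (pure Python, invented
# data): solve
#     [ 0      1^T          ] [ b ]   [ 0 ]
#     [ 1   K + I / gamma   ] [ a ] = [ y ]
# for the bias b and support values a, here with a linear kernel.

def solve(A, rhs):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                fac = M[r][c] / M[c][c]
                M[r] = [x - fac * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

X = [[-1.0], [1.0]]                 # two 1-D training points
y = [-1.0, 1.0]                     # their labels
gamma = 10.0                        # regularisation parameter
K = [[xi[0] * xj[0] for xj in X] for xi in X]   # linear kernel

n = len(X)
A = [[0.0] + [1.0] * n] + [
    [1.0] + [K[i][j] + (1.0 / gamma if i == j else 0.0) for j in range(n)]
    for i in range(n)
]
sol = solve(A, [0.0] + y)
b, alpha = sol[0], sol[1:]

# Decision function f(x) = sum_i alpha_i * K(x, x_i) + b
f = lambda x: sum(a * x * xi[0] for a, xi in zip(alpha, X)) + b
print(round(f(-1.0), 3), round(f(1.0), 3))  # -> -0.952 0.952
```

Because training is a single linear solve, the expensive step the paper targets is exactly how that system is solved; in practice CG replaces the direct elimination used here.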

Abstract:

Nitride-strengthened, reduced-activation martensitic steel is anticipated to have higher creep strength because of the remarkable thermal stability of nitrides. Two nitride-strengthened, reduced-activation martensitic steels with different carbon contents were prepared to investigate how microstructure and mechanical properties change with decreasing carbon content. Both steels had a microstructure of full martensite with fine nitrides dispersed homogeneously in the matrix, and both displayed extremely high strength but poor toughness. Compared with the low-carbon steel (0.005 wt pct), the high-carbon steel (0.012 wt pct) had not only higher strength but also higher impact toughness and a higher grain-coarsening temperature, which was related to the carbon content. On the one hand, carbon reduction led to Ta-rich inclusions; on the other hand, grains grew larger when normalized at high temperature because of the absence of Ta carbonitrides, which decreased impact toughness. The complicated Al2O3 inclusions in the two steels were shown to be responsible for initiating cleavage fracture by acting as the critical cracks.

Abstract:

We propose a data-flow-based run-time system as an efficient tool for supporting the execution of parallel code on heterogeneous architectures hosting both multicore CPUs and GPUs. We discuss how the proposed run-time system may serve as the target both of structured parallel applications developed using algorithmic skeletons/parallel design patterns and of more "domain-specific" programming models. Experimental results demonstrating the feasibility of the approach are presented. © 2012 World Scientific Publishing Company.
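The firing rule at the heart of such a data-flow run time (a task executes as soon as all of its input tokens are available, regardless of program order) can be sketched in a few lines; the graph and node names below are illustrative only:

```python
# A toy macro-data-flow interpreter: a node fires as soon as every
# one of its inputs has produced a token. A real run time would
# dispatch ready nodes to CPU cores or GPUs in parallel; this sketch
# runs them sequentially.

def run_dataflow(graph, inputs):
    """graph: name -> (function, [input names]). Repeatedly fires any
    node whose inputs are all present until every node has a value."""
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        ready = [n for n, (_, deps) in pending.items()
                 if all(d in values for d in deps)]
        if not ready:
            raise RuntimeError("cycle or missing input")
        for n in ready:
            fn, deps = pending.pop(n)
            values[n] = fn(*(values[d] for d in deps))
    return values

# (a + b) * (a - b) as a data-flow graph; 'mul' can only fire after
# both 'add' and 'sub' have produced their tokens.
g = {
    "add": (lambda x, y: x + y, ["a", "b"]),
    "sub": (lambda x, y: x - y, ["a", "b"]),
    "mul": (lambda x, y: x * y, ["add", "sub"]),
}
print(run_dataflow(g, {"a": 5, "b": 3})["mul"])  # -> 16
```

A skeleton or pattern front end would compile its parallel structure into exactly this kind of graph, which is what makes the run time a common target for both programming models.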

Abstract:

Hopanoids are pentacyclic triterpenoids that are thought to be bacterial surrogates for eukaryotic sterols, such as cholesterol, acting to stabilize membranes and to regulate their fluidity and permeability. To date, very few studies have evaluated the role of hopanoids in bacterial physiology. The synthesis of hopanoids depends on the enzyme squalene-hopene cyclase (Shc), which converts the linear squalene into the basic hopene structure. Deletion of the two genes encoding Shc enzymes in Burkholderia cenocepacia K56-2, BCAM2831 and BCAS0167, resulted in a strain that was unable to produce hopanoids, as demonstrated by gas chromatography and mass spectrometry. Complementation of the Δshc mutant with only BCAM2831 was sufficient to restore hopanoid production to wild-type levels, while introducing a copy of BCAS0167 alone into the Δshc mutant produced only very small amounts of the hopanoid peak. The Δshc mutant grew as well as the wild type in medium buffered to pH 7 and demonstrated no defect in its ability to survive and replicate within macrophages, despite transmission electron microscopy (TEM) revealing defects in the organization of the cell envelope. The Δshc mutant displayed increased sensitivity to low pH, detergent, and various antibiotics, including polymyxin B and erythromycin. Loss of hopanoid production also resulted in severe defects in both swimming and swarming motility. This suggests that hopanoid production plays an important role in the physiology of B. cenocepacia.