986 results for Parallel methods


Relevance: 30.00%

Publisher:

Abstract:

Magnetic iron minerals are widespread and indicative sediment constituents in estuarine, coastal and shelf systems. We combine environmental magnetic, sedimentological and numerical methods to identify magnetite-enriched placer-like zones in a complex coastal system and delineate their formation mechanisms. Magnetic susceptibility and remanence measurements on 245 surficial sediment samples collected in and around Tauranga Harbour, the largest barrier-enclosed tidal estuary of New Zealand, reveal several discrete enrichment zones controlled by local hydrodynamic conditions. Active magnetite enrichment takes place in tidal channels, which feed into two coast-parallel nearshore magnetite-enriched belts centered at water depths of 6-10 m and 10-20 m. A close correlation between magnetite content and magnetic grain size was found, where higher susceptibility values are associated with coarser magnetic crystal sizes. Two key mechanisms for magnetite enrichment are identified. First, tide-induced residual currents primarily enable magnetite enrichment within the estuarine channel network. A coast-parallel, fine sand magnetite enrichment belt in water depths of less than 10 m along the barrier island shows a strong decrease in magnetite content away from the southern tidal inlet and is apparently related to active coast-parallel transport combined with mobilizing surf zone processes. A second, less pronounced, but more uniform magnetite enrichment belt at 10-20 m water depth is composed of non-mobile, medium-to-coarse-grained relict sands, which have been reworked during post-glacial sea level transgression. We demonstrate the potential of magnetic methods to reveal and differentiate coastal magnetite enrichment patterns and investigate their formative mechanisms.

Relevance: 30.00%

Publisher:

Abstract:

The authors, all from UPM and loosely grouped, have each been involved in different academic or real cases on this subject, at different times corresponding to their different ages. Building on the precedent of E. Torroja and A. Páez (Madrid, Spain), whose probabilistic safety models for concrete date from about 1957, a line of work now represented at ICOSSAR conferences, author J. M. Antón, involved since autumn 1967 with European steel construction in CECM, produced a mathematical model for the reduction and superposition of independent loads and, using it, a pattern of load coefficients for codes (Rome, Feb. 1969) that was in practice adopted for European construction, followed at JCSS (Lisbon, Feb. 1974) by a suggestion of a union for concrete-steel-al. That model represents each type of load with a distribution such as Gumbel Type I over a 50-year period, reduces it to a 1-year period so that it can be added to the other independent loads, and then refers the sum, using Gumbel theory, back to a 50-year return period; parallel models exist. A complete reliability system was produced, including nonlinear effects such as buckling, phenomena considered to some extent in the current Construction Eurocodes derived from the Model Codes. The author also examined the system within CEB in the presence of hydraulic effects from rivers, floods and the sea, with reference to actual practice. When drafting a road drainage norm for MOPU (Spain), the authors developed an optimization model that gives a way to determine the return period, between 10 and 50 years, for the hydraulic flows to be considered in road drainage. Satisfactory examples were a stream in south-east Spain modelled with a Gumbel Type I distribution and a paper by Ven Te Chow on the Mississippi at Keokuk using a Gumbel Type II distribution; the model can be modernized with a wider variety of extreme-value laws. In the MOPU drainage norm, the drafting commission also acted as an expert panel to set a table of return periods for the elements of road drainage, in effect a complex multi-criteria decision system. These earlier ideas were used, for example, in widely applied codes and were reported at symposia and meetings, but not published in English-language journals; a condensed account of the authors' contributions is presented here. The authors are also involved in optimization for hydraulic and agricultural planning, and give modest hints of intended applications to agricultural and environmental planning, regarding the selection of the criteria and utility functions involved in Bayesian, multi-criteria or mixed decision systems. Modest consideration is given to climate change, to production and commercial systems, and to other factors such as social and financial ones.
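
As an illustration of the load-combination idea sketched above (not the authors' original model or code), the following Python snippet shows the standard Gumbel Type I relations it relies on: the return level for a given return period, and the fact that the maximum over n independent years of a Gumbel-distributed annual maximum is again Gumbel with a shifted location. The parameter values are hypothetical.

```python
import math

def gumbel_return_level(mu, beta, T):
    """Return level with return period T (years) for annual maxima
    following a Gumbel Type I distribution with location mu, scale beta."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

def rescale_period(mu, beta, n_years):
    """If annual maxima are i.i.d. Gumbel(mu, beta), the maximum over
    n_years is again Gumbel with the same scale and a shifted location."""
    return mu + beta * math.log(n_years), beta

# Hypothetical 1-year parameters for one load type (illustrative only).
mu1, beta1 = 10.0, 2.0

# Equivalent parameters of the 50-year maximum of the same load.
mu50, beta50 = rescale_period(mu1, beta1, 50)

print("50-year return level from annual maxima:",
      gumbel_return_level(mu1, beta1, 50))
print("Location/scale of the 50-year maximum:", mu50, beta50)
```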

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a study of the effectiveness of three different algorithms for the parallelization of logic programs based on compile-time detection of independence among goals. The algorithms are embedded in a complete parallelizing compiler, which incorporates different abstract interpretation-based program analyses. The complete system shows the task of automatic program parallelization to be practical. The trade-offs involved in using each of the algorithms in this task are studied experimentally, their weaknesses are identified, and possible improvements are discussed.

Relevance: 30.00%

Publisher:

Abstract:

There has been significant interest in parallel execution models for logic programs which exploit Independent And-Parallelism (IAP). In these models, it is necessary to determine which goals are independent and therefore eligible for parallel execution and which goals have to wait for which others during execution. Although this can be done at run-time, it can imply a very heavy overhead. In this paper, we present three algorithms for automatic compile-time parallelization of logic programs using IAP. This is done by converting a clause into a graph-based computational form and then transforming this graph into linear expressions based on &-Prolog, a language for IAP. We also present an algorithm which, given a clause, determines if there is any loss of parallelism due to linearization, for the case in which only unconditional parallelism is desired. Finally, the performance of these annotation algorithms is discussed for some benchmark programs.
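
To make the linearization step concrete, the toy Python sketch below groups consecutive goals that share no variables into a parallel conjunction (written with '&', as in &-Prolog) and sequences the rest. This is a deliberately simplified stand-in for the graph-based annotation algorithms the paper actually presents, and the example clause is hypothetical.

```python
def annotate(goals):
    """Greedy, simplified linearization: consecutive goals that share no
    variables are grouped into a parallel conjunction (written with '&'),
    dependent goals are sequenced with ','.  Illustrative only."""
    groups = []
    for name, vars_ in goals:
        if groups and all(not (vars_ & v) for _, v in groups[-1]):
            groups[-1].append((name, vars_))
        else:
            groups.append([(name, vars_)])
    return ", ".join(
        " & ".join(n for n, _ in g) if len(g) > 1 else g[0][0]
        for g in groups
    )

# Hypothetical clause body: p(X,Y) :- a(X), b(Y), c(X,Y).
goals = [("a(X)", {"X"}), ("b(Y)", {"Y"}), ("c(X,Y)", {"X", "Y"})]
print(annotate(goals))   # -> "a(X) & b(Y), c(X,Y)"
```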

Relevance: 30.00%

Publisher:

Abstract:

It has been shown that it is possible to exploit Independent/Restricted And-parallelism in logic programs while retaining the conventional "don't know" semantics of such programs. In particular, it is possible to parallelize pure Prolog programs while maintaining the semantics of the language. However, when built-in side-effects (such as write or assert) appear in the program, if an identical observable behaviour to that of sequential Prolog implementations is to be preserved, such side-effects have to be properly sequenced. Previously proposed solutions to this problem are either incomplete (lacking, for example, backtracking semantics) or they force sequentialization of significant portions of the execution graph which could otherwise run in parallel. In this paper a series of side-effect synchronization methods are proposed which incur lower overhead and allow more parallelism than those previously proposed. Most importantly, and unlike previous proposals, they have well-defined backward execution behaviour and require only a small modification to a given (And-parallel) Prolog implementation.
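
The core idea of sequencing side-effects while letting pure computation proceed in parallel can be sketched with ordinary threads and events, as below. This is an illustrative analogue only, not one of the methods proposed in the paper, which use finer-grained synchronization and, crucially, also define backward (backtracking) behaviour.

```python
import threading

def run_with_sequenced_side_effects(tasks):
    """Run tasks in parallel, but make each task's side effect wait until
    the side effects of all textually earlier tasks have completed.
    Illustrative only; real And-parallel Prolog systems use lighter-weight
    synchronization and also handle backtracking."""
    done = [threading.Event() for _ in tasks]

    def wrap(i, compute, side_effect):
        result = compute()          # pure work can proceed in parallel
        if i > 0:
            done[i - 1].wait()      # wait for earlier side effects
        side_effect(result)         # e.g. write/1 or assert/1
        done[i].set()

    threads = [threading.Thread(target=wrap, args=(i, c, s))
               for i, (c, s) in enumerate(tasks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Hypothetical goals: some computation plus an ordered write.
tasks = [(lambda k=k: k * k, lambda r: print("goal result", r)) for k in range(4)]
run_with_sequenced_side_effects(tasks)   # prints 0, 1, 4, 9 in order
```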

Relevance: 30.00%

Publisher:

Abstract:

When nonlinear physical systems of infinite extent, such as tunnels and perforations, are modelled, it is necessary to simulate suitably both the solution at infinity and the nonlinearity. The finite element method (FEM) is a well-known procedure for simulating nonlinear behavior. However, the treatment of the infinite field by domain truncation is often questionable. On the other hand, the boundary element method (BEM) is suitable for simulating the infinite behavior without truncation. By combining the two methods, the advantages of each can therefore be exploited. Several possibilities for FEM-BEM coupling and their performance in some practical cases are discussed in this paper. Parallelizable coupling algorithms based on domain decomposition are developed and compared with the most traditional coupling methods.
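
One generic way to organize such a coupling is an iterative exchange of interface values between the two subdomain solvers, in which the two solves of each iteration are independent and can run in parallel. The sketch below illustrates that pattern with abstract stand-in solvers and hypothetical data; it is not one of the specific coupling algorithms developed in the paper.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def coupled_interface_iteration(fem_solve, bem_solve, u0, omega=0.5,
                                tol=1e-8, max_iter=200):
    """Jacobi-type relaxation on the FEM/BEM interface unknowns.
    fem_solve and bem_solve each map a vector of interface values to an
    updated vector (stand-ins for the subdomain solvers); both calls are
    independent within an iteration and can be executed in parallel.
    Illustrative sketch only, not the coupling schemes of the paper."""
    u = u0.copy()
    with ThreadPoolExecutor(max_workers=2) as pool:
        for _ in range(max_iter):
            f_fem = pool.submit(fem_solve, u)
            f_bem = pool.submit(bem_solve, u)
            u_new = omega * f_fem.result() + (1 - omega) * f_bem.result()
            if np.linalg.norm(u_new - u) < tol * (1 + np.linalg.norm(u)):
                return u_new
            u = u_new
    return u

# Hypothetical linear subdomain maps, used only to demonstrate convergence.
A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[0.3, 0.0], [0.2, 0.3]])
b = np.array([1.0, 2.0])
u = coupled_interface_iteration(lambda x: A @ x + b, lambda x: B @ x + b,
                                np.zeros(2))
print(u)
```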

Relevance: 30.00%

Publisher:

Abstract:

This thesis aims to introduce some fundamental concepts underlying option valuation theory, including the implementation of computational tools. In many cases an analytical solution for option pricing does not exist, so the following numerical methods are used: binomial trees, Monte Carlo simulations and finite difference methods. First, an algorithm based on Hull and Wilmott is written for every method. These algorithms are then improved in different ways. For the binomial tree, both speed and memory usage are significantly improved by using only one vector instead of a whole price-storing matrix. Computational time in the Monte Carlo simulations is reduced by implementing a parallel algorithm (in C), which is capable of improving speed by a factor equal to the number of processors used. Furthermore, the MATLAB code for Monte Carlo was made faster by vectorizing the simulation process. Finally, the obtained option values are compared to those obtained with popular finite difference methods, and it is discussed which of the algorithms is more appropriate for which purpose.
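
The single-vector binomial tree mentioned above can be sketched as follows: the terminal payoffs are stored once and the backward induction overwrites the same vector level by level, so no (steps+1) × (steps+1) price matrix is needed. This is a minimal illustrative version, not the thesis code.

```python
import math

def binomial_call(S0, K, r, sigma, T, steps):
    """European call via a CRR binomial tree using a single price vector
    that is updated in place during the backward induction (the
    memory-saving idea mentioned above)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)          # risk-neutral probability
    disc = math.exp(-r * dt)

    # Terminal payoffs: one vector of length steps + 1.
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0)
              for j in range(steps + 1)]

    # Backward induction overwrites the same vector level by level.
    for n in range(steps, 0, -1):
        for j in range(n):
            values[j] = disc * (p * values[j + 1] + (1 - p) * values[j])
    return values[0]

# Hypothetical contract parameters; the result approaches the
# Black-Scholes value (about 10.45) as the number of steps grows.
print(binomial_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=500))
```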

Relevance: 30.00%

Publisher:

Abstract:

The pathognomonic plaques of Alzheimer’s disease are composed primarily of the 39- to 43-aa β-amyloid (Aβ) peptide. Crosslinking of Aβ peptides by tissue transglutaminase (tTg) indicates that Gln15 of one peptide is proximate to Lys16 of another in aggregated Aβ. Here we report how the fibril structure is resolved by mapping interstrand distances in this core region of the Aβ peptide chain with solid-state NMR. Isotopic substitution provides the source points for measuring distances in aggregated Aβ. Peptides containing a single carbonyl 13C label at Gln15, Lys16, Leu17, or Val18 were synthesized and evaluated by NMR dipolar recoupling methods for the measurement of interpeptide distances to a resolution of 0.2 Å. Analysis of these data establishes that this central core of Aβ consists of a parallel β-sheet structure in which identical residues on adjacent chains are aligned directly, i.e., in register. Our data, in conjunction with existing structural data, establish that the Aβ fibril is a hydrogen-bonded, parallel β-sheet defining the long axis of the Aβ fibril propagation.

Relevance: 30.00%

Publisher:

Abstract:

In this paper we describe a hybrid algorithm, for an even number of processors, based on an algorithm for two processors and the Overlapping Partition Method for tridiagonal systems. Moreover, we compare this hybrid method with Wang's partition method on a BSP computer. Finally, we compare the theoretical computation cost of both methods for a Cray T3D computer, using the cost model that the BSP model provides.
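
For context, partition methods of this kind apply a sequential tridiagonal solve, typically the Thomas algorithm shown below, inside each processor's sub-block and then couple the blocks through a small reduced system. The sketch is only the local kernel, not the hybrid algorithm of the paper.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (all lists of length n;
    a[0] and c[-1] are unused).  Standard sequential kernel; partition
    methods run one such solve per processor on its own sub-block."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Small check: this -1/2/-1 system has exact solution [1, 2, 3].
print(thomas(a=[0, -1, -1], b=[2, 2, 2], c=[-1, -1, 0], d=[0, 0, 4]))
```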

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND The application of therapeutic hypothermia (TH) for 12 to 24 hours following out-of-hospital cardiac arrest (OHCA) has been associated with decreased mortality and improved neurological function. However, the optimal duration of cooling is not known. We aimed to investigate whether targeted temperature management (TTM) at 33 ± 1 °C for 48 hours compared to 24 hours results in a better long-term neurological outcome. METHODS The TTH48 trial is an investigator-initiated pragmatic international trial in which patients resuscitated from OHCA are randomised to TTM at 33 ± 1 °C for either 24 or 48 hours. Inclusion criteria are: age older than 17 and below 80 years; presumed cardiac origin of arrest; and Glasgow Coma Score (GCS) < 8 on admission. The primary outcome is neurological outcome at 6 months using the Cerebral Performance Category score (CPC), assessed by an assessor blinded to treatment allocation and dichotomised to good (CPC 1-2) or poor (CPC 3-5) outcome. Secondary outcomes are: 6-month mortality; incidence of infection, bleeding and organ failure; and CPC at hospital discharge, at day 28 and at day 90 following OHCA. Assuming that 50 % of the patients treated for 24 hours will have a poor outcome at 6 months, a study including 350 patients (175/arm) will have 80 % power (with a significance level of 5 %) to detect an absolute 15 % difference in primary outcome between treatment groups. A safety interim analysis was performed after the inclusion of 175 patients. DISCUSSION This is the first randomised trial to investigate the effect of the duration of TTM at 33 ± 1 °C in adult OHCA patients. We anticipate that the results of this trial will add significant knowledge regarding the management of cooling procedures in OHCA patients. TRIAL REGISTRATION NCT01689077.
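
The sample-size statement can be checked with the usual normal-approximation formula for comparing two proportions. The sketch below (illustrative only, not part of the trial protocol) gives roughly 170 patients per arm for 50 % versus 35 % poor outcome, consistent with the planned 175 per arm.

```python
from math import sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sided
    comparison of two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# 50 % poor outcome in the 24-hour arm versus a 15 % absolute reduction.
print(round(n_per_arm(0.50, 0.35)))   # prints 169, i.e. roughly 170 per arm
```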

Relevance: 30.00%

Publisher:

Abstract:

Objective. To investigate the efficacy and tolerability of a course of 5 injections of hyaluronan (HA) given at intervals of one week in patients with symptomatic, mild to moderate osteoarthritis (OA) of the knee. Methods. A double-blind, randomized, parallel-group, multicenter (17 centers), saline vehicle-controlled study was conducted over 18 weeks. Patients received either 25 mg (2.5 ml) HA in a phosphate buffered solution or 2.5 ml vehicle containing only the buffer by intraarticular injection. Five injections were given at one-week intervals and the patients were followed for a further 13 weeks. The Western Ontario and McMaster Universities (WOMAC) OA instrument was used as the primary efficacy variable and repeated measures analysis of covariance was used to compare the 2 treatments over Weeks 6, 10, 14, and 18. Results. Of 240 patients randomized for inclusion in the study, 223 were evaluable for the modified intention-to-treat analysis. The active treatment and control groups were comparable for demographic details, OA history, and previous treatments. Scores for the pain and stiffness subscales of the WOMAC were modestly but significantly lower in the HA-treated group overall (Weeks 6 to 18; p < 0.05), and the statistically significant difference from the control was not apparent until after the series of injections was complete. The physical function subscale did not reach statistical significance (p = 0.064). Tolerability of the procedure was good and there were no serious adverse events that were considered to have a possible causal relationship with the study treatment. Conclusion. Intraarticular HA treatment was significantly more effective than saline vehicle in mild to moderate OA of the knee for the 13-week postinjection period of the study.

Relevance: 30.00%

Publisher:

Abstract:

In 'Striving Towards a Common Language' I outline an innovative methodology which consists of three strands encompassing an Indigenous-centred approach based on Indigenous Self-determination (participatory action research), relationship as central to socio-cultural dynamics, and feminist phenomenology. This methodology - which I call Living On the Ground - was created in direct concert with 13 Indigenous women elders who were my hosts, teachers and walytja (family) as we worked together to create a dynamic cultural revitalisation project for their community, one of Australia's most remote Aboriginal settlements. I explain the processes I went through as a White Irish-Australian woman living with the women elders and their 11 dogs in a one-room tin shed for two years, and tell of how the nexus of land, Ancestors, and the Tjukurrpa (Dreaming) combined with White cultural practices came to inspire a methodology which took the best from Indigenous and (White) feminist ways of knowing and of being. (c) 2005 Z. de Ishtar. Published by Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful, when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly where maximum likelihood-based inference is required. Although the storage requirements for the observations themselves scale only linearly with the size of the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets, and go on to show how the methods allow maximum likelihood-based inference on the exhaustive Walker Lake data set.
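
A simple way to see why likelihood approximations of this kind parallelize well is an independent-blocks composite likelihood, in which each block's exact Gaussian log-likelihood is computed in a separate process and the contributions are summed. The sketch below illustrates that pattern with a hypothetical exponential covariance and a crude one-dimensional block split; it is not the Vecchia [1988] or Tresp [2000] construction used in the paper.

```python
import numpy as np
from multiprocessing import Pool

def exp_cov(X1, X2, sill=1.0, rng=0.3, nugget=1e-6):
    """Hypothetical isotropic exponential covariance function."""
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    C = sill * np.exp(-d / rng)
    if X1 is X2:
        C += nugget * np.eye(len(X1))
    return C

def block_loglik(args):
    """Exact Gaussian log-likelihood of one block, treated as independent
    of the other blocks (the simplifying approximation)."""
    Xb, yb, params = args
    C = exp_cov(Xb, Xb, *params)
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, yb)
    return (-0.5 * (alpha @ alpha) - np.log(np.diag(L)).sum()
            - 0.5 * len(yb) * np.log(2 * np.pi))

def approx_loglik(X, y, params, n_blocks, pool):
    # Crude spatial split: sort by the first coordinate, cut into blocks.
    blocks = np.array_split(np.argsort(X[:, 0]), n_blocks)
    jobs = [(X[idx], y[idx], params) for idx in blocks]
    return sum(pool.map(block_loglik, jobs))

if __name__ == "__main__":
    rng_state = np.random.default_rng(0)
    X = rng_state.random((2000, 2))          # hypothetical locations
    y = rng_state.standard_normal(2000)      # hypothetical observations
    with Pool() as pool:
        print(approx_loglik(X, y, (1.0, 0.3, 1e-6), n_blocks=8, pool=pool))
```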

Relevance: 30.00%

Publisher:

Abstract:

Image segmentation is one of the most computationally intensive operations in image processing and computer vision. This is because a large volume of data is involved and many different features have to be extracted from the image data. This thesis is concerned with the investigation of practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of the hardware architectures and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm, for segmenting non-textured regions; and the Granlund method, for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the use of the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions. For the Row-Column method the array architecture has been adopted, and for the Vector-Radix method, the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed. Many of the obtained speed-up and efficiency measures show values close to their respective theoretical maxima. Where appropriate, comparisons are drawn between different implementations. The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on issues related to the engineering of concurrent image processing applications.
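
The Row-Column method mentioned above rests on the separability of the 2-D DFT: transforming every row and then every column of the intermediate result gives the full 2-D transform, which is why a strip-per-processor array architecture suits it. A minimal NumPy sketch of the decomposition (not the Occam/Transputer implementation) follows.

```python
import numpy as np

def row_column_fft2(image):
    """2-D DFT via the Row-Column method: 1-D FFTs along every row,
    then 1-D FFTs along every column of the intermediate result.
    On an array architecture, each processor would transform its own
    strip of rows, exchange data, then transform its strip of columns."""
    intermediate = np.fft.fft(image, axis=1)   # all rows
    return np.fft.fft(intermediate, axis=0)    # all columns

img = np.random.default_rng(1).random((64, 64))
assert np.allclose(row_column_fft2(img), np.fft.fft2(img))
print("Row-Column result matches np.fft.fft2")
```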

Relevance: 30.00%

Publisher:

Abstract:

We present a parallel genetic algorithm for finding matrix multiplication algorithms. For 3 × 3 matrices our genetic algorithm successfully discovered algorithms requiring 23 multiplications, which are equivalent to the currently best known human-developed algorithms. We also studied the cases with fewer multiplications and evaluated the suitability of the methods discovered. Although our evolutionary method did not reach the theoretical lower bound, it led to an approximate solution for 22 multiplications.
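
As a hedged illustration of how candidates in such a search can be scored (not the authors' actual encoding or fitness function), the sketch below represents a candidate as the three coefficient tensors of a bilinear algorithm with r products and measures its deviation from the true 3 × 3 product on random test matrices; a deviation of zero means the candidate is an exact multiplication algorithm.

```python
import numpy as np

def bilinear_error(A_coeff, B_coeff, C_coeff, trials=20, n=3, seed=0):
    """Fitness of a candidate bilinear algorithm with r products:
    products p_k = (A_coeff[k] . A) * (B_coeff[k] . B) and the result is
    reconstructed as C[i, j] = sum_k C_coeff[k, i, j] * p_k.  Returns the
    mean squared deviation from the true product over random matrices."""
    rng = np.random.default_rng(seed)
    err = 0.0
    for _ in range(trials):
        A = rng.standard_normal((n, n))
        B = rng.standard_normal((n, n))
        p = np.array([(A_coeff[k] * A).sum() * (B_coeff[k] * B).sum()
                      for k in range(len(A_coeff))])
        C = np.einsum("kij,k->ij", C_coeff, p)
        err += ((C - A @ B) ** 2).mean()
    return err / trials

# Sanity check with the trivial 27-multiplication scheme for n = 3.
r, n = 27, 3
A_coeff = np.zeros((r, n, n))
B_coeff = np.zeros((r, n, n))
C_coeff = np.zeros((r, n, n))
k = 0
for i in range(n):
    for j in range(n):
        for l in range(n):
            A_coeff[k, i, l] = 1.0   # picks A[i, l]
            B_coeff[k, l, j] = 1.0   # picks B[l, j]
            C_coeff[k, i, j] = 1.0   # contributes to C[i, j]
            k += 1
print(bilinear_error(A_coeff, B_coeff, C_coeff))  # ~0 (exact up to rounding)
```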