12 results for sequential design


Relevance: 70.00%

Abstract:

Despite the popularity of the Theory of Planned Behaviour (TPB), little research has assessed the efficacy of the model in understanding the health behaviour of children, and the studies that have been conducted report problems with questionnaire formulation and low to moderate internal consistencies for TPB constructs. The aim of this study was to develop and test a TPB-based measure suitable for use with primary school children aged 9 to 10 years. A mixed-method sequential design was employed. In Stage 1, seven semi-structured focus group discussions (N=56) were conducted to elicit the underlying beliefs specific to tooth brushing. Using thematic content analysis, the beliefs were identified and a TPB measure was developed. In Stage 2, a repeated measures design with test-retest reliability analysis was employed to assess the measure's psychometric properties. In all, 184 children completed the questionnaire. Test-retest reliabilities support the validity and reliability of the TPB measure for assessing the tooth brushing beliefs of children. Pearson's product moment correlations were calculated for all of the TPB beliefs, achieving substantial to almost perfect agreement levels. Specifically, a significant relationship was found between all 10 of the direct and indirect TPB constructs at the 0.01 level. This paper discusses the design and development of the measure so that it can serve as a guide for fellow researchers and health psychologists interested in using theoretical models to investigate the health and well-being of children.
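To illustrate the kind of test-retest analysis the abstract describes, here is a minimal Python sketch computing Pearson's product moment correlation between Time 1 and Time 2 scores for a TPB construct; the construct names, scale, and data are hypothetical, not the study's.

```python
# Minimal sketch of a test-retest reliability check, assuming 5-point
# scale construct scores; construct names and data are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_children = 184  # sample size reported in the abstract

for construct in ("attitude_direct", "subjective_norm_direct"):
    time1 = rng.normal(3.5, 0.8, n_children)          # hypothetical Time 1 scores
    time2 = time1 + rng.normal(0.0, 0.3, n_children)  # retest with added noise
    r, p = pearsonr(time1, time2)
    print(f"{construct}: r = {r:.2f}, p = {p:.3g}")
```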

Relevance: 40.00%

Abstract:

Ecological coherence is a multifaceted conservation objective that includes some potentially conflicting concepts. These include the extent to which the network maximises diversity (including genetic diversity) and the extent to which protected areas interact with non-reserve locations. To examine the consequences of different selection criteria, the preferred location to complement protected sites was examined using samples taken from four locations around each of two marine protected areas: Strangford Lough and Lough Hyne, Ireland. Three different measures of genetic distance were used (FST, Dest and a measure of allelic dissimilarity), along with a direct assessment of the total number of alleles in different candidate networks. Standardized site scores were used for comparisons across methods and selection criteria. The average score for Castlehaven, a site relatively close to Lough Hyne, was highest, implying that this site would capture the most genetic diversity while ensuring the highest degree of interaction between protected and unprotected sites. Patterns around Strangford Lough were more ambiguous, potentially reflecting the weaker genetic structure around this protected area in comparison to Lough Hyne. Similar patterns were found across species with different dispersal capacities, indicating that methods based on genetic distance could be used to help maximise ecological coherence in reserve networks.

Highlights:
- Ecological coherence is a key component of marine protected area network design.
- Coherence contains a number of competing concepts.
- Genetic information from field populations can help guide assessments of coherence.
- Average choice across different concepts of coherence was consistent among species.
- Measures can be combined to compare the coherence of different network designs.
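As a rough illustration of how standardized site scores let different measures be compared on a common scale, here is a hedged Python sketch: each measure is converted to per-site z-scores and the z-scores are averaged. The site names (other than Castlehaven), the values, and the assumption that higher is better for every column are invented for the example, not data from the study.

```python
# Standardise each measure across candidate sites, then average the
# z-scores so measures on different scales can be combined. All values
# below are hypothetical.
import numpy as np

sites = ["Castlehaven", "SiteB", "SiteC", "SiteD"]
# Columns: FST-based score, Dest-based score, allelic dissimilarity,
# total allele count (higher assumed better for all columns).
scores = np.array([
    [0.9, 0.8, 0.85, 42],
    [0.4, 0.5, 0.40, 35],
    [0.6, 0.4, 0.55, 38],
    [0.3, 0.3, 0.30, 30],
], dtype=float)

z = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # per-measure z-scores
combined = z.mean(axis=1)                                # average across measures
for site, c in sorted(zip(sites, combined), key=lambda x: -x[1]):
    print(f"{site}: combined standardised score = {c:+.2f}")
```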

Relevance: 30.00%

Abstract:

Following a thorough site investigation, a biological Sequential Reactive Barrier (SEREBAR), designed to remove Polycyclic Aromatic Hydrocarbons (PAHs) and BTEX compounds, was installed at a Former Manufactured Gas Plant (FMGP) site. The novel design of the barrier comprises, in series, an interceptor and six reactive chambers. The first four chambers (two non-aerated, two aerated) were filled with sand to encourage microbial colonization. Sorbent Granular Activated Carbon (GAC) was present in the final two chambers to remove any recalcitrant compounds. The SEREBAR has been in continuous operation for 2 years at different operational flow rates (ranging from 320 L/d to 4000 L/d, with corresponding residence times in each chamber of 19 days and 1.5 days, respectively). Under low flow rate conditions (320-520 L/d), the majority of contaminant removal (>93%) occurred biotically within the interceptor and the aerated chambers. Under high flow rates (1000-4000 L/d), and following the installation of a new interceptor to prevent passive aeration, the majority of contaminant removal (>80%) again occurred biotically within the aerated chambers. The sorption zone (GAC) proved to be an effective polishing step, removing any remaining contaminants to acceptable concentrations before discharge down-gradient of the SEREBAR (overall removal >95%).
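The reported flow rates and residence times are consistent with a fixed chamber volume of roughly 6000 L, since residence time for a chamber is volume divided by flow rate. The short sketch below checks this arithmetic; the volume is an inference from the abstract's figures, not a number stated in it.

```python
# Hydraulic residence time per chamber = chamber volume / flow rate.
# A chamber volume of ~6000 L is inferred from the reported figures.
def residence_time_days(volume_litres: float, flow_l_per_day: float) -> float:
    """Residence time in days for a single chamber."""
    return volume_litres / flow_l_per_day

chamber_volume = 6000.0  # litres (inferred, not stated in the abstract)
for flow in (320.0, 4000.0):
    print(f"{flow:.0f} L/d -> {residence_time_days(chamber_volume, flow):.1f} days")
# 320 L/d -> 18.8 days (reported: 19); 4000 L/d -> 1.5 days (reported: 1.5)
```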

Relevance: 30.00%

Abstract:

Traditional static analysis fails to auto-parallelize programs with complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer discover parallelism. The programmer hand-picks code transformations from among the proposed candidates, which are then applied by automatic code transformation techniques.

This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level rather than at the element or byte level in order to limit the profiling overhead. We perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the belief that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in their outer loops. This observation validates our approach to whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data structure members, it also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for state-of-the-art compilers. In cases where traditional compilation techniques do find parallelism, our approach discovers higher degrees of parallelism, yielding a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
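As a rough illustration of tracking dependencies at whole-data-structure granularity, here is a hedged Python sketch: each read and write is recorded against one identifier per structure, and a flow dependency between pipeline stages is derived whenever a stage reads a structure last written by another. The stage and structure names are hypothetical, and the actual tool profiles compiled programs rather than Python.

```python
# Dependency tracking at whole-data-structure granularity: one record
# per structure, not per element or byte, keeping profiling overhead low.
from collections import defaultdict

class DependencyProfiler:
    def __init__(self):
        self.last_writer = {}         # structure id -> stage that last wrote it
        self.deps = defaultdict(set)  # stage -> stages it depends on

    def record_write(self, struct_id, stage):
        self.last_writer[struct_id] = stage

    def record_read(self, struct_id, stage):
        writer = self.last_writer.get(struct_id)
        if writer is not None and writer != stage:
            self.deps[stage].add(writer)  # structure-level flow dependency

# Toy trace of an outer loop with two stages sharing one buffer
prof = DependencyProfiler()
for iteration in range(3):
    prof.record_write("frame_buffer", "decode")
    prof.record_read("frame_buffer", "render")
print(dict(prof.deps))  # {'render': {'decode'}} -> a decode/render pipeline
```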

Relevance: 30.00%

Abstract:

Support vector machines (SVMs), though accurate, are not preferred in applications requiring high classification speed or when deployed in systems with limited computational resources, due to the large number of support vectors involved in the model. To overcome this problem, we have devised a primal SVM method with the following properties: (1) it solves for the SVM representation without the need to invoke the representer theorem, (2) forward and backward selections are combined to approach the final globally optimal solution, and (3) a criterion is introduced for the identification of support vectors, leading to a much reduced support vector set. In addition to introducing this method, the paper analyzes the complexity of the algorithm and presents test results on three public benchmark problems and a human activity recognition application. These applications demonstrate the effectiveness and efficiency of the proposed algorithm.
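The following Python sketch illustrates the general forward-selection idea behind reduced support vector sets: greedily grow a small basis and fit the model in the primal, so prediction cost scales with the basis size rather than the full training set. It substitutes a regularised least-squares loss for the SVM loss and is not the paper's algorithm; all parameters and data are illustrative.

```python
# Greedy forward selection of a small kernel basis, fit in the primal by
# regularised least squares (a stand-in for the SVM loss, for illustration).
import numpy as np

def rbf(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def forward_select(X, y, n_basis=10, lam=1e-3):
    basis, beta = [], None
    K_full = rbf(X, X)
    for _ in range(n_basis):
        best_j, best_err, best_b = None, np.inf, None
        for j in range(len(X)):
            if j in basis:
                continue
            cols = basis + [j]              # candidate basis set
            K = K_full[:, cols]
            b = np.linalg.solve(K.T @ K + lam * np.eye(len(cols)), K.T @ y)
            err = np.mean((K @ b - y) ** 2)
            if err < best_err:
                best_j, best_err, best_b = j, err, b
        basis.append(best_j)
        beta = best_b
    return basis, beta

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = np.sign(X[:, 0] * X[:, 1])              # toy XOR-like labels
basis, beta = forward_select(X, y)
pred = np.sign(rbf(X, X[basis]) @ beta)
print(f"{len(basis)} basis vectors, training accuracy {np.mean(pred == y):.2f}")
```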



Relevance: 30.00%

Abstract:

The C-element logic gate is a key component for constructing asynchronous control in silicon integrated circuits. This work introduces a new speed-independent C-element design, synthesised with the asynchronous Petrify design tool to ensure that it is composed of sequential digital latches rather than complex gates. The benefits are guaranteed correct speed-independent operation together with easy integration into modern design flows and processes. The design is compared to an equivalent speed-independent complex-gate C-element design generated by Petrify in a 130 nm semiconductor process.
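For reference, the textbook behaviour of a two-input Muller C-element can be modelled in a few lines of Python (a behavioural sketch, not the paper's latch-based netlist): the output follows the inputs when they agree and holds its previous value otherwise.

```python
# Behavioural model of a two-input Muller C-element.
def c_element(a: int, b: int, prev: int) -> int:
    if a == b:
        return a      # both inputs agree: output follows them
    return prev       # inputs disagree: hold previous state

# Exercise the hysteresis from an initial low output
out = 0
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    out = c_element(a, b, out)
    print(f"a={a} b={b} -> out={out}")
```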

Relevance: 30.00%

Abstract:

Waste management and sustainability are two core philosophies that the construction sector must acknowledge and implement; however, doing so can prove difficult and time consuming. The aim of this paper is therefore to examine waste management strategies and the possible benefits, advantages and disadvantages of their introduction and use, and to examine their inter-relationship with sustainability, particularly at the design stage. The paper gathers, examines and reviews published works and investigates factors which influence economic decisions at the design phase of a construction project. In addressing this aim, a three-tiered sequential research approach is adopted: in-depth literature review, interviews/focus groups and qualitative analysis. The resulting data are analyzed and discussed and potential conclusions identified, paying particular attention to implications for practice within architectural firms. This research is of importance, particularly to the architectural sector, as it can add to the industry's understanding of the design process while also considering the application and integration of waste management into the design procedure. Results indicate that the strategies examined have many advantages but also inherent disadvantages. The potential advantages were found to outweigh the disadvantages, but uptake within industry remains slow, and better promotion of the strategies and of their benefits to sustainability, the environment, society and the industry is required.

Relevance: 30.00%

Abstract:

Simple meso-scale capacitor structures have been made by incorporating thin (300 nm) single crystal lamellae of KTiOPO4 (KTP) between two coplanar Pt electrodes. The influence that either patterned protrusions in the electrodes or focused ion beam milled holes in the KTP have on the nucleation of reverse domains during switching was mapped using piezoresponse force microscopy imaging. The objective was to assess whether variations in the magnitude of field enhancement at localised "hot-spots" caused by such patterning could be used to control both the exact locations and the bias voltages at which nucleation events occurred. It was found that both the patterning of electrodes and the milling of various hole geometries into the KTP allowed the controlled sequential injection of domain wall pairs at different bias voltages; this capability could have implications for the design and operation of domain wall electronic devices, such as memristors, in the future.

Relevance: 30.00%

Abstract:

In the highly competitive world of modern finance, new derivatives are continually required to take advantage of changes in financial markets and to hedge businesses against new risks. The research described in this paper aims to accelerate the development and pricing of new derivatives in two different ways. Firstly, new derivatives can be specified mathematically within a general framework, enabling new mathematical formulae to be specified rather than just new parameter settings. This Generic Pricing Engine (GPE) is expressively powerful enough to specify a wide range of standard pricing engines. Secondly, the associated price simulation using the Monte Carlo method is accelerated using GPU or multicore hardware. The parallel implementation (in OpenCL) is automatically derived from the mathematical description of the derivative. As a test, for a Basket Option Pricing Engine (BOPE) generated using the GPE, on the largest problem size an NVidia GPU runs the generated pricing engine at 45 times the speed of a sequential, specific hand-coded implementation of the same BOPE. Thus a user can more rapidly devise, simulate and experiment with new derivatives without actual programming.
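For context, here is a minimal sequential Monte Carlo sketch in Python of the kind of basket call pricing a BOPE performs; all parameters are illustrative, and the paper's engine instead generates OpenCL from a mathematical specification to run such simulations in parallel on a GPU.

```python
# Sequential Monte Carlo pricing of a European basket call under
# correlated geometric Brownian motion; all parameters are illustrative.
import numpy as np

def basket_call_mc(s0, weights, vol, corr, r, T, K, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)                   # correlate the normal drivers
    z = rng.standard_normal((n_paths, len(s0))) @ L.T
    # Terminal asset prices under the risk-neutral measure
    sT = s0 * np.exp((r - 0.5 * vol**2) * T + vol * np.sqrt(T) * z)
    payoff = np.maximum(sT @ weights - K, 0.0)     # basket call payoff
    return np.exp(-r * T) * payoff.mean()          # discounted expectation

s0 = np.array([100.0, 95.0, 105.0])
weights = np.array([0.4, 0.3, 0.3])
vol = np.array([0.20, 0.25, 0.18])
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
price = basket_call_mc(s0, weights, vol, corr, r=0.02, T=1.0, K=100.0)
print(f"basket call price ~ {price:.2f}")
```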