18 results for méthode level-set
Abstract:
Aim: To study the relation between visual impairment and ability to care for oneself or a dependant in older people with age related macular degeneration (AMD). Method: Cross sectional study of older people with visual impairment due to AMD in a specialised retinal service clinic. 199 subjects underwent visual function assessment (fully corrected distance and near acuity and contrast sensitivity in both eyes), followed by completion of a package of questionnaires dealing with general health status (SF36), visual functioning (Daily Living Tasks Dependent on Vision, DLTV), and ability to care for self or provide care to others. The outcome measure was self reported ability to care for self and others. Three levels of self reported ability to care were identified: inability to care for self (level 1), ability to care for self but not others (level 2), and ability to care for self and others (level 3). Results: People who reported good general health status and visual functioning (that is, had high scores on the SF36 and DLTV) were more likely to state that they were able to care for self and others. Similarly, people with good vision in the better seeing eye were more likely to report ability to care for self and others. People with a distance visual acuity (DVA) worse than 0.4 logMAR (Snellen 6/15) had less than 50% probability of assigning themselves to care level 3, and those with DVA worse than 1.0 logMAR (Snellen 6/60) had a probability of greater than 50% of assigning themselves to care level 1. Regression analyses with level of care as the dependent variable and demographic factors, DLTV subscales, and SF36 dimensions as the explanatory variables confirmed that DLTV subscale 1 was the most important variable in the transition from care level 3 to care level 2, and that DLTV subscale 2 was the most important in the transition from care level 3 to care level 1. Conclusions: Ability to care for self and dependants has a strong relation with self reported visual functioning and quality of life and is adversely influenced by visual impairment. The acuity at which the balance of probability shifts in the direction of diminished ability to care for self or others is lower than the level set by social care agencies for provision of support. These findings have implications for those involved with visual rehabilitation and for studies of the cost effectiveness of interventions in AMD.
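The acuity thresholds quoted in this abstract can be checked against the standard logMAR/Snellen relation: logMAR is the base-10 logarithm of the minimum angle of resolution, so the Snellen denominator is the test distance scaled by 10 to the logMAR value. A minimal sketch, assuming the conventional 6 m test distance used in the abstract's Snellen fractions:

    def snellen_denominator(logmar, distance_m=6):
        """Convert a logMAR acuity to the denominator of a 6/x Snellen fraction."""
        return distance_m * 10 ** logmar

    print(snellen_denominator(0.4))  # ~15.1 -> Snellen 6/15
    print(snellen_denominator(1.0))  # 60.0  -> Snellen 6/60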
Abstract:
Six veal calves were medicated with clenbuterol at 20 µg/kg bodyweight/day for 42 days before they were slaughtered, to evaluate the lesions and residues in target organs. Compared with six unmedicated calves, the most noticeable changes were tracheal dilatation, decreased uterine weight, slight mucous hypersecretion in the uterus and vagina, and depletion of liver glycogen. The highest concentrations of clenbuterol (62 to 128 ng/g) were recorded in the choroid/retina, and the aqueous humour had the lowest concentration (0.5 to 2.4 ng/ml). The residue concentrations were higher than the maximum residue level set for clenbuterol (0.5 ng/g).
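For scale, the reported tissue concentrations can be set directly against the 0.5 ng/g maximum residue level; a trivial check using the figures quoted above:

    mrl = 0.5                      # maximum residue level for clenbuterol, ng/g
    choroid_retina = (62, 128)     # ng/g, highest recorded concentration range
    print([round(c / mrl) for c in choroid_retina])  # [124, 256] times the limit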
Abstract:
The explosion of sub-Chandrasekhar mass white dwarfs via the double detonation scenario is a potential explanation for type Ia supernovae. In this scenario, a surface detonation in a helium layer initiates a detonation in the underlying carbon/oxygen core leading to an explosion. For a given core mass, a lower bound has been determined on the mass of the helium shell required for dynamical burning during a helium flash, which is a necessary prerequisite for detonation. For a range of core and corresponding minimum helium shell masses, we investigate whether an assumed surface helium detonation is capable of triggering a subsequent detonation in the core even for this limiting case. We carried out hydrodynamic simulations on a co-expanding Eulerian grid in two dimensions assuming rotational symmetry. The detonations are propagated using the level-set approach and a simplified scheme for nuclear reactions that has been calibrated with a large nuclear network. The same network is used to determine detailed nucleosynthetic abundances in a post-processing step. Based on approximate detonation initiation criteria in the literature, we find that secondary core detonations are triggered for all of the simulated models, ranging in core mass from 0.810 up to 1.385 M☉ with corresponding shell masses from 0.126 down to 0.0035 M☉. This implies that, as soon as a detonation triggers in a helium shell covering a carbon/oxygen white dwarf, a subsequent core detonation is virtually inevitable.
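The level-set approach referred to here tracks the detonation front as the zero contour of a scalar field that is advanced at the local burning speed along the front normal. A minimal one-dimensional sketch, with a hypothetical constant front speed standing in for the calibrated detonation velocity:

    import numpy as np

    # The front is the zero contour of phi and moves at speed s along its
    # normal: dphi/dt + s*|grad phi| = 0 (here in 1D, first-order upwind).
    n = 400
    dx = 1.0 / n
    s = 1.0                          # hypothetical constant front speed
    dt = 0.5 * dx / s                # CFL-stable time step
    x = np.arange(n) * dx
    phi = np.abs(x - 0.5) - 0.05     # signed distance; burned region is phi < 0

    for _ in range(200):
        dminus = (phi - np.roll(phi, 1)) / dx
        dplus = (np.roll(phi, -1) - phi) / dx
        # Godunov upwind gradient magnitude for an outward-moving front
        grad = np.sqrt(np.maximum(dminus, 0.0) ** 2 + np.minimum(dplus, 0.0) ** 2)
        phi -= dt * s * grad

    burned = x[phi < 0.0]
    print(round(burned.min(), 2), round(burned.max(), 2))  # ~0.2 and ~0.8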
Abstract:
The article investigates the relationships between technological regimes and firm-level productivity performance, and it explores how such a relationship differs in different Schumpeterian patterns of innovation. The analysis makes use of a rich dataset containing data on innovation and other economic characteristics of a large representative sample of Norwegian firms in manufacturing and service industries for the period 1998–2004. First, we decompose TFP growth into technical progress and efficiency changes by means of data envelopment analysis. We then estimate an empirical model that relates these two productivity components to the characteristics of technological regimes and a set of other firm-specific factors. The results indicate that: (i) TFP growth has mainly been achieved through technical progress, while technical efficiency has on average decreased; (ii) the characteristics of technological regimes are important determinants of firm-level productivity growth, but their impacts on technical progress are different from the effects on efficiency change; (iii) the estimated model works differently in the two Schumpeterian regimes. Technical progress has been more dynamic in Schumpeter Mark II industries, while efficiency change has been more important in Schumpeter Mark I markets.
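The decomposition described here is of the Malmquist type: with output distance functions obtained from the DEA programs, TFP growth factors into efficiency change (catching up to the frontier) and technical change (movement of the frontier itself). A minimal sketch of the arithmetic, assuming the four distance-function values are given; the abstract does not state the exact formulation used:

    from math import sqrt

    def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
        """Split TFP growth into efficiency change (EC) and technical change (TC).

        d_a_b = distance function of period-b data against the period-a frontier
        (the values come from solving the DEA programs, not shown here).
        """
        ec = d_t1_t1 / d_t_t                               # catching up
        tc = sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))   # frontier shift
        return ec, tc, ec * tc                             # M = EC * TC

    print(malmquist(0.8, 1.1, 0.7, 0.9))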
Abstract:
Bit-level systolic-array structures for computing sums of products are studied in detail. It is shown that these can be subdivided into two classes and that within each class architectures can be described in terms of a set of constraint equations. It is further demonstrated that high-performance system-level functions with attractive VLSI properties can be constructed by matching data-flow geometries in bit-level and word-level architectures.
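As a word-level analogue of the sum-of-products architectures discussed, a linear systolic array for an FIR-style inner product can be simulated in a few lines; this is an illustrative sketch, not the paper's bit-level design. Weights stay resident in the cells, partial sums move one cell per tick, and samples move at half speed through two registers per cell, which is what lines the two data flows up:

    def systolic_fir(weights, samples):
        """Linear systolic array computing y[n] = sum_k w[k] * x[n-k]."""
        m = len(weights)
        x1 = [0] * m   # first sample register in each cell
        x2 = [0] * m   # second sample register in each cell
        y = [0] * m    # partial-sum register in each cell
        out = []
        for x_in in samples + [0] * (2 * m):   # extra ticks flush the pipeline
            new_x1, new_x2, new_y = [0] * m, [0] * m, [0] * m
            for k in range(m):
                xk = x_in if k == 0 else x2[k - 1]
                yk = 0 if k == 0 else y[k - 1]
                new_y[k] = yk + weights[k] * xk
                new_x1[k] = xk
                new_x2[k] = x1[k]
            x1, x2, y = new_x1, new_x2, new_y
            out.append(y[-1])
        return out[m - 1 : m - 1 + len(samples) + m - 1]   # drop fill latency

    print(systolic_fir([1, 2, 3], [1, 0, 0, 1]))  # [1, 2, 3, 1, 2, 3]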
Abstract:
The use of bit-level systolic array circuits as building blocks in the construction of larger word-level systolic systems is investigated. It is shown that the overall structure and detailed timing of such systems may be derived quite simply using the dependence graph and cut-set procedure developed by S. Y. Kung (1988). This provides an attractive and intuitive approach to the bit-level design of many VLSI signal processing components. The technique can be applied to ripple-through and partly pipelined circuits as well as fully systolic designs. It therefore provides a means of examining the relative tradeoff between levels of pipelining, chip area, power consumption, and throughput rate within a given VLSI design.
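The cut-set procedure referred to licenses adding an equal number of delays on every edge crossing a cut through the dependence graph, trading latency for pipelining without changing the computed function. A toy demonstration on a 2-tap sum of products with hypothetical coefficients: the pipelined version places one register on each edge crossing a cut between the multipliers and the adder, and the two versions agree once the one-cycle bubble is dropped.

    def combinational(xs, w0, w1):
        """y[n] = w0*x[n] + w1*x[n-1], computed in one tick per sample."""
        prev, out = 0, []
        for x in xs:
            out.append(w0 * x + w1 * prev)
            prev = x
        return out

    def pipelined(xs, w0, w1):
        """Same data-flow graph after cut-set retiming: results lag one tick."""
        prev, p0, p1, out = 0, 0, 0, []
        for x in xs + [0]:           # one extra tick to flush the pipeline
            out.append(p0 + p1)      # adder consumes last tick's products
            p0, p1 = w0 * x, w1 * prev
            prev = x
        return out[1:]               # drop the initial bubble

    xs = [3, 1, 4, 1, 5]
    assert combinational(xs, 2, -1) == pipelined(xs, 2, -1)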
Abstract:
We present BDDT, a task-parallel runtime system that dynamically discovers and resolves dependencies among parallel tasks. BDDT allows the programmer to specify detailed task footprints on any memory address range, multidimensional array tile or dynamic region. BDDT uses a block-based dependence analysis with arbitrary granularity. The analysis is applicable to existing C programs without having to restructure object or array allocation, and provides flexibility in array layouts and tile dimensions.
We evaluate BDDT using a representative set of benchmarks and compare it to SMPSs (the equivalent runtime system in StarSs) and OpenMP. BDDT performs comparably to or better than SMPSs and is able to cope with task granularity as much as one order of magnitude finer than SMPSs. Compared to OpenMP, BDDT performs up to 3.9× better for benchmarks that benefit from dynamic dependence analysis. BDDT provides additional data annotations to bypass dependence analysis. Using these annotations, BDDT also outperforms OpenMP in benchmarks where dependence analysis does not discover additional parallelism, thanks to a more efficient implementation of the runtime system.
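A block-based dependence analysis of the kind described can be sketched by mapping each task's footprint to fixed-size blocks and recording, per block, the last writer and the readers since that write; the names and granularity below are hypothetical, not BDDT's actual API:

    BLOCK = 64  # analysis granularity in bytes (configurable in the real system)

    last_writer = {}   # block index -> task id of most recent writer
    readers = {}       # block index -> tasks that read since the last write

    def register_task(task_id, footprints):
        """footprints: list of (start_address, length, mode), mode 'r' or 'w'.
        Returns the set of earlier tasks this task must wait for."""
        deps = set()
        for start, length, mode in footprints:
            for b in range(start // BLOCK, (start + length - 1) // BLOCK + 1):
                if b in last_writer:
                    deps.add(last_writer[b])        # RAW/WAW dependence
                if mode == 'w':
                    deps |= readers.get(b, set())   # WAR dependence
                    last_writer[b] = task_id
                    readers[b] = set()
                else:
                    readers.setdefault(b, set()).add(task_id)
        deps.discard(task_id)
        return deps

    print(register_task(1, [(0, 128, 'w')]))    # set()
    print(register_task(2, [(64, 64, 'r')]))    # {1}
    print(register_task(3, [(0, 256, 'w')]))    # {1, 2}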
Abstract:
Purpose: The purpose of this paper is to present an artificial neural network (ANN) model that predicts earthmoving trucks' condition level using simple predictors; the model's performance is compared to the predictive accuracy of the statistical method of discriminant analysis (DA).
Design/methodology/approach: An ANN-based predictive model is developed. The condition level predictors selected are the capacity, age, kilometers travelled and maintenance level. The relevant data set was provided by two Greek construction companies and includes the characteristics of 126 earthmoving trucks.
Findings: Data processing identifies a particularly strong connection of kilometers travelled and maintenance level with the earthmoving trucks condition level. Moreover, the validation process reveals that the predictive efficiency of the proposed ANN model is very high. Similar findings emerge from the application of DA to the same data set using the same predictors.
Originality/value: Sound prediction of earthmoving trucks' condition level reduces downtime and its adverse impact on earthmoving duration and cost, while also enhancing the effectiveness of maintenance and replacement policies. This research demonstrates that a sound condition level prediction for earthmoving trucks is achievable through the utilization of easy-to-collect data, and provides a comparative evaluation of the results of two widely applied predictive methods.
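A minimal sketch of this kind of classifier, using scikit-learn with illustrative values only; the study's 126-truck data set and exact network architecture are not public:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Columns: capacity (t), age (years), kilometres travelled, maintenance level.
    X = np.array([[25, 4, 120_000, 2], [30, 9, 380_000, 1],
                  [25, 2, 60_000, 3], [35, 12, 520_000, 1]])
    y = np.array([2, 1, 3, 1])  # condition level labels

    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000,
                                        random_state=0))
    model.fit(X, y)
    print(model.predict([[28, 6, 200_000, 2]]))  # predicted condition level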
Abstract:
The A-level Mathematics qualification is based on a compulsory set of pure maths modules and a selection of applied maths modules. The flexibility in choice of applied modules has led to concerns that many students would proceed to study engineering at university with little background in mechanics. A survey of aerospace and mechanical engineering students in our university revealed that a combination of mechanics and statistics (the basic module in both) was by far the most popular choice of optional modules in A-level Mathematics, meaning that only about one-quarter of the class had studied mechanics beyond the basic module within school mathematics. Investigation of student performance in two core, first-year engineering courses, which build on a mechanics foundation, indicated that any benefits for students who studied the extra mechanics at school were small. These results raise concerns about the depth of understanding in mechanics gained during A-level Mathematics.
Abstract:
This paper introduces hybrid address spaces as a fundamental design methodology for implementing scalable runtime systems on many-core architectures without hardware support for cache coherence. We use hybrid address spaces for an implementation of MapReduce, a programming model for large-scale data processing, and for an implementation of a remote memory access (RMA) model. Both implementations are available on the Intel SCC and are portable to similar architectures. We present the design and implementation of HyMR, a MapReduce runtime system whereby different stages and the synchronization operations between them alternate between a distributed memory address space and a shared memory address space, to improve performance and scalability. We compare HyMR to a reference implementation and find that HyMR improves performance by a factor of 1.71× over a set of representative MapReduce benchmarks. We also compare HyMR with Phoenix++, a state-of-the-art implementation for systems with hardware-managed cache coherence, in terms of scalability and sustained-to-peak data processing bandwidth, where HyMR demonstrates improvements of a factor of 3.1× and 3.2× respectively. We further evaluate our hybrid remote memory access (HyRMA) programming model and find its performance to be superior to that of message passing.
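For orientation, the map/reduce pipeline that HyMR implements has this basic shape; the distributed/shared address-space alternation is specific to non-cache-coherent hardware such as the SCC and is not reproduced in this process-based sketch:

    from collections import defaultdict
    from multiprocessing import Pool

    def map_words(chunk):                  # map stage: runs in worker processes,
        counts = defaultdict(int)          # each owning a private address space
        for word in chunk.split():
            counts[word] += 1
        return counts

    def reduce_counts(partials):           # reduce stage: merge partial results
        total = defaultdict(int)
        for part in partials:
            for word, n in part.items():
                total[word] += n
        return dict(total)

    if __name__ == "__main__":
        chunks = ["a rose is a rose", "is a rose"]
        with Pool(2) as pool:
            print(reduce_counts(pool.map(map_words, chunks)))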
Abstract:
A novel approach for the multi-objective design optimisation of aerofoil profiles is presented. The proposed method aims to exploit the relative strengths of global and local optimisation algorithms, whilst using surrogate models to limit the number of computationally expensive CFD simulations required. The local search stage utilises a re-parameterisation scheme that increases the flexibility of the geometry description by iteratively increasing the number of design variables, enabling superior designs to be generated with minimal user intervention. Capability of the algorithm is demonstrated via the conceptual design of aerofoil sections for use on a lightweight laminar flow business jet. The design case is formulated to account for take-off performance while reducing sensitivity to leading edge contamination. The algorithm successfully manipulates boundary layer transition location to provide a potential set of aerofoils that represent the trade-offs between drag at cruise and climb conditions in the presence of a challenging constraint set. Variations in the underlying flow physics between Pareto-optimal aerofoils are examined to aid understanding of the mechanisms that drive the trade-offs in objective functions.
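The surrogate-assisted pattern described follows a standard loop: fit a cheap model to the expensive evaluations gathered so far, optimise the model, evaluate the true objective at the suggested design, and refit. A minimal single-objective sketch with a stand-in function in place of the CFD solver (the paper's multi-objective formulation and re-parameterisation scheme are not reproduced):

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def expensive(x):                  # stand-in for a CFD objective evaluation
        return (x - 0.3) ** 2 + 0.05 * np.sin(25 * x)

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, 6)           # initial design of experiments
    Y = expensive(X)

    for _ in range(10):                # surrogate-assisted refinement loop
        # small smoothing keeps the fit well-posed if samples nearly coincide
        surrogate = RBFInterpolator(X[:, None], Y, smoothing=1e-9)
        res = minimize(lambda x: float(surrogate(np.atleast_2d(x))[0]),
                       x0=[X[np.argmin(Y)]], bounds=[(0.0, 1.0)])
        X = np.append(X, res.x[0])     # run the true (expensive) evaluation
        Y = np.append(Y, expensive(res.x[0]))

    print(X[np.argmin(Y)], Y.min())    # best design found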
Abstract:
The A-level Mathematics qualification is based on a compulsory set of pure maths modules and a selection of applied maths modules with the pure maths representing two thirds of the assessment. The applied maths section includes mechanics, statistics and (sometimes) decision maths. A combination of mechanics and statistics tends to be the most popular choice by far. The current study aims to understand how maths teachers in secondary education make decisions regarding the curriculum options and offers useful insight to those currently designing the new A-level specifications.
Semi-structured interviews were conducted with A-level maths teachers representing 27 grammar schools across Northern Ireland. Teachers were generally in agreement regarding the importance of pure maths and the balance between pure and applied within the A-level maths curriculum. A wide variety of opinions existed concerning the applied options. While many believe that the basic mechanics-statistics (M1-S1) combination is most accessible, it was also noted that the M1-M2 combination fits neatly alongside A-level physics. Lack of resources, timetabling constraints and competition with other subjects in the curriculum hinder uptake of A-level Further Maths.
Teachers are very conscious of the need to obtain high grades to benefit both their pupils and the school's reputation. The move to a linear assessment system in England, while Northern Ireland retains the modular system, is likely to cause some schools to review their choice of exam board, although there is disagreement as to whether a modular or linear system is more advantageous for pupils. The upcoming change in the specification also offers an opportunity to refresh the assessment and reduce the number of leading questions. However, teachers note that there are serious issues with GCSE maths, and these have implications for A-level.
Abstract:
Relative sea-level rise has been a major factor driving the evolution of reef systems during the Holocene. Most models of reef evolution suggest that reefs preferentially grow vertically during rising sea level, then laterally from windward to leeward once the reef flat reaches sea level. Continuous lagoonal sedimentation ("bucket fill") and sand apron progradation eventually lead to reef systems with totally filled lagoons. Lagoonal infilling of One Tree Reef (southern Great Barrier Reef) through sand apron accretion was examined in the context of late Holocene relative sea-level change. This analysis was conducted using sedimentological and digital terrain data supported by 50 radiocarbon ages from fossil microatolls, buried patch reefs, foraminifera and shells in sediment cores, and recalibrated previously published radiocarbon ages. This data set challenges the conceptual model of geologically continuous sediment infill during the Holocene through sand apron accretion. Rapid sand apron accretion occurred between 6000 and 3000 calibrated years before present (cal. yr B.P.), followed by only small amounts of sedimentation between 3000 cal. yr B.P. and the present, with no significant sand apron accretion in the past 2 k.y. This hiatus in sediment infill coincides with a sea-level fall of ~1-1.3 m during the late Holocene (ca. 2000 cal. yr B.P.), which would have caused the turn-off of highly productive live coral growth on the reef flats currently dominated by less productive rubble and algal flats, resulting in reduced sediment input to back-reef environments and the cessation of sand apron accretion. Given that relative sea-level variations of ~1 m were common throughout the Holocene, we suggest that this mode of sand apron development and carbonate production is applicable to most reef systems.