149 results for Gradient descent algorithms


Relevance:

20.00%

Publisher:

Abstract:

Potentially inappropriate prescribing in older people is common in primary care and can result in increased morbidity, adverse drug events, hospitalizations and mortality. In Ireland, 36% of those aged 70 years or over received at least one potentially inappropriate medication, with an associated expenditure of over €45 million. The main objective of this study is to determine the effectiveness and acceptability of a complex, multifaceted intervention in reducing the level of potentially inappropriate prescribing in primary care.

Relevance:

20.00%

Publisher:

Abstract:

Biotic communities in Antarctic terrestrial ecosystems are relatively simple and often lack higher trophic levels (e.g. predators); thus, it is often assumed that species' distributions are mainly affected by abiotic factors such as climatic conditions, which change with increasing latitude, altitude and/or distance from the coast. However, it is becoming increasingly apparent that factors other than geographical gradients affect the distribution of organisms with low dispersal capability, such as the terrestrial arthropods. In Victoria Land (East Antarctica) the distributions of springtail (Collembola) and mite (Acari) species vary at scales ranging from a few square centimetres to regional and continental. Different species show different scales of variation that relate to factors such as local geological and glaciological history and biotic interactions, but only weakly to latitudinal/altitudinal gradients. Here, we review the relevant literature and outline more appropriate sampling designs, as well as suitable modelling techniques (e.g. linear mixed models and eigenvector mapping), that can more adequately identify the range of factors responsible for the distribution of terrestrial arthropods in Antarctica.
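
As a rough illustration of the kind of analysis advocated here, the following sketch fits a linear mixed model with a random site intercept using statsmodels; the covariates, grouping structure and synthetic data are hypothetical and not taken from the review.

```python
# Minimal sketch of a linear mixed model for arthropod abundance (illustrative only;
# the variables, grouping and synthetic data below are hypothetical, not from the paper).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 80
data = pd.DataFrame({
    "site": np.repeat([f"site{i}" for i in range(8)], n // 8),  # random-effect grouping
    "altitude": rng.uniform(0, 1500, n),        # m a.s.l. (hypothetical covariate)
    "dist_coast": rng.uniform(0, 50, n),        # km from the coast (hypothetical)
})
# Synthetic response: weak gradient effects plus site-level noise.
site_effect = dict(zip(data["site"].unique(), rng.normal(0, 2, 8)))
data["abundance"] = (10 - 0.002 * data["altitude"] - 0.05 * data["dist_coast"]
                     + data["site"].map(site_effect) + rng.normal(0, 1, n))

# Fixed effects = geographic gradients; a random intercept per site absorbs
# local geological/glaciological and biotic variation.
model = smf.mixedlm("abundance ~ altitude + dist_coast", data, groups=data["site"])
print(model.fit().summary())
```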

Relevance:

20.00%

Publisher:

Abstract:

The bioavailability of soil arsenic (As) is determined by its speciation in soil solution, i.e., arsenite [As(III)] or arsenate [As(V)]. Soil bioavailability studies require suitable methods that can cope with small volumes of soil solution and speciate them directly after sampling, thereby minimising any change in As speciation during sample collection. In this study, we tested a self-made microcartridge for separating the two As species and compared it to a commercially available cartridge. In addition, the diffusive gradients in thin films (DGT) technique, in combination with the microcartridges, was applied to synthetic solutions and to a soil spiked with As. This combination was used to improve the assessment of available inorganic As species with ferrihydrite (FH)-DGT, in order to validate the technique for environmental analysis, mainly in soils. The self-made microcartridge was effective in separating As(III) from As(V) in solution, with detection by inductively coupled plasma optical emission spectrometry (ICP-OES) in volumes of only 3 mL. The DGT study also showed that the FH-based binding gels are effective for As(III) and As(V) assessment in solutions with As and P concentrations and ionic strengths commonly found in soils. The FH-DGT was tested on flooded and unflooded As-spiked soils, and recoveries of As(III) and As(V) were 85–104% of the total dissolved As. This study shows that DGT with an FH-based binding gel is robust for assessing inorganic As species in soils.
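
For context, DGT infers the labile solution concentration from the mass accumulated in the binding gel through the standard relation C_DGT = M·Δg/(D·A·t); the sketch below only illustrates that calculation, and all numerical values are hypothetical rather than measurements from this study.

```python
# Illustrative DGT calculation (standard relation C_DGT = M * dg / (D * A * t));
# all numbers below are hypothetical, not data from the study.
def dgt_concentration(mass_ng, gel_thickness_cm, diff_coeff_cm2_s, area_cm2, time_s):
    """Return the DGT-labile concentration in ng/mL (equivalently ug/L)."""
    return (mass_ng * gel_thickness_cm) / (diff_coeff_cm2_s * area_cm2 * time_s)

# Hypothetical deployment: 50 ng As accumulated over 24 h.
c_dgt = dgt_concentration(mass_ng=50.0,
                          gel_thickness_cm=0.094,    # diffusive gel + filter (assumed)
                          diff_coeff_cm2_s=6.0e-6,   # approximate D for an As species
                          area_cm2=3.14,
                          time_s=24 * 3600)
print(f"C_DGT ~ {c_dgt:.2f} ng/mL")
```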

Relevance:

20.00%

Publisher:

Abstract:

Processor architecture has taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors because of the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a mode of strong scaling because of memory bandwidth constraints. In strong scaling, increasingly fine-grained parallelism must be extracted in order to keep all processing cores busy.

Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer of identifying precisely all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution, based on a description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel, taking into account the dependences specified in the task graph.
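
As a concrete illustration of how such a runtime can derive dependences from declared accesses, here is a minimal, sequential Python sketch; it builds read-after-write, write-after-read and write-after-write edges from per-object bookkeeping, and is an assumption-laden toy rather than the system studied in the paper.

```python
# Toy task-graph construction from declared memory accesses (illustrative only).
# For each object we track the last writer and the readers since that write,
# and add edges for read-after-write, write-after-read and write-after-write.
from collections import defaultdict

class TaskGraph:
    def __init__(self):
        self.edges = defaultdict(set)                  # task -> successor tasks
        self.last_writer = {}                          # object -> last writing task
        self.readers_since_write = defaultdict(set)    # object -> reading tasks

    def add_task(self, task, inputs=(), outputs=()):
        for obj in inputs:                             # RAW: depend on the last writer
            if obj in self.last_writer:
                self.edges[self.last_writer[obj]].add(task)
            self.readers_since_write[obj].add(task)
        for obj in outputs:                            # WAR + WAW dependences
            for reader in self.readers_since_write[obj]:
                if reader != task:
                    self.edges[reader].add(task)
            if obj in self.last_writer and self.last_writer[obj] != task:
                self.edges[self.last_writer[obj]].add(task)
            self.last_writer[obj] = task
            self.readers_since_write[obj] = set()

g = TaskGraph()
g.add_task("produce", outputs=["a"])
g.add_task("consume1", inputs=["a"])
g.add_task("consume2", inputs=["a"])
g.add_task("overwrite", outputs=["a"])                 # must wait for both consumers
print(dict(g.edges))
```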

Several papers report significant overheads for task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output and in/out annotations on task arguments, as well as commutative in/out annotations and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for their efficient management. We then present three schemes to manage task graphs, building on graph representations, hypergraphs and lists. We also consider a fourth, edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
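
The abstract does not detail the edge-less scheme, so the following is only a guess at the general idea of synchronizing tasks with integers rather than explicit edges: per-object ticket and completion counters, with readers serialized as well for brevity. It is a hypothetical toy, not the paper's scheme.

```python
# Toy edge-less synchronization using integer tickets per object (illustrative;
# NOT the scheme from the paper - it even serializes readers for simplicity).
class Obj:
    def __init__(self):
        self.issued = 0   # tickets handed out so far
        self.done = 0     # tasks on this object that have completed

class Task:
    def __init__(self, name, objs):
        self.name = name
        self.tickets = []                # (object, ticket) pairs
        for o in objs:
            self.tickets.append((o, o.issued))
            o.issued += 1

    def ready(self):
        # Runnable once every accessed object has retired all earlier tickets.
        return all(o.done == t for o, t in self.tickets)

    def finish(self):
        for o, _ in self.tickets:
            o.done += 1

a = Obj()
t1, t2 = Task("t1", [a]), Task("t2", [a])
print(t1.ready(), t2.ready())   # True False
t1.finish()
print(t2.ready())               # True
```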

Relevance:

20.00%

Publisher:

Abstract:

Many modern networks are reconfigurable, in the sense that the topology of the network can be changed by the nodes in the network. For example, peer-to-peer, wireless and ad-hoc networks are reconfigurable. More generally, many social networks, such as a company's organizational chart; infrastructure networks, such as an airline's transportation network; and biological networks, such as the human brain, are also reconfigurable. Modern reconfigurable networks have a complexity unprecedented in the history of engineering, resembling a dynamic, evolving living animal more than a structure of steel designed from a blueprint. Unfortunately, our mathematical and algorithmic tools have not yet developed enough to handle this complexity and fully exploit the flexibility of these networks. We believe that it is no longer possible to build networks that are scalable and never have node failures. Instead, these networks should be able to admit small, and perhaps periodic, failures and still recover, much as skin heals from a cut. This process, in which the network recovers itself by maintaining key invariants in response to attack by a powerful adversary, is what we call self-healing. Here, we present several fast and provably good distributed algorithms for self-healing in reconfigurable dynamic networks. Each of these algorithms has different properties and a different set of guarantees and limitations. We also discuss future directions and theoretical questions we would like to answer.
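
As a greatly simplified, centralized illustration of the self-healing idea (not one of the distributed algorithms presented here), the sketch below assumes one possible repair rule: when the adversary deletes a node, its former neighbors are stitched into a ring so the network stays connected.

```python
# Toy self-healing repair (illustrative only): when a node is deleted by the
# adversary, stitch its former neighbors into a ring so that any path that
# went through the deleted node can be rerouted, preserving connectivity.
import networkx as nx

def heal_after_deletion(g, victim):
    neighbors = sorted(g.neighbors(victim))
    g.remove_node(victim)
    # Reconnection rule (assumed for illustration): ring over the orphaned neighbors.
    for u, v in zip(neighbors, neighbors[1:] + neighbors[:1]):
        if u != v:
            g.add_edge(u, v)

g = nx.star_graph(5)          # node 0 is the hub, 1..5 are leaves
heal_after_deletion(g, 0)     # adversary deletes the hub
print(nx.is_connected(g))     # True: the leaves were stitched into a ring
```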