631 results for algorithmic skeletons
Abstract:
An unaltered rearrangement of the original computation of a neural-based predictor at the algorithmic level is introduced as a new organization. Its FPGA implementation generates circuits that are 1.7 times faster than a direct implementation of the original algorithm. This faster clock rate makes it possible to implement predictors with longer history lengths using nearly the same hardware budget.
Abstract:
In this paper we analyse the applicability and robustness of Markov chain Monte Carlo algorithms for eigenvalue problems. We restrict our consideration to real symmetric matrices. Almost Optimal Monte Carlo (MAO) algorithms for solving eigenvalue problems are formulated. Results for the structure of both the systematic and the probability error are presented. It is shown that the values of both errors can be controlled independently by different algorithmic parameters. The results show how the systematic error depends on the matrix spectrum. The analysis of the probability error shows that the closer (in some sense) the matrix under consideration is to a stochastic matrix, the smaller this error is. Sufficient conditions for constructing robust and interpolation Monte Carlo algorithms are obtained. For stochastic matrices an interpolation Monte Carlo algorithm is constructed. A number of numerical tests for large symmetric dense matrices are performed in order to study experimentally the dependence of the systematic error on the structure of the matrix spectrum. We also study how the probability error depends on the balancing of the matrix. (c) 2007 Elsevier Inc. All rights reserved.
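As a rough illustration of the random-walk machinery behind such methods, the following minimal Python sketch estimates the dominant eigenvalue of a real symmetric matrix using MAO-style transition probabilities proportional to the magnitudes of the matrix entries. The matrix, walk length and number of walks are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mc_dominant_eigenvalue(A, walk_len=8, n_walks=20000, rng=None):
    """Monte Carlo estimate of the dominant eigenvalue of a real symmetric A.

    Random walks use transition probabilities proportional to |a_ij|
    (an "almost optimal", MAO-style choice) and estimate
    lambda_max ~ E[W_m] / E[W_{m-1}], i.e. (1^T A^m 1) / (1^T A^{m-1} 1),
    which converges to the dominant eigenvalue as m grows.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    # MAO-style transition matrix: p_ij proportional to |a_ij|.
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)
    num = 0.0   # accumulates W_m        (numerator samples)
    den = 0.0   # accumulates W_{m-1}    (denominator samples)
    for _ in range(n_walks):
        k = rng.integers(n)              # uniform starting index
        W = 1.0
        for step in range(walk_len):
            j = rng.choice(n, p=P[k])
            W *= A[k, j] / P[k, j]       # importance weight along the walk
            if step == walk_len - 2:
                den += W                 # W_{m-1}
            k = j
        num += W                         # W_m
    return num / den

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = (M + M.T) / 2 + 50 * np.eye(50)  # symmetric, well-separated top eigenvalue
    print("MC estimate :", mc_dominant_eigenvalue(A, rng=1))
    print("numpy eigval:", np.linalg.eigvalsh(A)[-1])
```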
Abstract:
The diagnosis of thalassaemia in archaeological populations has long been hindered by a lack of pathognomonic features and by the non-specific nature of cribra orbitalia and porotic hyperostosis. However, clinical research has highlighted more specific diagnostic criteria for thalassaemia major and intermedia based on changes to the thorax (‘rib-within-a-rib’ and costal osteomas). A recent re-examination of 364 child skeletons from Romano-British Poundbury Camp, Dorset, revealed children with general ‘wasting’ of the bones and three children who demonstrated a variety of severe lesions (e.g. zygomatic bone and rib hypertrophy, porotic hyperostosis, rib lesions, osteopenia and pitted diaphyseal shafts) that are inconsistent with dietary deficiency alone, and more consistent with a diagnosis of genetic anaemia. Two of these children displayed rib lesions typical of those seen in modern cases of thalassaemia. The children of Poundbury Camp represent the first cases of genetic anaemia identified in a British archaeological population. As thalassaemia is a condition strongly linked to Mediterranean communities, the presence of this condition in a child from England, found within a mausoleum, suggests that the child was born to wealthy immigrant parents living in this small Roman settlement in Dorset. This paper explores the diagnostic criteria for genetic anaemia in the archaeological literature and what its presence in ancient populations can contribute to our knowledge of past human migration.
Resumo:
A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used for achieving mathemtical tractability and algorithmic efficiency. Under the mild condition of the positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the -optimality based selection algorithm as the preprocessing to select a small significant subset design matrix, the proposed zero-norm based approach offers an effective means for constructing very sparse kernel density estimates with excellent generalisation performance.
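A minimal Python sketch of the regression-style setup described here, assuming Gaussian kernels, the Parzen window estimate as the target, and the standard multiplicative update for a nonnegative quadratic programme. The unit-sum handling, the pruning threshold (a stand-in for the zero-norm-driven sparsification, which is omitted) and all parameter values are illustrative, not the paper's algorithm.

```python
import numpy as np

def gaussian_kernel(x, centres, h):
    """Normalised 1-D Gaussian kernels of width h, evaluated at points x."""
    d = x[:, None] - centres[None, :]
    return np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))

def sparse_kde_weights(x, h, n_iter=500, tol=1e-10):
    """Fit nonnegative, unit-sum kernel weights by multiplicative updates.

    Target: the Parzen window estimate evaluated at the data points.
    Objective: min ||p_par - Phi beta||^2  s.t.  beta >= 0, sum(beta) = 1.
    Since Phi has nonnegative entries, the update
    beta_i <- beta_i * c_i / (B beta)_i stays nonnegative; renormalisation
    is a simplified handling of the unit-sum constraint.
    """
    Phi = gaussian_kernel(x, x, h)      # design matrix (N x N)
    p_par = Phi.mean(axis=1)            # Parzen window estimate at the data points
    B = Phi.T @ Phi
    c = Phi.T @ p_par
    beta = np.full(len(x), 1.0 / len(x))
    for _ in range(n_iter):
        new = beta * c / (B @ beta + 1e-30)
        new /= new.sum()
        if np.max(np.abs(new - beta)) < tol:
            beta = new
            break
        beta = new
    beta[beta < 1e-6] = 0.0             # prune negligible weights -> sparse estimate
    return beta / beta.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(1, 1.0, 100)])
    beta = sparse_kde_weights(x, h=0.4)
    print("non-zero kernels:", np.count_nonzero(beta), "of", len(x))
```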
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure–function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
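A minimal Python sketch of the "algorithmic description" idea: a recursive generator grows a dendritic tree by sampling a handful of measured parameters (segment length, taper, bifurcation probability). The parameter names and values below are invented stand-ins for distributions measured from an experimental database; they are not L-NEURON's or ARBORVITAE's actual rule sets.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One cylinder of the tree: diameter, length and child segments."""
    diameter: float
    length: float
    children: list = field(default_factory=list)

# Illustrative 'fundamental parameters' (made-up values).
PARAMS = {
    "initial_diameter": (2.0, 0.3),   # mean, sd (um)
    "segment_length":   (20.0, 5.0),  # mean, sd (um)
    "taper_ratio":      0.9,          # child/parent diameter ratio
    "min_diameter":     0.3,          # stop growing below this (um)
    "bifurcation_prob": 0.35,         # chance a segment bifurcates
}

def grow(diameter, rng):
    """Recursively grow a (sub)tree from a segment of the given diameter."""
    if diameter < PARAMS["min_diameter"]:
        return None
    seg = Segment(diameter, max(1.0, rng.gauss(*PARAMS["segment_length"])))
    n_children = 2 if rng.random() < PARAMS["bifurcation_prob"] else 1
    for _ in range(n_children):
        child = grow(diameter * PARAMS["taper_ratio"], rng)
        if child:
            seg.children.append(child)
    return seg

def count_tips(seg):
    return 1 if not seg.children else sum(count_tips(c) for c in seg.children)

if __name__ == "__main__":
    rng = random.Random(42)
    tree = grow(rng.gauss(*PARAMS["initial_diameter"]), rng)
    print("terminal tips in this virtual dendrite:", count_tips(tree))
```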
Abstract:
In order to investigate how the population diversity at major Romano-British urban centres compared to small towns and military outposts, we conducted multi-isotope (carbon, nitrogen, oxygen and strontium) analyses of bones (42 individuals) and teeth (26 individuals) of human skeletons from Cataractonium/Roman Catterick in North Yorkshire (U.K.). The results suggest a markedly less diverse population at Catterick than at the larger towns. Significant differences are observed between burials from the town and fort area and the suburb of Bainesse to the south, and it is suggested that these reflect a shift to more localised recruitment for the Roman army in the Late Roman period. Isotope data for the ‘Bainesse Eunuch’, an unusual 4th-century burial that has been interpreted as the remains of a ‘transvestite’ priest of Cybele, are not ultimately conclusive but are consistent with origins in Southern Britain or areas with a similar climate abroad. This paper also presents strontium isotope data for modern vegetation samples from 17 sites in the Catterick/northern Vale of York area which contribute to a continuing effort to map the biosphere 87Sr/86Sr variation in Britain.
Abstract:
Robustness in multi-variable control system design requires that the solution to the design problem be insensitive to perturbations in the system data. In this paper we discuss measures of robustness for generalized state-space, or descriptor, systems and describe algorithmic techniques for optimizing robustness for various applications.
Abstract:
This paper describes the implementation of a semantic web search engine on conversation-style transcripts. Our choice of data is Hansard, a publicly available conversation-style transcript of parliamentary debates. The current search engine implementation on Hansard is limited to running search queries based on keywords or phrases and hence lacks the ability to make semantic inferences from user queries. By making use of knowledge such as the relationships between members of parliament, constituencies, terms of office, as well as topics of debates, the search results can be improved in terms of both relevance and coverage. Our contribution is not algorithmic; instead, we describe how we exploit a collection of external data sources, ontologies, semantic web vocabularies and named entity extraction in the analysis of the underlying semantics of user queries, as well as in the semantic enrichment of the search index, thereby improving the quality of results.
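A toy Python sketch of the kind of index enrichment described: attributes of a recognised entity are attached to the indexed passage so that a query on, say, a constituency matches a speech that mentions only the member's name. The mini knowledge base, field names and matching logic are invented for illustration; a real pipeline would use named entity extraction and linked-data vocabularies rather than a hard-coded dictionary.

```python
# Invented mini knowledge base (names and fields are hypothetical).
KNOWLEDGE_BASE = {
    "Jane Example MP": {
        "constituency": "Northtown",
        "party": "Example Party",
        "term_of_office": "2010-2019",
    },
}

def enrich(passage, kb=KNOWLEDGE_BASE):
    """Attach the attributes of every recognised entity to the indexed document."""
    doc = {"text": passage, "entities": []}
    for name, attrs in kb.items():
        if name in passage:                      # stand-in for named entity extraction
            doc["entities"].append({"name": name, **attrs})
    return doc

def matches(doc, query):
    """Match on the raw text or on any enriched attribute value."""
    q = query.lower()
    if q in doc["text"].lower():
        return True
    return any(q in str(v).lower() for e in doc["entities"] for v in e.values())

if __name__ == "__main__":
    doc = enrich("Jane Example MP asked the Minister about rail funding.")
    print(matches(doc, "Northtown"))  # True: matched via the enrichment, not the raw text
```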
Abstract:
This study explores the disease experience of children buried within the cemetery of St. Oswald’s Priory, Gloucester, from AD 1153 to 1857. Evidence for ages-at-death, infant mortality, and the prevalence of stress indicators, trauma, and pathology was compared between the early and postmedieval periods. The skeletal remains of these children provide evidence for child health spanning the economic expansion of Gloucester at St. Oswald’s, from a mostly rural parish to a graveyard catering for families from the poorer northern part of the town and the workhouse. Results showed that the children from the postmedieval period in Gloucester suffered higher rates of dental caries (38%) and congenital conditions (17.3%) than their counterparts from the early and later medieval periods. This paper serves to highlight the value of nonadult skeletal material in the interpretation of past human health in transitional societies and illustrates the wide variety of pathological conditions that can be observed in nonadult skeletons.
Abstract:
Most of the operational Sea Surface Temperature (SST) products derived from satellite infrared radiometry use multi-spectral algorithms. They show, in general, reasonable performance, with root mean square (RMS) residuals around 0.5 K when validated against buoy measurements, but they have limitations, particularly a component of the retrieval error that relates to such algorithms' limited ability to cope with the full variability of atmospheric absorption and emission. We propose to use forecast atmospheric profiles and a radiative transfer model to simulate the algorithmic errors of multi-spectral algorithms. In the practical case of SST derived from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG), we demonstrate that simulated algorithmic errors do explain a significant component of the actual errors observed for the non-linear (NL) split-window algorithm in operational use at the Centre de Météorologie Spatiale (CMS). The simulated errors, used as correction terms, significantly reduce the regional biases of the NL algorithm as well as the standard deviation of the differences with drifting buoy measurements. The availability of atmospheric profiles associated with observed satellite-buoy differences allows us to analyze the origins of the main algorithmic errors observed in the SEVIRI field of view: a negative bias in the inter-tropical zone, and a mid-latitude positive bias. We demonstrate how these errors are explained by the sensitivity of observed brightness temperatures to the vertical distribution of water vapour, propagated through the SST retrieval algorithm.
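A minimal Python sketch of the general shape of a non-linear split-window retrieval and of how a simulated algorithmic error can be applied as a correction term. The functional form is a generic NLSST-style expression and the coefficient values are invented; this is not the operational CMS/SEVIRI NL algorithm.

```python
def nl_split_window_sst(t11, t12, sst_guess, sec_theta,
                        a0=-266.0, a1=0.95, a2=0.08, a3=0.7):
    """Generic non-linear split-window SST retrieval (coefficients illustrative).

    t11, t12  : brightness temperatures (K) in the ~10.8 and ~12.0 um channels
    sst_guess : first-guess / climatological SST (degC) in the non-linear term
    sec_theta : secant of the satellite zenith angle
    Returns an SST in degC.
    """
    dt = t11 - t12  # split-window difference, sensitive to column water vapour
    return a0 + a1 * t11 + a2 * dt * sst_guess + a3 * dt * (sec_theta - 1.0)

def corrected_sst(t11_obs, t12_obs, t11_sim, t12_sim, sst_profile,
                  sst_guess, sec_theta):
    """Subtract a simulated algorithmic error from the observed retrieval.

    t11_sim, t12_sim come from a radiative transfer model run on a forecast
    atmospheric profile whose surface temperature is sst_profile (K); the
    difference between retrieving on those simulated brightness temperatures
    and sst_profile is the algorithmic error for that atmospheric state.
    """
    retrieved = nl_split_window_sst(t11_obs, t12_obs, sst_guess, sec_theta)
    simulated = nl_split_window_sst(t11_sim, t12_sim, sst_guess, sec_theta)
    algorithmic_error = simulated - (sst_profile - 273.15)
    return retrieved - algorithmic_error

if __name__ == "__main__":
    # Invented numbers, only to show the call pattern.
    print(corrected_sst(t11_obs=292.1, t12_obs=290.4,
                        t11_sim=292.0, t12_sim=290.6,
                        sst_profile=294.0, sst_guess=20.5, sec_theta=1.1))
```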
Abstract:
Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
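A minimal mpi4py sketch of the step this work targets: the per-iteration reduction of partial centroid sums in parallel k-means. It uses a conventional global MPI.Allreduce as the baseline; the paper's dynamic-group protocol replaces this global step so that, for each centroid, only the processes holding nearby data need to communicate. Data sizes, initialisation and the fixed iteration count are illustrative.

```python
# Requires numpy and mpi4py; run e.g. with: mpiexec -n 4 python kmeans_allreduce.py
import numpy as np
from mpi4py import MPI

def parallel_kmeans(local_x, k, n_iter=20):
    comm = MPI.COMM_WORLD
    d = local_x.shape[1]
    # All ranks start from the same centroids, broadcast from rank 0.
    centroids = np.empty((k, d))
    if comm.rank == 0:
        idx = np.random.default_rng(0).choice(len(local_x), k, replace=False)
        centroids[:] = local_x[idx]
    comm.Bcast(centroids, root=0)

    for _ in range(n_iter):
        # Local assignment step: nearest centroid for each local point.
        dists = ((local_x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)

        # Local partial sums and counts per centroid.
        sums = np.zeros((k, d))
        counts = np.zeros(k)
        for j in range(k):
            mask = labels == j
            sums[j] = local_x[mask].sum(axis=0)
            counts[j] = mask.sum()

        # Global communication step: this is the cost the dynamic-group
        # collective in the paper is designed to reduce.
        global_sums = np.empty_like(sums)
        global_counts = np.empty_like(counts)
        comm.Allreduce(sums, global_sums, op=MPI.SUM)
        comm.Allreduce(counts, global_counts, op=MPI.SUM)

        nonempty = global_counts > 0
        centroids[nonempty] = global_sums[nonempty] / global_counts[nonempty, None]
    return centroids

if __name__ == "__main__":
    rng = np.random.default_rng(MPI.COMM_WORLD.rank)
    print(parallel_kmeans(rng.standard_normal((1000, 2)), k=3))
```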
Abstract:
Changes in users’ requirements drive the evolution of an information system. Consequently, such evolution moves the atomic services that provide functional operations from one state of composition to another. A challenging issue associated with such evolution of the state of service composition is to ensure that the resulting service composition remains rational. This paper presents a method based on a Service Composition Atomic-Operation Set (SCAOS). SCAOS defines 2 classes of atomic operations and 13 kinds of basic service compositions to support the state change process using workflow nets. The workflow net provides the algorithmic capability to compose the required services rationally and to keep any changed composition of the services rational as well. This method can improve the adaptability of information systems to the ever-changing business requirements of a dynamic environment.
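A toy Python sketch of the workflow-net machinery such a method builds on: places, transitions, the standard firing rule, and a naive check that the composed net can reach its final marking (one facet of the soundness/rationality property a change operation must preserve). The two-service sequential composition and all names are invented; SCAOS's actual atomic operations and 13 composition patterns are not reproduced here.

```python
# Toy workflow net: i -> [invoke_A] -> p1 -> [invoke_B] -> o
# (a sequential composition of two hypothetical atomic services).
NET = {
    "invoke_A": {"in": ["i"],  "out": ["p1"]},
    "invoke_B": {"in": ["p1"], "out": ["o"]},
}

def enabled(marking, t):
    """A transition is enabled when every input place holds a token."""
    return all(marking.get(p, 0) > 0 for p in NET[t]["in"])

def fire(marking, t):
    """Consume one token from each input place, produce one on each output place."""
    m = dict(marking)
    for p in NET[t]["in"]:
        m[p] -= 1
    for p in NET[t]["out"]:
        m[p] = m.get(p, 0) + 1
    return m

def is_final(marking):
    """Final marking: a single token, sitting on the sink place o."""
    return marking.get("o", 0) == 1 and sum(marking.values()) == 1

def reaches_final(marking):
    """Naive reachability check (assumes an acyclic net)."""
    if is_final(marking):
        return True
    return any(reaches_final(fire(marking, t)) for t in NET if enabled(marking, t))

if __name__ == "__main__":
    print(reaches_final({"i": 1}))  # True: the composed workflow can complete
```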
Abstract:
Sea surface temperature (SST) datasets have been generated from satellite observations for the period 1991–2010, intended for use in climate science applications. Attributes of the datasets specifically relevant to climate applications are: first, independence from in situ observations; second, effort to ensure homogeneity and stability through the time-series; third, context-specific uncertainty estimates attached to each SST value; and, fourth, provision of estimates of both skin SST (the fundamental measurement, relevant to air-sea fluxes) and SST at standard depth and local time (partly model mediated, enabling comparison with historical in situ datasets). These attributes in part reflect requirements solicited from climate data users prior to and during the project. Datasets consisting of SSTs on satellite swaths are derived from the Along-Track Scanning Radiometers (ATSRs) and Advanced Very High Resolution Radiometers (AVHRRs). These are then used as the sole SST inputs to a daily, spatially complete, analysis SST product, with a latitude-longitude resolution of 0.05° and good discrimination of ocean surface thermal features. A product user guide is available, linking to reports describing the datasets’ algorithmic basis, validation results, format, uncertainty information and experimental use in trial climate applications. Future versions of the datasets will span at least 1982–2015, better addressing the need in many climate applications for stable records of global SST that are at least 30 years in length.
Abstract:
Historical, artefactual and place-name evidence indicates that Scandinavian migrants moved to eastern England in the ninth century AD, settling in the Danelaw. However, only a handful of characteristically Scandinavian burials have been found in the region. One widely held explanation is that most of these Scandinavian settlers quickly adopted local Christian burial customs, thus leaving Scandinavians indistinguishable from the Anglo-Saxon population. We undertook osteological and isotopic analysis to investigate the presence of first-generation Scandinavian migrants. Burials from Masham were typical of the later Anglo-Saxon period and included men, women and children. The location and positioning of the four adult burials from Coppergate, however, are unusual for Anglo-Scandinavian York. None of the skeletons showed evidence of interpersonal violence. Isotopic evidence did not suggest a marine component in the diet of either group, but revealed migration on a regional, and possibly an international, scale. Combined strontium and oxygen isotope analysis should be used to investigate further both regional and Scandinavian migration in the later Anglo-Saxon period.