910 results for Dense Set
Abstract:
We analyze the sequences of round-off errors of the orbits of a discretized planar rotation, from a probabilistic angle. It was shown [Bosio & Vivaldi, 2000] that for a dense set of parameters, the discretized map can be embedded into an expanding p-adic dynamical system, which serves as a source of deterministic randomness. For each parameter value, these systems can generate infinitely many distinct pseudo-random sequences over a finite alphabet, whose average period is conjectured to grow exponentially with the bit-length of the initial condition (the seed). We study some properties of these symbolic sequences, deriving a central limit theorem for the deviations between round-off and exact orbits, and obtain bounds concerning repetitions of words. We also explore some asymptotic problems computationally, verifying, among other things, that the occurrence of words of a given length is consistent with that of an abstract Bernoulli sequence.
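To make the setting concrete, here is a minimal Python sketch of iterating a discretized planar rotation and collecting its round-off errors (assuming the standard lattice form (x, y) -> (floor(lambda*x) - y, x) with lambda = 2cos(theta); the parameter and seed below are illustrative only, not taken from the paper):

    import math

    def roundoff_sequence(seed, lam, n_steps):
        """Iterate the discretized rotation (x, y) -> (floor(lam*x) - y, x)
        and record the round-off error committed at each step."""
        x, y = seed
        errors = []
        for _ in range(n_steps):
            exact = lam * x
            rounded = math.floor(exact)
            errors.append(rounded - exact)   # each error lies in (-1, 0]
            x, y = rounded - y, x
        return errors

    # lam = 2*cos(2*pi/5) corresponds to a rotation by angle 2*pi/5.
    lam = 2 * math.cos(2 * math.pi / 5)
    print(roundoff_sequence((1, 0), lam, 10))

The error sequence generated from the integer seed is the kind of symbolic data whose statistics (central limit behaviour, word repetitions) the abstract describes.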
Abstract:
The main purpose of a gene interaction network is to map the relationships of the genes that remain out of sight when a genomic study is tackled. DNA microarrays allow the measurement of the expression of thousands of genes at the same time. These data constitute the numeric seed for the induction of the gene networks. In this paper, we propose a new approach to building gene networks by means of Bayesian classifiers, variable selection and bootstrap resampling. The interactions induced by the Bayesian classifiers are based both on the expression levels and on the phenotype information of the supervised variable. Feature selection and bootstrap resampling add reliability and robustness to the overall process by removing false-positive findings. The consensus among all the induced models produces a hierarchy of dependences and, thus, of variables. Biologists can define the depth level of the model hierarchy, so the set of interactions and genes involved can vary from a sparse to a dense set. Experimental results show that these networks perform well on classification tasks. The biological validation matches previous biological findings and opens new hypotheses for future studies.
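A minimal sketch of the bootstrap-consensus idea (assumptions: a simple gene-phenotype correlation score stands in for the Bayesian-classifier-based interaction scoring, and the data are synthetic):

    import numpy as np

    def consensus_genes(X, y, n_boot=100, top_k=10, rng=None):
        """Selection frequency of each gene across bootstrap resamples."""
        rng = np.random.default_rng(rng)
        n, p = X.shape
        counts = np.zeros(p)
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)                 # bootstrap resample
            Xb, yb = X[idx], y[idx]
            score = np.abs(np.corrcoef(Xb.T, yb)[-1, :-1])   # gene vs phenotype
            counts[np.argsort(score)[-top_k:]] += 1          # mark selected genes
        return counts / n_boot

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 200))                # 60 samples, 200 genes
    y = (X[:, 0] + X[:, 1] + rng.normal(size=60) > 0).astype(float)
    freq = consensus_genes(X, y, rng=1)
    print(np.argsort(freq)[-5:])                  # most consistently selected genes

Thresholding the consensus frequency at different depth levels yields anything from a sparse to a dense network, as in the abstract.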
Abstract:
Six independent studies have identified linkage to chromosome 18 for developmental dyslexia or general reading ability. Until now, no candidate genes have been identified to explain this linkage. Here, we set out to identify the gene(s) conferring susceptibility by a two-stage strategy of linkage and association analysis. Methodology/Principal Findings: Linkage analysis: 264 UK families and 155 US families, each containing at least one child diagnosed with dyslexia, were genotyped with a dense set of microsatellite markers on chromosome 18. Association analysis: using a discovery sample of 187 UK families, nearly 3000 SNPs were genotyped across the chromosome 18 dyslexia susceptibility candidate region. Following association analysis, the top-ranking SNPs were then genotyped in the remaining samples. The linkage analysis revealed a broad signal that spans approximately 40 Mb from 18p11.2 to 18q12.2. Following the association analysis and subsequent replication attempts, we observed consistent association with the same SNPs in three genes: melanocortin 5 receptor (MC5R), dymeclin (DYM) and neural precursor cell expressed, developmentally down-regulated 4-like (NEDD4L). Conclusions: Along with already published biological evidence, MC5R, DYM and NEDD4L make attractive candidates for dyslexia susceptibility genes. However, further replication and functional studies are still required.
Abstract:
We show that a set of fundamental solutions to the parabolic heat equation, with each element of the set corresponding to a point source located on a given surface and the source points lying densely on this surface, constitutes a linearly independent set that is dense with respect to the standard inner product of square-integrable functions, on both the lateral and time boundaries. This result leads naturally to a method of numerically approximating solutions to the parabolic heat equation known as the method of fundamental solutions (MFS). A discussion of the convergence of such an approximation is included.
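A hedged sketch of an MFS approximation for the 1-D heat equation u_t = u_xx (assumptions: source points placed outside the space-time domain and collocation at a single time slice for brevity; a full MFS would collocate on the lateral and time boundaries):

    import numpy as np

    def heat_kernel(x, t, xi, tau):
        """Fundamental solution of u_t = u_xx with a source at (xi, tau), valid for t > tau."""
        dt = t - tau
        return np.exp(-(x - xi) ** 2 / (4 * dt)) / np.sqrt(4 * np.pi * dt)

    # Sources lie off the domain: x outside (0, 1) and tau before the initial time.
    sources = [(-0.5, -0.1), (1.5, -0.1), (-0.3, -0.3), (1.3, -0.3), (-0.8, -0.2), (1.8, -0.2)]
    xs = np.linspace(0.0, 1.0, 25)                               # collocation points at t = 0.5
    u_target = np.sin(np.pi * xs) * np.exp(-np.pi ** 2 * 0.5)    # a known exact solution

    A = np.column_stack([heat_kernel(xs, 0.5, xi, tau) for xi, tau in sources])
    coef, *_ = np.linalg.lstsq(A, u_target, rcond=None)          # fit source strengths
    print(np.max(np.abs(A @ coef - u_target)))                   # collocation residual

The density result stated in the abstract is what justifies expecting such residuals to shrink as more source points are added.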
Abstract:
Let U be an open subset of a separable Banach space. Let F be the collection of all holomorphic mappings f from the open unit disc D ⊂ C into U such that f(D) is dense in U. We prove the lineability and density of F in appropriate spaces for different choices of U.
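For context, lineability is meant in the standard sense; in LaTeX notation:

    A subset $F$ of a vector space $X$ is \emph{lineable} if
    $F \cup \{0\}$ contains an infinite-dimensional linear subspace of $X$.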
Abstract:
∗ The first and third authors were partially supported by the National Fund for Scientific Research at the Bulgarian Ministry of Science and Education under grant MM-701/97.
Abstract:
Loss of magnetic medium solids from dense medium circuits is a substantial contributor to operating cost. Much of this loss is by way of wet drum magnetic separator effluent. A model of the separator would be useful for process design, optimisation and control. A review of the literature established that although various rules of thumb exist, largely based on empirical or anecdotal evidence, there is no model of magnetics recovery in a wet drum magnetic separator which includes as inputs all significant machine and operating variables. A series of trials, in both factorial experiments and single-variable experiments, was therefore carried out using a purpose-built rig which featured a small industrial-scale (700 mm lip length, 900 mm diameter) wet drum magnetic separator. A substantial data set of 191 trials was generated in the work. The results of the factorial experiments were used to identify the variables having a significant effect on magnetics recovery. Observations carried out as an adjunct to this work, as well as magnetic theory, suggest that the capture of magnetic particles in the wet drum magnetic separator is a flocculation process. Such a process should be defined by a flocculation rate and a flocculation time, the latter being determined by the volumetric flowrate and the volume within the separation zone. A model based on this concept and containing adjustable parameters was developed. This model was then fitted to a randomly chosen 80% of the data, and validated by application to the remaining 20%. The model is shown to provide a satisfactory fit to the data over three orders of magnitude of magnetics loss.
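The abstract does not give the model's functional form; as a hedged illustration of the fit-and-validate workflow, the sketch below assumes a first-order flocculation form R = 1 - exp(-k * V_zone / Q) and synthetic data:

    import numpy as np

    rng = np.random.default_rng(0)
    Q = rng.uniform(10, 200, size=191)          # volumetric flowrate (191 trials, as in the work)
    V_zone = 0.05                               # assumed separation-zone volume
    k_true = 600.0                              # assumed flocculation-rate parameter
    recovery = 1 - np.exp(-k_true * V_zone / Q) + rng.normal(0, 0.005, 191)

    idx = rng.permutation(191)                  # random 80/20 split, as in the abstract
    train, test = idx[:153], idx[153:]

    ks = np.linspace(100, 1200, 2000)           # one-parameter grid-search fit
    sse = [np.sum((recovery[train] - (1 - np.exp(-k * V_zone / Q[train]))) ** 2) for k in ks]
    k_fit = ks[np.argmin(sse)]
    rmse = np.sqrt(np.mean((recovery[test] - (1 - np.exp(-k_fit * V_zone / Q[test]))) ** 2))
    print(k_fit, rmse)                          # fitted rate and hold-out error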
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer seeks the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit implementation of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
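As a hedged illustration of how memory bandwidth bounds the sustained GFLOPs of blocked dense matrix multiplication (a roofline-style estimate; the paper's exact cycle equation is not reproduced, and the tile size and peak below are assumed values):

    def sustained_gflops(peak_gflops, bandwidth_gbs, tile_b, bytes_per_word=8):
        """Roofline estimate for blocked GEMM with B x B tiles in local memory.
        Each tile pair moves ~2*B^2 words for ~2*B^3 flops, so the arithmetic
        intensity is about B / bytes_per_word flops per byte."""
        intensity = tile_b / bytes_per_word
        return min(peak_gflops, bandwidth_gbs * intensity)

    # With 16 GB/s and assumed 256x256 double-precision tiles, the memory roof is
    # 16 * 32 = 512 GFLOPs, the same order as the 464 GFLOPs (71%) reported above.
    print(sustained_gflops(peak_gflops=654, bandwidth_gbs=16, tile_b=256))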
Abstract:
To assist cattle producers in the transition from microsatellite (MS) to single nucleotide polymorphism (SNP) genotyping for parental verification, we previously devised an effective and inexpensive method to impute MS alleles from SNP haplotypes. While the reported method was verified with only a limited data set (N = 479) from Brown Swiss, Guernsey, Holstein, and Jersey cattle, some of the MS-SNP haplotype associations were concordant across these phylogenetically diverse breeds. This implied that some haplotypes predate modern breed formation and remain in strong linkage disequilibrium. To expand the utility of MS allele imputation across breeds, MS and SNP data from more than 8000 animals representing 39 breeds (Bos taurus and B. indicus) were used to predict 9410 SNP haplotypes, incorporating an average of 73 SNPs per haplotype, for which alleles from 12 MS markers could be accurately imputed. Approximately 25% of the MS-SNP haplotypes were present in multiple breeds (N = 2 to 36 breeds). These shared haplotypes allowed for MS imputation in breeds that were not represented in the reference population, with only a small increase in Mendelian inheritance inconsistencies. Our reported reference haplotypes can be used for any cattle breed, and the reported methods can be applied to any species to aid the transition from MS to SNP genetic markers. While ~91% of the animals with imputed alleles for 12 MS markers had ≤1 Mendelian inheritance conflict with their parents' reported MS genotypes, this figure was 96% for our reference animals, indicating potential errors in the reported MS genotypes. The workflow we suggest corrects for genotyping errors and rare haplotypes by MS-genotyping animals whose imputed MS alleles fail parentage verification and then incorporating those animals into the reference dataset.
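A minimal sketch of haplotype-based MS imputation (assumptions: a reference table mapping SNP-haplotype tuples to MS alleles, with a Hamming-distance fallback for unseen haplotypes; the actual method's haplotyping and error handling are richer):

    def impute_ms(haplotype, reference):
        """reference: dict mapping SNP-haplotype tuples -> MS allele."""
        if haplotype in reference:
            return reference[haplotype]
        # Fall back to the nearest reference haplotype; such fallbacks are one
        # possible source of the Mendelian inconsistencies discussed above.
        nearest = min(reference,
                      key=lambda h: sum(a != b for a, b in zip(h, haplotype)))
        return reference[nearest]

    # Illustrative reference: 4-SNP haplotypes -> MS allele sizes (base pairs).
    ref = {(0, 1, 1, 0): 182, (1, 1, 0, 0): 186, (0, 0, 1, 1): 190}
    print(impute_ms((0, 1, 1, 0), ref))   # exact match -> 182
    print(impute_ms((0, 1, 0, 0), ref))   # nearest reference haplotype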
Abstract:
We present an energy-based approach to estimate a dense disparity map from a pair of weakly calibrated stereoscopic images while preserving the discontinuities resulting from image boundaries. We first derive a simplified expression for the disparity that allows us to estimate it from a stereo pair of images using an energy minimization approach. We assume that the epipolar geometry is known, and we include this information in the energy model. Discontinuities are preserved by means of a regularization term based on the Nagel-Enkelmann operator. We investigate the associated Euler-Lagrange equation of the energy functional, and we approach the solution of the underlying partial differential equation (PDE) using a gradient descent method. The resulting parabolic problem has a unique solution. In order to reduce the risk of being trapped in irrelevant local minima during the iterations, we use a focusing strategy based on a linear scale-space. Experimental results on both synthetic and real images are presented to illustrate the capabilities of this PDE and scale-space based method.
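A hedged 1-D sketch of the gradient-descent iteration (assumptions: a rectified pair so disparities are horizontal shifts, and a simple quadratic regularizer standing in for the Nagel-Enkelmann operator):

    import numpy as np

    def disparity_1d(IL, IR, alpha=5.0, tau=0.05, n_iter=10000):
        """Minimize sum (IL(x) - IR(x - d))^2 + alpha * |d'|^2 by gradient descent."""
        x = np.arange(IL.size, dtype=float)
        d = np.zeros_like(IL)
        IRp = np.gradient(IR)                          # derivative of the right image
        for _ in range(n_iter):
            warped = np.interp(x - d, x, IR)           # IR sampled at x - d
            slope = np.interp(x - d, x, IRp)           # IR'(x - d)
            residual = IL - warped
            lap = np.roll(d, 1) + np.roll(d, -1) - 2 * d      # discrete d''
            d = d + tau * (-residual * slope + alpha * lap)   # descend the energy
        return d

    # Synthetic pair: IL is IR shifted by 3 pixels, so the true disparity is 3.
    x = np.arange(200, dtype=float)
    IR = np.sin(x / 10.0)
    IL = np.interp(x - 3.0, x, IR)
    print(disparity_1d(IL, IR)[50:150].mean())         # should be close to 3

The focusing strategy in the abstract plays a role similar to a good initialization of d here: it keeps the descent away from irrelevant local minima of the non-convex data term.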
Abstract:
The relationship between genetic polymorphism of populations and environmental variability: an application of fitness-set theory. The Quantitative Fitness-Set Model (QFM) is an extension of fitness-set theory. The QFM can represent gradations between coarse-grained and fine-grained regular fluctuations of two environments. Environment- and species-specific parameters, as well as the resulting grain, are quantifiable. Experimental data can be analyzed, and the QFM proves very accurate in large populations, which is supported by the discrete parameter space. Small populations and/or high genetic diversity lead to estimation inaccuracies, which are also to be expected in natural populations. A population-size-dependent uncertainty value extends the point estimate of a parameter set to an interval estimate. In finite populations these intervals act as fitness bands. This yields the hypothesis that generalists evolve in species living in dense continuous fitness bands, and specialists in discrete fitness bands. Asynchronous reproductive strategies lead to the preservation of genetic diversity. A shift from coarse-grained to fine-grained environmental variation favors the specialized genotypes. From this opening for disruptive selection, the hypothesis of speciation in transition scenarios from coarse-grained to fine-grained environmental variation can be formulated. In the reverse case, loss of diversity and stabilizing selection are to be expected. This thus offers a process-oriented explanation for the species richness of the (fine-grained) tropics compared with the species-poorer (coarse-grained) temperate zones, which are subject to seasonal fluctuations.
Abstract:
Formation and discharge of dense-core secretory vesicles depend on controlled rearrangement of the core proteins during their assembly and dispersal. The ciliate Tetrahymena thermophila offers a simple system in which the mechanisms may be studied. Here we show that most of the core consists of a set of polypeptides derived proteolytically from five precursors. These share little overall amino acid identity but are nonetheless predicted to have structural similarity. In addition, sites of proteolytic processing are notably conserved and suggest that specific endoproteases as well as carboxypeptidase are involved in core maturation. In vitro binding studies and sequence analysis suggest that the polypeptides bind calcium in vivo. Core assembly and postexocytic dispersal are compartment-specific events. Two likely regulatory factors are proteolytic processing and exposure to calcium. We asked whether these might directly influence the conformations of core proteins. Results using an in vitro chymotrypsin accessibility assay suggest that these factors can induce sequential structural rearrangements. Such progressive changes in polypeptide folding may underlie the mechanisms of assembly and of rapid postexocytic release. The parallels between dense-core vesicles in different systems suggest that similar mechanisms are widespread in this class of organelles.
Abstract:
Very large spatially referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood estimation is used. Although the storage requirements scale only linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more. Other mechanisms for allowing parallel computation, such as Grid-based systems, are also becoming increasingly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic datasets, and go on to show how the methods allow maximum-likelihood-based inference on the exhaustive Walker Lake dataset.
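A hedged sketch of a block-composite likelihood in the spirit of Vecchia [1988] and Tresp [2000] (assumptions: a 1-D exponential covariance and equal-size blocks of nearby points; the block terms are mutually independent, which is where the parallelism comes from):

    import numpy as np

    def exp_cov(x, sill=1.0, length=0.3):
        d = np.abs(x[:, None] - x[None, :])
        return sill * np.exp(-d / length)

    def block_loglik(x, y, sill, length):
        """Exact Gaussian log-likelihood of one block (O(m^3) for block size m)."""
        C = exp_cov(x, sill, length) + 1e-8 * np.eye(x.size)
        _, logdet = np.linalg.slogdet(C)
        return -0.5 * (logdet + y @ np.linalg.solve(C, y) + x.size * np.log(2 * np.pi))

    def approx_loglik(x, y, sill, length, n_blocks=8):
        order = np.argsort(x)                  # group nearby points into blocks
        # Each term below is independent of the others, so on a multi-core
        # machine or a Grid every block could go to a separate worker.
        return sum(block_loglik(x[b], y[b], sill, length)
                   for b in np.array_split(order, n_blocks))

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, 1000)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
    print(approx_loglik(x, y, sill=1.0, length=0.3))

Maximizing this approximate likelihood over the covariance parameters gives a parallel variogram estimation step; replacing the cubic full-data solve with several smaller solves is what cuts the computational time.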
Abstract:
2000 Mathematics Subject Classification: 49J52, 49J50, 58C20, 26B09.
Abstract:
Due to relative ground movement, buried pipelines experience geotechnical loads. The imposed geotechnical loads may initiate pipeline deformations that affect system serviceability and integrity. Engineering guidelines (e.g., ALA, 2005; Honegger and Nyman, 2001) provide the technical framework to develop idealized structural models to analyze pipe-soil interaction events and assess pipe mechanical response. The soil behavior is modeled using discrete springs that represent the geotechnical loads per unit pipe length developed during the interaction event. Soil forces are defined along three orthogonal directions (i.e., axial, lateral and vertical) to analyze the response of pipelines. The nonlinear load-displacement relationship of the soil defined by each spring is independent of neighboring spring elements. However, recent experimental and numerical studies demonstrate significant coupling effects during oblique (i.e., not along one of the orthogonal axes) pipe-soil interaction events. In the present study, physical modeling using a geotechnical centrifuge was conducted to improve the current understanding of soil load coupling effects on buried pipes in loose and dense sand. A section of pipeline, at shallow burial depth, was translated through the soil at different oblique angles in the axial-lateral plane. The force exerted by the soil on the pipe is critically examined to assess the significance of load coupling effects and to establish a yield envelope. The displacements required to mobilize the soil yield force are also examined to assess potential coupling in mobilization distance. A set of laboratory tests was conducted on the sand used for centrifuge modeling to determine its stress-strain behavior, which was used to examine the possible mechanisms in the centrifuge model tests. The yield envelope, deformation patterns, and interpreted failure mechanisms obtained from centrifuge modeling are compared with other physical modeling and numerical simulations available in the literature.
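As a hedged illustration of load coupling, the sketch below assumes an elliptical axial-lateral yield envelope; the envelope shape actually measured in the centrifuge tests may differ:

    import numpy as np

    def oblique_yield(F_axial_max, F_lateral_max, angle_deg):
        """Yield force components at an oblique loading angle (0 = axial,
        90 = lateral) under an assumed elliptical interaction envelope
        (Fa / Fa_max)^2 + (Fl / Fl_max)^2 = 1."""
        th = np.radians(angle_deg)
        r = 1.0 / np.sqrt((np.cos(th) / F_axial_max) ** 2 +
                          (np.sin(th) / F_lateral_max) ** 2)
        return r * np.cos(th), r * np.sin(th)   # (axial, lateral) components

    for a in (0, 30, 60, 90):
        print(a, oblique_yield(10.0, 50.0, a))  # illustrative magnitudes, kN/m

Under uncoupled springs the axial capacity would be 10 kN/m at every loading angle; the envelope makes it angle dependent, which is the coupling effect the study quantifies.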