942 results for topological string
Abstract:
We present an application of birth-and-death processes on configuration spaces to a generalized mutation-selection balance model. The model describes the aging of a population as a process of accumulation of mutations in a genotype. A rigorous treatment demands that mutations correspond to points in abstract spaces. Our model describes an infinite-population, infinite-sites model in continuum. The dynamical equation which describes the system is of Kimura-Maruyama type. The problem can be posed in terms of evolution of states (differential equation) or, equivalently, represented in terms of a Feynman-Kac formula. The questions of interest are the existence of a solution, its asymptotic behavior, and properties of the limiting state. In the non-epistatic case the problem was posed and solved in [Steinsaltz D., Evans S.N., Wachter K.W., Adv. Appl. Math., 2005, 35(1)]. In our model we consider a topological space X as the space of positions of mutations and the influence of epistatic potentials.
Abstract:
The technique of linear responsibility analysis is used for a retrospective case study of a private industrial development consisting of an engineering factory and offices. A multi-disciplinary professional practice was used to manage and design the project. The organizational structure adopted on the project is analysed using concepts from systems theory which are included in Walker's theoretical model of the structure of building project organizations (Walker, 1981). This model proposes that the process of building provision can be viewed as systems and sub-systems which are differentiated from each other at decision points. Further to this, the sub-systematic analysis of the relationships between the contributors gives a quantitative assessment of the efficiency of the organizational structure used. There was a high level of satisfaction with the completed project, and this is reflected by the way in which the organization structure corresponded to the model's proposition. However, the project was subject to strong environmental forces which the project organization was not capable of entirely overcoming.
Abstract:
Visual exploration of scientific data in the life sciences is a growing research field due to the large amount of available data. Kohonen's Self-Organizing Map (SOM) is a widely used tool for the visualization of multidimensional data. In this paper we present a fast learning algorithm for SOMs that uses a simulated annealing method to adapt the learning parameters. The algorithm has been adopted in a data analysis framework for the generation of similarity maps. Such maps provide an effective tool for the visual exploration of large and multi-dimensional input spaces. The approach has been applied to data generated during High Throughput Screening of molecular compounds; the generated maps allow a visual exploration of molecules with similar topological properties. The experimental analysis on real-world data from the National Cancer Institute shows the speed-up of the proposed SOM training process in comparison to a traditional approach. The resulting visual landscape groups molecules with similar chemical properties into densely connected regions.
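To illustrate the kind of annealing-style parameter schedule the abstract describes, here is a minimal SOM training sketch in Python/NumPy. The exponential "temperature" that jointly cools the learning rate and neighbourhood radius is our own simplification for illustration; the paper's algorithm adapts these parameters by simulated annealing proper.

```python
import numpy as np

def train_som(data, grid_w=8, grid_h=8, n_iters=2000, lr0=0.5, seed=0):
    """Train a small SOM whose learning rate and neighbourhood radius are
    cooled together by an exponential, annealing-style "temperature".
    (A simplified schedule, not the paper's simulated-annealing adaptation.)"""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_w * grid_h, dim))
    # Grid coordinates of each unit, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)
    sigma0 = max(grid_w, grid_h) / 2.0
    for t in range(n_iters):
        temp = np.exp(-t / (n_iters / 4))   # "temperature", decays towards 0
        lr = lr0 * temp                     # learning rate follows the temperature
        sigma = max(sigma0 * temp, 0.5)     # neighbourhood radius shrinks with it
        x = data[rng.integers(len(data))]   # one random training sample
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))  # best matching unit
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian neighbourhood around the BMU
        weights += lr * h[:, None] * (x - weights)
    return weights

# Map 2-D points onto an 8x8 grid of units covering the input space.
data = np.random.default_rng(1).random((200, 2))
w = train_som(data)
```

The same loop works for higher-dimensional inputs (e.g., molecular descriptors); only `data.shape[1]` changes.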
Abstract:
These notes have been issued on a small scale in 1983 and 1987 and on request at other times. This issue follows two items of news. First, Walter Colquitt and Luther Welsh found the 'missed' Mersenne prime M110503 and advanced the frontier of complete Mp-testing to 139,267. In so doing, they terminated Slowinski's significant string of four consecutive Mersenne primes. Secondly, a team of five established a non-Mersenne number as the largest known prime. This result terminated the 1952-89 reign of Mersenne primes. All the original Mersenne numbers with p < 258 were factorised some time ago. The Sandia Laboratories team of Davis, Holdridge & Simmons with some little assistance from a CRAY machine cracked M211 in 1983 and M251 in 1984. They contributed their results to the 'Cunningham Project', care of Sam Wagstaff. That project is now moving apace thanks to developments in technology, factorisation and primality testing. New levels of computer power and new computer architectures motivated by the open-ended promise of parallelism are now available. Once again, the suppliers may be offering free buildings with the computer. However, the Sandia '84 CRAY-1 implementation of the quadratic-sieve method is now outpowered by the number-field sieve technique. This is deployed on either purpose-built hardware or large syndicates, even distributed world-wide, of collaborating standard processors. New factorisation techniques of both special and general applicability have been defined and deployed. The elliptic-curve method finds large factors with helpful properties while the number-field sieve approach is breaking down composites with over one hundred digits. The material is updated on an occasional basis to follow the latest developments in primality-testing large Mp and factorising smaller Mp; all dates derive from the published literature or referenced private communications. Minor corrections, additions and changes merely advance the issue number after the decimal point.
The reader is invited to report any errors and omissions that have escaped the proof-reading, to answer the unresolved questions noted and to suggest additional material associated with this subject.
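The "complete Mp-testing" referred to above rests on the Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime exactly when the sequence s_0 = 4, s_(k+1) = s_k^2 - 2 satisfies s_(p-2) ≡ 0 (mod M_p). A minimal sketch (ours; impractically slow at the M110503 scale without fast multiplication):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for M_p = 2**p - 1, valid for odd primes p:
    M_p is prime iff s_(p-2) == 0 (mod M_p), with s_0 = 4, s_(k+1) = s_k**2 - 2."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Small exponents: p = 3, 5, 7, 13 yield Mersenne primes; p = 11 does not.
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # → [3, 5, 7, 13]
```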
Abstract:
This paper reports three experiments that examine the role of similarity processing in McGeorge and Burton's (1990) incidental learning task. In the experiments subjects performed a distractor task involving four-digit number strings, all of which conformed to a simple hidden rule. They were then given a forced-choice memory test in which they were presented with pairs of strings and were led to believe that one string of each pair had appeared in the prior learning phase. Although this was not the case, one string of each pair did conform to the hidden rule. Experiment 1 showed that, as in the McGeorge and Burton study, subjects were significantly more likely to select test strings that conformed to the hidden rule. However, additional analyses suggested that rather than having implicitly abstracted the rule, subjects may have been selecting strings that were in some way similar to those seen during the learning phase. Experiments 2 and 3 were designed to try to separate out effects due to similarity from those due to implicit rule abstraction. It was found that the results were more consistent with a similarity-based model than with implicit rule abstraction per se.
Abstract:
Accuracy and mesh generation are key issues for the high-resolution hydrodynamic modelling of the whole Great Barrier Reef. Our objective is to generate suitable unstructured grids that can resolve topological and dynamical features like tidal jets and recirculation eddies in the wake of islands. A new strategy is suggested to refine the mesh in areas of interest taking into account the bathymetric field and an approximated distance to islands and reefs. Such a distance is obtained by solving an elliptic differential operator, with specific boundary conditions. Meshes produced illustrate both the validity and the efficiency of the adaptive strategy. Selection of refinement and geometrical parameters is discussed. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Actual energy paths of long, extratropical baroclinic Rossby waves in the ocean are difficult to describe simply because they depend on the meridional-wavenumber-to-zonal-wavenumber ratio tau, a quantity that is difficult to estimate both observationally and theoretically. This paper shows, however, that this dependence is actually weak over any interval in which the zonal phase speed varies approximately linearly with tau, in which case the propagation becomes quasi-nondispersive (QND) and describable at leading order in terms of environmental conditions (i.e., topography and stratification) alone. As an example, the purely topographic case is shown to possess three main kinds of QND ray paths. The first is a topographic regime in which the rays approximately follow the contours f/h^(alpha_c) = constant, where alpha_c is a near-constant fixed by the strength of the stratification, f is the Coriolis parameter, and h is the ocean depth. The second and third are, respectively, "fast" and "slow" westward regimes little affected by topography and associated with the first and second bottom-pressure-compensated normal modes studied in previous work by Tailleux and McWilliams. Idealized examples show that actual rays can often be reproduced with reasonable accuracy by replacing the actual dispersion relation by its QND approximation. The topographic regime provides an upper bound (in general a large overestimate) on the maximum latitudinal excursions of actual rays. The method presented in this paper is interesting for enabling an optimal classification of purely azimuthally dispersive wave systems into simpler idealized QND wave regimes, which helps to rationalize previous empirical findings that the ray paths of long Rossby waves in the presence of mean flow and topography often seem to be independent of the wavenumber orientation.
Two important side results are to establish that the baroclinic string function regime of Tyler and Käse is only valid over a tiny range of the topographic parameter and that long baroclinic Rossby waves propagating over topography do not obey any two-dimensional potential vorticity conservation principle. Given the importance of the latter principle in geophysical fluid dynamics, its absence in this case makes the concept of the QND regimes all the more important, for they are probably the only alternative for providing a simple and economical description of general purely azimuthally dispersive wave systems.
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality-reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, there has been little comparison between the Hilbert curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. The Hilbert curve, Sammon's mapping and Principal Component Analysis have been used to generate a 1-D space with locality-preserving properties. This work provides empirical evidence to support the use of the Hilbert curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2-D network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms the Hilbert curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
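The Hilbert-curve mapping at the core of this comparison can be sketched with the classic bit-manipulation formulation. The 16x16 grid, the treatment of latency vectors as already quantized to grid cells, and the row-major baseline are our own illustrative choices, not details from the paper.

```python
def hilbert_index(order, x, y):
    """Position of grid cell (x, y) along the Hilbert curve filling a
    2**order x 2**order grid (classic bit-manipulation formulation)."""
    side = 1 << order
    d = 0
    s = side // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so each sub-curve is oriented consistently.
        if ry == 0:
            if rx == 1:
                x = side - 1 - x
                y = side - 1 - y
            x, y = y, x
        s //= 2
    return d

# Quantize hypothetical 2-D landmark-latency vectors onto a 16x16 grid and
# compute the average index gap between grid-adjacent cells under the
# Hilbert ordering and under a plain row-major ordering.
side = 16
pairs = [((x, y), (x + 1, y)) for x in range(side - 1) for y in range(side)] + \
        [((x, y), (x, y + 1)) for x in range(side) for y in range(side - 1)]
hilbert_gap = sum(abs(hilbert_index(4, *a) - hilbert_index(4, *b)) for a, b in pairs) / len(pairs)
rowmajor_gap = sum(abs(a[0] * side + a[1] - b[0] * side - b[1]) for a, b in pairs) / len(pairs)
```

The defining locality property is that consecutive indices along the curve always map to grid-adjacent cells, which is what makes the scalar index usable as a locality-aware peer identifier.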
Abstract:
Background: We report an analysis of a protein network of functionally linked proteins, identified from a phylogenetic statistical analysis of complete eukaryotic genomes. Phylogenetic methods identify pairs of proteins that co-evolve on a phylogenetic tree, and have been shown to have a high probability of correctly identifying known functional links. Results: The eukaryotic correlated evolution network we derive displays the familiar power law scaling of connectivity. We introduce the use of explicit phylogenetic methods to reconstruct the ancestral presence or absence of proteins at the interior nodes of a phylogeny of eukaryote species. We find that the connectivity distribution of proteins at the point they arise on the tree and join the network follows a power law, as does the connectivity distribution of proteins at the time they are lost from the network. Proteins resident in the network acquire connections over time, but we find no evidence that 'preferential attachment' - the phenomenon of newly acquired connections in the network being more likely to be made to proteins with large numbers of connections - influences the network structure. We derive a 'variable rate of attachment' model in which proteins vary in their propensity to form network interactions independently of how many connections they have or of the total number of connections in the network, and show how this model can produce apparent power-law scaling without preferential attachment. Conclusion: A few simple rules can explain the topological structure and evolutionary changes to protein-interaction networks: most change is concentrated in satellite proteins of low connectivity and small phenotypic effect, and proteins differ in their propensity to form attachments. 
Given these rules of assembly, power-law-scaled networks naturally emerge from simple principles of selection, yielding protein interaction networks that retain a high degree of robustness on short time scales and evolvability on longer evolutionary time scales.
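The 'variable rate of attachment' idea can be sketched in a toy simulation: nodes receive edges in proportion to an intrinsic, heavy-tailed propensity that never depends on their current degree, and the degree distribution nonetheless becomes highly skewed. The propensity distribution and network sizes below are our own illustrative choices, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_edges = 500, 2000

# Intrinsic attachment propensities: heavy-tailed, fixed at "birth", and
# never updated as a function of degree (i.e., no preferential attachment).
propensity = rng.pareto(1.5, n_nodes) + 1.0
p = propensity / propensity.sum()

degree = np.zeros(n_nodes, dtype=int)
for _ in range(n_edges):
    # Endpoints chosen by propensity alone, ignoring current connectivity.
    a, b = rng.choice(n_nodes, size=2, replace=False, p=p)
    degree[a] += 1
    degree[b] += 1
```

Even though edge placement never inspects degree, a few high-propensity "hub" nodes accumulate far more connections than the typical node, mimicking the apparent power-law scaling discussed above.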
Abstract:
Neuropathic pain may arise following peripheral nerve injury though the molecular mechanisms associated with this are unclear. We used proteomic profiling to examine changes in protein expression associated with the formation of hyper-excitable neuromas derived from rodent saphenous nerves. A two-dimensional difference gel electrophoresis (2D-DIGE) profiling strategy was employed to examine protein expression changes between developing neuromas and normal nerves in whole tissue lysates. We found around 200 proteins which displayed a >1.75-fold change in expression between neuroma and normal nerve and identified 55 of these proteins using mass spectrometry. We also used immunoblotting to examine the expression of the low-abundance ion channels Nav1.3, Nav1.8 and the calcium channel alpha2delta-1 subunit in this model, since they have previously been implicated in neuronal hyperexcitability associated with neuropathic pain. Finally, (35)S-methionine in vitro labelling of neuroma and control samples was used to demonstrate local protein synthesis of neuron-specific genes. A number of cytoskeletal proteins, enzymes and proteins associated with oxidative stress were up-regulated in neuromas, whilst overall levels of voltage-gated ion channel proteins were unaffected. We conclude that altered mRNA levels reported in the somata of damaged DRG neurons do not necessarily reflect levels of altered proteins in hyper-excitable damaged nerve endings. An altered repertoire of protein expression, local protein synthesis and topological re-arrangements of ion channels may all play important roles in neuroma hyper-excitability.
Abstract:
In this paper, we give an overview of our studies by static and time-resolved X-ray diffraction of inverse cubic phases and phase transitions in lipids. In Section 1, we briefly discuss the lyotropic phase behaviour of lipids, focusing attention on non-lamellar structures, and their geometric/topological relationship to fusion processes in lipid membranes. Possible pathways for transitions between different cubic phases are also outlined. In Section 2, we discuss the effects of hydrostatic pressure on lipid membranes and lipid phase transitions, and describe how the parameters required to predict the pressure dependence of lipid phase transition temperatures can be conveniently measured. We review some earlier results on inverse bicontinuous cubic phases from our laboratory, showing effects such as pressure-induced formation and swelling. In Section 3, we describe the technique of pressure-jump synchrotron X-ray diffraction. We present results that have been obtained from the lipid system 1:2 dilauroylphosphatidylcholine/lauric acid for cubic-inverse hexagonal, cubic-cubic and lamellar-cubic transitions. The rate of transition was found to increase with the amplitude of the pressure-jump and with increasing temperature. Evidence for intermediate structures occurring transiently during the transitions was also obtained. In Section 4, we describe an IDL-based 'AXCESS' software package being developed in our laboratory to permit batch processing and analysis of the large X-ray datasets produced by pressure-jump synchrotron experiments. In Section 5, we present some recent results on the fluid lamellar-Pn3m cubic phase transition of the single-chain lipid 1-monoelaidin, which we have studied both by pressure-jump and temperature-jump X-ray diffraction. Finally, in Section 6, we give a few indicators of future directions of this research.
We anticipate that the most useful technical advance will be the development of pressure-jump apparatus on the microsecond time-scale, which will involve the use of a stack of piezoelectric pressure actuators. The pressure-jump technique is not restricted to lipid phase transitions, but can be used to study a wide range of soft matter transitions, ranging from protein unfolding and DNA unwinding and transitions, to phase transitions in thermotropic liquid crystals, surfactants and block copolymers.
Abstract:
Nanocomposites of high-density polyethylene (HDPE) and carbon nanotubes (CNT) of different geometries (single wall, double wall, and multiwall; SWNT, DWNT, and MWNT) were prepared by in situ polymerization of ethylene on CNT whose surface had been previously treated with a metallocene catalytic system. In this work, we have studied the effects of applying the successive self-nucleation and annealing thermal fractionation technique (SSA) to the nanocomposites and have also determined the influence of composition and type of CNT on the isothermal crystallization behavior of the HDPE. SSA results indicate that all types of CNT induce the formation of a population of thicker lamellar crystals that melt at higher temperatures as compared to the crystals formed in neat HDPE prepared under the same catalytic and polymerization conditions and subjected to the same SSA treatment. Furthermore, the peculiar morphology induced by the CNT on the HDPE matrix allows the resolution of thermal fractionation to be much better. The isothermal crystallization results indicated that the strong nucleation effect caused by CNT reduced the supercooling needed for crystallization. The interaction between the HDPE chains and the surface of the CNT is probably very strong as judged by the results obtained, even though it is only physical in nature. When the total crystallinity achieved during isothermal crystallization is considered as a function of CNT content, it was found that a competition between nucleation and topological confinement could account for the results. At low CNT content the crystallinity increases (because of the nucleating effect of CNT on HDPE), however, at higher CNT content there is a dramatic reduction in crystallinity reflecting the increased confinement experienced by the HDPE chains at the interfaces which are extremely large in these nanocomposites. 
Another consequence of these strong interactions is the remarkable decrease in the Avrami index as CNT content increases. When the Avrami index falls to 1 or lower, nucleation dominates the overall kinetics as a consequence of confinement effects. Wide-angle X-ray experiments were performed at a high-energy synchrotron source and demonstrated that no change in the orthorhombic unit cell of HDPE occurred during crystallization with or without CNT.
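The Avrami analysis referred to above fits the overall crystallization kinetics to X(t) = 1 - exp(-k * t^n), where the exponent n is the Avrami index; the index is usually recovered as the slope of a double-log plot. A minimal sketch on synthetic data (the values of n and k are arbitrary illustration choices, not measurements from the paper):

```python
import numpy as np

# Synthetic relative crystallinity obeying the Avrami equation
# X(t) = 1 - exp(-k * t**n); n and k are arbitrary illustration values.
n_true, k = 2.5, 0.01
t = np.linspace(0.5, 10.0, 50)
X = 1.0 - np.exp(-k * t ** n_true)

# Avrami (double-log) plot: ln(-ln(1 - X)) = ln k + n * ln t,
# so a straight-line fit recovers the Avrami index as the slope.
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - X)), 1)
```

On real isothermal DSC data the same fit is restricted to the primary-crystallization range, where the linear relation holds.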
Abstract:
The artificial grammar (AG) learning literature (see, e.g., Mathews et al., 1989; Reber, 1967) has relied heavily on a single measure of implicitly acquired knowledge. Recent work comparing this measure (string classification) with a more indirect measure in which participants make liking ratings of novel stimuli (e.g., Manza & Bornstein, 1995; Newell & Bright, 2001) has shown that string classification (which we argue can be thought of as an explicit, rather than an implicit, measure of memory) gives rise to more explicit knowledge of the grammatical structure in learning strings and is more resilient to changes in surface features and processing between encoding and retrieval. We report data from two experiments that extend these findings. In Experiment 1, we showed that a divided attention manipulation (at retrieval) interfered with explicit retrieval of AG knowledge but did not interfere with implicit retrieval. In Experiment 2, we showed that forcing participants to respond within a very tight deadline resulted in the same asymmetric interference pattern between the tasks. In both experiments, we also showed that the type of information being retrieved influenced whether interference was observed. The results are discussed in terms of the relatively automatic nature of implicit retrieval and also with respect to the differences between analytic and nonanalytic processing (Whittlesea & Price, 2001).
Abstract:
Rodney Brooks has been called the “Self-Styled Bad Boy of Robotics”. In the 1990s he gained this dubious honour by orchestrating a string of highly evocative robots from his Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT), Cambridge, USA.
Abstract:
The recursive circulant RC(2^n, 4) enjoys several attractive topological properties. Let max_epsilon(G)(m) denote the maximum number of edges in a subgraph of graph G induced by m nodes. In this paper, we show that max_epsilon(RC(2^n, 4))(m) = sum_{i=0}^{r} (p_i/2 + i)*2^{p_i}, where p_0 > p_1 > ... > p_r are the nonnegative integers defined by m = sum_{i=0}^{r} 2^{p_i}. We then apply this formula to find the bisection width of RC(2^n, 4). The conclusion shows that, like the n-dimensional cube, RC(2^n, 4) enjoys a linear bisection width. (c) 2005 Elsevier B.V. All rights reserved.
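The closed form quoted above can be spot-checked by exhaustive search on the smallest case, RC(8, 4), whose vertices are 0..7 with edges between vertices differing by ±1 or ±4 (mod 8). This brute-force check is our own sketch, not part of the paper:

```python
from itertools import combinations

def max_edges(m):
    """Closed form from the abstract: sum_i (p_i/2 + i) * 2**p_i,
    where m = 2**p_0 + ... + 2**p_r with p_0 > p_1 > ... > p_r >= 0."""
    ps = [i for i in range(m.bit_length()) if (m >> i) & 1][::-1]  # p_0 > p_1 > ...
    return int(sum((p / 2 + i) * 2 ** p for i, p in enumerate(ps)))

# Brute-force verification on RC(2**3, 4) = RC(8, 4):
# differences 1 and 4 (and 7 = -1 mod 8) between vertex labels give the edges.
N = 8
edges = [(a, b) for a, b in combinations(range(N), 2) if (b - a) % N in (1, 4, 7)]
for m in range(1, N + 1):
    best = max(sum(a in s and b in s for a, b in edges)
               for s in map(set, combinations(range(N), m)))
    assert best == max_edges(m)
```

For example, max_edges(8) = (3/2)*8 = 12, which matches the total edge count of the cubic graph RC(8, 4).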