929 results for 230112 Topology and Manifolds


Relevance:

100.00%

Publisher:

Abstract:

A fundamental question in protein folding is whether the coil-to-globule collapse transition occurs during the initial stages of folding (the burst phase) or simultaneously with the protein folding transition. Single-molecule fluorescence resonance energy transfer (FRET) and small-angle X-ray scattering (SAXS) experiments disagree on whether the Protein L collapse transition occurs during the burst phase of folding. We study Protein L folding using a coarse-grained model and molecular dynamics simulations. The collapse transition in Protein L is found to be concomitant with the folding transition. In the burst phase of folding, we find that FRET experiments overestimate the radius of gyration, R_g, of the protein because a Gaussian polymer-chain end-to-end distribution is applied to extract R_g from the FRET efficiency. FRET experiments estimate an approximately 6 Å decrease in R_g when the actual decrease is approximately 3 Å on dilution of the guanidinium chloride denaturant from 7.5 to 1 M, thereby suggesting pronounced compaction of the protein dimensions in the burst phase. The approximately 3 Å decrease is close to the statistical uncertainties of the R_g data measured in SAXS experiments, which suggest no compaction, leading to a disagreement with the FRET experiments. The transition-state ensemble (TSE) structures in Protein L folding are globular and extensive, in agreement with Ψ-analysis experiments. The results support the hypothesis that the TSE of single-domain proteins depends on protein topology and is not stabilized by local interactions alone.
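The Gaussian-chain extraction step criticized above can be made concrete. The sketch below (an illustration, not the authors' code) numerically inverts the Förster relation averaged over a Gaussian end-to-end distribution to recover R_g from a measured mean FRET efficiency; the Förster radius R0 and the efficiency values are assumed, illustrative numbers.

```python
import numpy as np

def mean_fret_efficiency(r2_mean, R0):
    # Average E(r) = 1/(1+(r/R0)^6) over the Gaussian-chain distribution
    # P(r) ~ r^2 exp(-3 r^2 / (2 <r^2>)).
    r = np.linspace(1e-3, 6.0 * np.sqrt(r2_mean), 4000)
    dr = r[1] - r[0]
    p = r**2 * np.exp(-1.5 * r**2 / r2_mean)
    p /= p.sum() * dr                      # normalize the distribution
    E = 1.0 / (1.0 + (r / R0)**6)
    return np.sum(p * E) * dr

def rg_from_efficiency(E_meas, R0):
    # Bisect on <r^2> (mean efficiency decreases as the chain expands),
    # then use R_g^2 = <r^2>/6, which holds for an ideal Gaussian chain.
    lo, hi = 1.0, (20.0 * R0)**2
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mean_fret_efficiency(mid, R0) > E_meas:
            lo = mid                       # too compact: increase <r^2>
        else:
            hi = mid
    return np.sqrt(0.5 * (lo + hi) / 6.0)
```

Any mismatch between the real (non-Gaussian) chain statistics and this assumed distribution propagates directly into the extracted R_g, which is the source of the overestimate discussed above.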

Abstract:

A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The various approaches developed so far, such as the outer product, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; and a requirement of full connectivity.

Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model, since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definitions, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call-rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
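As an illustration of the feedback idea (not the authors' Hopfield-circuit implementation), a discrete-time Winner-Take-All can be sketched with simple mutual inhibition; the inhibition strength and iteration count are arbitrary choices.

```python
import numpy as np

def winner_take_all(x, eps=0.2, steps=500):
    # Each unit is inhibited by the summed activity of all other units;
    # with a rectifying nonlinearity, only the largest input survives.
    a = np.asarray(x, dtype=float).copy()
    for _ in range(steps):
        others = a.sum() - a               # total activity of competitors
        a = np.maximum(0.0, a - eps * others)
    return a
```

Run on an input vector, the returned activity is nonzero only at the index of the largest element, the "valid state" that constraint feedback is meant to enforce.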

Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.

Abstract:

The main goal of this work is to give the reader a basic introduction to the subject of topological groups, bringing together the areas of topology and group theory.

Abstract:

A numerical simulation that accounts for the effects of stratification and scalar mixing (of, e.g., temperature, salinity, or a water-soluble substance) is needed to study and predict the environmental impacts that a hydroelectric power plant reservoir can produce. This work proposes a methodology for the study of environmental flows, mainly those in which knowledge of the interaction between stratification and mixing can give important insight into the phenomena that occur. To this end, tools for 3D numerical simulation of environmental flows are developed. A tetrahedral mesh generator for the reservoir and an algebraic turbulence model based on the Richardson number are the main tools developed. The main difficulty in generating a tetrahedral mesh of a reservoir is the non-uniform distribution of points arising from the disproportionate ratio between the horizontal and vertical scales of the reservoir. With this kind of point distribution, conventional tetrahedral mesh generation algorithms can become unstable. For this reason, an unstructured tetrahedral mesh generator is developed, and the methodology used to obtain conforming elements is described. The generation of a triangular surface mesh using Delaunay triangulation and the construction of the tetrahedra from the triangular mesh are the main steps of the mesh generator. The hydrodynamic simulation with the turbulence model provides a useful and computationally viable tool for engineering purposes. Moreover, the turbulence model based on the Richardson number accounts for the effects of the interaction between turbulence and stratification. The algebraic model is the simplest among the various turbulence models, but it provides realistic results with the adjustment of a small number of parameters. Eddy viscosity/diffusivity models for stratified flow are incorporated.
The Reynolds-averaged equations and scalar transport are approximated using the Finite Element Method. The convective terms are approximated using the semi-Lagrangian method, and the spatial approximation is based on the Galerkin method. The computational results are compared with results available in the literature. Finally, the simulation of the flow in a reservoir branch is presented.
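The semi-Lagrangian treatment of the convective term mentioned above can be sketched in one dimension (the thesis solver is a 3D finite-element code; this periodic, linear-interpolation version is purely illustrative):

```python
import numpy as np

def semi_lagrangian_step(phi, u, dx, dt):
    # One step of semi-Lagrangian advection on a periodic 1D grid:
    # trace each grid point back along the characteristic x - u*dt,
    # then linearly interpolate the scalar field at the departure point.
    n = phi.size
    x = np.arange(n) * dx
    xd = (x - u * dt) % (n * dx)            # departure points (periodic)
    i = np.floor(xd / dx).astype(int) % n   # left neighbor index
    w = xd / dx - np.floor(xd / dx)         # interpolation weight
    return (1.0 - w) * phi[i] + w * phi[(i + 1) % n]
```

Because the update only interpolates existing values, the scheme is unconditionally stable with respect to the time step, which is the practical appeal of the semi-Lagrangian approach for convection; linear interpolation does introduce some numerical diffusion.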

Abstract:

In order to better understand stratified combustion, the propagation of a flame through a stratified mixture field under laminar and turbulent flow conditions has been studied using combined PIV/PLIF techniques. Great emphasis was placed on developing methods to improve the accuracy of local measurements of flame propagation. In particular, a new PIV approach has been developed to measure the local fresh-gas velocity near the preheat zone of the flame front. To improve the resolution of the measurement, the shape of the interrogation window is continuously modified based on the local flame topology and the gas expansion effect. Statistical analysis of local measurements conditioned on the local equivalence ratio of the flames allows the characterization of the properties of flame propagation subjected to mixture stratification in laminar and turbulent flows, and especially highlights the memory effect.

Abstract:

Lattice materials are characterized at the microscopic level by a regular pattern of voids confined by walls. Recent rapid prototyping techniques allow their manufacture from a wide range of solid materials, ensuring high accuracy and limited costs. The microstructure of a lattice material makes it possible to obtain macroscopic properties and structural performance, such as very high stiffness-to-weight ratios, high anisotropy, high specific energy dissipation capability, and an extended elastic range, which cannot be attained by uniform materials. Among several applications, lattice materials are of special interest for the design of morphing structures, energy-absorbing components, and hard-tissue scaffolds for biomedical prostheses. Their macroscopic mechanical properties can be finely tuned by properly selecting the lattice topology and the material of the walls. Nevertheless, since the number of design parameters involved is very high, and their correlation to the final macroscopic properties of the material is quite complex, reliable and robust multiscale mechanics analysis and design optimization tools are a necessary aid for their practical application. In this paper, the optimization of lattice material parameters is illustrated with reference to the design of a bracket subjected to a point load. Given the geometric shape and the boundary conditions of the component, the parameters of four selected topologies have been optimized to concurrently maximize the component stiffness and minimize its mass. Copyright © 2011 by ASME.
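To make the stiffness-versus-mass trade-off concrete, here is a minimal weighted-sum sketch over a single design parameter (relative density), using standard Gibson-Ashby-type scaling exponents rather than the paper's four actual topologies; the constant C and the density range are assumed values.

```python
import numpy as np

# Gibson-Ashby-type scaling (standard result, not taken from the paper):
# stretching-dominated lattices: E/Es ~ C * rho, bending-dominated: E/Es ~ C * rho**3.
def pick_density(C, b, weights, rho=np.linspace(0.05, 0.5, 200)):
    # For each stiffness-vs-mass weight w, pick the relative density that
    # maximizes a weighted sum of normalized stiffness and negative mass.
    stiff = C * rho**b
    picks = []
    for w in weights:
        score = w * stiff / stiff.max() - (1.0 - w) * rho / rho.max()
        picks.append(rho[np.argmax(score)])
    return np.array(picks)
```

Sweeping the weight traces out the trade-off between the two objectives; the paper's concurrent stiffness/mass optimization over several topology parameters is the multi-dimensional version of this idea.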

Abstract:

This paper focuses on the stiffness and strength of lattices with multiple hierarchical levels. We examine two-dimensional and three-dimensional lattices with up to three levels of structural hierarchy. At each level, the topology and the orientation of the lattice are prescribed, while the relative density is varied over a defined range. The properties of selected hierarchical lattices are obtained via a multiscale approach applied iteratively at each hierarchical level. The results help to quantify the effect that multiple orders of structural hierarchy produce on stretching- and bending-dominated lattices. Material charts for the macroscopic stiffness and strength illustrate how the property range of the lattices can expand as subsequent levels of hierarchy are added. The charts help to gain insight into the structural benefit that multiple hierarchies can impart to the macroscopic performance of a lattice. © 2013 Elsevier Ltd. All rights reserved.

Abstract:

Design optimisation of compressor systems is a computationally expensive problem due to the large number of variables, the complicated design space and the expense of the analysis tools. One approach to reduce the expense of the process and make it achievable on industrial timescales is to employ multi-fidelity techniques, which use more rapid tools in conjunction with the highest-fidelity analyses. The complexity of the compressor design landscape is such that the starting point for these optimisations can influence the achievable results; these starting points are often existing (optimised) compressor designs, which form a limited set in terms of both quantity and diversity of design. To facilitate the multi-fidelity optimisation procedure, a compressor synthesis code was developed which allowed the performance attributes (e.g. stage loadings, inlet conditions) to be stipulated, enabling the generation of a variety of compressors covering a range of both design topology and quality to act as seeding geometries for the optimisation procedures. Analysis of the performance of the multi-fidelity optimisation system when restricting its exploration space to topologically different areas of the design space indicated little advantage over allowing the system to search the design space itself. However, comparing results from optimisations started from seed designs of different aerodynamic qualities indicated that improved performance could be achieved by starting an optimisation from a higher-quality point, and thus that the choice of starting point did affect the final outcome of the optimisations. Both investigations indicated that the performance gains through the optimisation were largely achieved in the early exploration of the design space, where the multi-fidelity speedup could be exploited; extending this region is therefore likely to have the greatest effect on the performance of the optimisation system.
© 2013 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.


Abstract:

The Zoige wetland, in the northeastern part of the Qinghai-Tibetan Plateau at an average altitude of 3,400-3,600 m, is the natural watershed of the Yangtze River and the Yellow River. Large areas of herbaceous marsh, subalpine swampy meadow and alpine lakes have developed in this region. Due to its high elevation, transitional topology and high climatic fluctuation, the Zoige wetlands represent one of the most typical fragile wetland ecosystems in China. As a result of the remote location and harsh environmental conditions, research on the Zoige wetland remains relatively scarce, especially on methane emission from the littoral zones of alpine lakes. This work reviews recent domestic and international progress on methane emission processes in wetland ecosystems, research methods, and the factors influencing wetland methane emission, and uses the static chamber - gas chromatography (GC) method to measure methane emission from different wetland types in the littoral zone of a typical alpine lake on the Zoige Plateau, further examining the factors that influence these emissions. It is concluded that:
1. The average methane emission rate in the littoral zone of Huahu Lake, Zoige Plateau, during the growing season (June to August) is 0.315 mg·m⁻²·h⁻¹, with evident spatial and temporal variations. The littoral zone shows different methane effluxes of -0.054, 0.471, and 0.493 mg·CH4·m⁻²·h⁻¹ in June, July and August, respectively. Different types of wetland also have different methane emission rates, with values of 0.464, 0.477, and 0.005 mg·CH4·m⁻²·h⁻¹ for Polygonum amphibium wetland, shoal, and Kobresia tibetica meadow, respectively.
2. The soil temperature at 10 cm depth is significantly correlated with the methane effluxes in the littoral zone of Huahu Lake, and is one of the most important factors influencing methane emission from this region. The activity of soil microorganisms rises at higher soil temperature, which accelerates oxygen consumption and lowers the redox potential (Eh); this favors the growth of methanogens and enhances methane production in the soil.
3. The correlation between standing water level and methane effluxes from the littoral zone of Huahu Lake is not significant. Standing water enhances the anaerobic conditions of the wetland soil and strengthens the activity of methanogens, thus promoting methane production, which is released to the atmosphere by diffusion, ebullition and transport through aerenchymal plants. However, as the water level deepens, more of the methane produced in the soil is oxidized while crossing the water layer by ebullition or diffusion, which reduces the methane released to the atmosphere.
4. The height and aboveground biomass of the vegetation are not significantly related to the methane effluxes from the littoral zone of Huahu Lake. Vegetation provides substrates for methanogens through litter and root exudates, acts as a transport pathway for methane between the soil and the atmosphere, and influences the methane emission of wetland ecosystems together with other environmental factors.

Abstract:

Surface plasmon resonances of arrays of parallel copper nanowires, embedded in ion-track-etched polycarbonate membranes, were investigated through systematic changes of the nanowire topology and the array area density. The extinction spectra exhibit two peaks, which are attributed to interband transitions of bulk Cu metal and to a dipolar surface plasmon resonance, respectively. The resonances were investigated as a function of wire diameter and length, mean distance between adjacent wires, and angle of incidence of the light field with respect to the long wire axis. The dipolar peak shifts to larger wavelengths with increasing diameter and length and with diminishing mean distance between adjacent wires. Additionally, the effect of wire shape on the dipolar peak is investigated.

Abstract:

The complete mitochondrial DNA (mtDNA) cytochrome b gene (1140 bp) was sequenced in Herzenstein macrocephalus and Gymnocypris namensis and in 13 other species and sub-species (n = 22), representing four closely related genera in the subfamily Schizothoracinae. Conflicting taxonomies of H. macrocephalus and G. namensis have been proposed because of character instability among individuals. Parsimony, maximum likelihood and Bayesian methods produced phylogenetic trees with the same topology and resolved several distinctive clades. Previous taxonomic treatments, which variously placed these two species in separate genera or as sub-species, are inconsistent with the mtDNA phylogeny. Both H. macrocephalus and G. namensis appear in a well-supported clade, which also includes nine species of Schizopygopsis, and hence should be transferred to the genus Schizopygopsis. Morphological changes are further illustrated, and their adaptive evolution in response to local habitat shifts during the speciation process appears to be responsible for the conflicting views on the systematics of these two species and hence the contrasting taxonomic treatments. These species are endemic to the Qinghai-Tibetan Plateau, a region with a history of geological activity and a rich diversity of habitats that may have resulted in the parallel and reversed evolution of some of the morphological characters used in their taxonomies. Our results further suggest that speciation and morphological evolution of fishes in this region may be more complex than previously expected. © 2007 The Authors. Journal compilation © 2007 The Fisheries Society of the British Isles.

Abstract:

We report a 75 dB, 2.8 μW, 100 Hz-10 kHz envelope detector in a 1.5 μm, 2.8 V CMOS technology. The envelope detector performs input-dc-insensitive voltage-to-current-converting rectification followed by novel nanopower current-mode peak detection. The use of a subthreshold wide-linear-range transconductor (WLR OTA) allows greater than 1.7 Vpp input voltage swings. We show theoretically that this optimal performance is technology-independent for the given topology and may be improved only by spending more power. A novel circuit topology is used to perform 140 nW peak detection with controllable attack and release time constants. The lower limits of envelope detection are determined by the more dominant of two effects: the first is caused by the inability of amplified high-frequency signals to exceed the dead zone created by exponential nonlinearities in the rectifier; the second is due to an output current caused by thermal-noise rectification. We demonstrate good agreement of experimentally measured results with theory. The envelope detector is useful in low-power bionic implants for the deaf, hearing aids, and speech-recognition front ends. Extension of the envelope detector to higher-frequency applications is straightforward if power consumption is increased.
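A software analogue of a peak detector with separate attack and release time constants can illustrate the behaviour (the paper describes an analog current-mode circuit; this discrete-time sketch is not its implementation, and the time constants below are assumed values):

```python
import numpy as np

def envelope(x, fs, attack_s=0.005, release_s=0.050):
    # One-pole peak detector: charge quickly toward the rectified input
    # (attack) and discharge slowly when the input falls below the
    # current envelope (release).
    a = np.exp(-1.0 / (fs * attack_s))
    r = np.exp(-1.0 / (fs * release_s))
    env = np.zeros(len(x))
    prev = 0.0
    for i, v in enumerate(np.abs(np.asarray(x, dtype=float))):
        coeff = a if v > prev else r       # fast attack, slow release
        prev = coeff * prev + (1.0 - coeff) * v
        env[i] = prev
    return env
```

The attack constant sets how fast the detector follows rising signal peaks, the release constant how fast it decays between them, the same controllability attributed to the 140 nW peak detector above.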

Abstract:

Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When a number of such connections share a common endpoint, that endpoint has the opportunity to correlate these end-to-end measurements to better diagnose and control the use of shared resources. A valuable characterization of such shared resources is the "loss topology". From the perspective of a server with concurrent connections to multiple clients, the loss topology is a logical tree rooted at the server in which edges represent lossy paths between a pair of internal network nodes. We develop an end-to-end unicast packet probing technique and an associated analytical framework to: (1) infer loss topologies, (2) identify loss rates of links in an existing loss topology, and (3) augment a topology to incorporate the arrival of a new connection. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that our approach is robust in terms of its accuracy and convergence over a wide range of network conditions.
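A toy version of the underlying idea, that losses correlated across connections point to a shared lossy link, can be sketched as follows (the probe technique and analytical framework in the paper are far more elaborate; the loss probabilities here are assumed values):

```python
import random

def simulate_losses(n, p_shared, p_private_a, p_private_b, seed=7):
    # Two connections from one server: a packet is lost if the shared
    # link drops it or the client's private link does (toy loss model).
    rng = random.Random(seed)
    a, b = [], []
    for _ in range(n):
        shared = rng.random() < p_shared
        a.append(shared or rng.random() < p_private_a)
        b.append(shared or rng.random() < p_private_b)
    return a, b

def loss_correlation_ratio(a, b):
    # P(both lost) / (P(a lost) * P(b lost)): ratios well above 1
    # suggest a shared lossy edge in the loss topology; ~1 suggests
    # the losses occur on independent (private) links.
    n = float(len(a))
    pa, pb = sum(a) / n, sum(b) / n
    pab = sum(1 for x, y in zip(a, b) if x and y) / n
    return pab / (pa * pb)
```

Repeating such pairwise tests across all concurrent connections is, in spirit, how a logical loss tree rooted at the server can be grown.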

Abstract:

The effectiveness of service provisioning in large-scale networks is highly dependent on the number and location of service facilities deployed at various hosts. The classical, centralized approach to determining the latter would amount to formulating and solving the uncapacitated k-median (UKM) problem (if the requested number of facilities is fixed), or the uncapacitated facility location (UFL) problem (if the number of facilities is also to be optimized). Clearly, such centralized approaches require knowledge of global topological and demand information, and thus do not scale and are not practical for large networks. The key question posed and answered in this paper is the following: "How can we determine, in a distributed and scalable manner, the number and location of service facilities?" We propose an innovative approach in which topology and demand information is limited to neighborhoods, or balls of small radius around selected facilities, whereas demand information is captured implicitly for the remaining (remote) clients outside these neighborhoods by mapping them to clients on the edge of the neighborhood; the ball radius regulates the trade-off between scalability and performance. We develop a scalable, distributed approach that answers our key question through an iterative reoptimization of the location and the number of facilities within such balls. We show that even for small values of the radius (1 or 2), our distributed approach achieves performance, under various synthetic and real Internet topologies, that is comparable to that of optimal, centralized approaches requiring full topology and demand information.
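For contrast with the distributed method, the centralized baseline can be sketched as a classic single-swap local search for the k-median problem; the toy line-graph instance in the test is an assumed example, not from the paper.

```python
from itertools import product

def total_cost(dist, facilities, clients):
    # Each client is served by its nearest open facility.
    return sum(min(dist[c][f] for f in facilities) for c in clients)

def k_median_local_search(dist, k, nodes):
    # Classic single-swap local search for the uncapacitated k-median
    # problem: repeatedly swap one open facility for a closed node
    # whenever that strictly lowers the total service cost. This is the
    # centralized baseline; the paper's contribution is a distributed
    # variant restricted to small balls around each facility.
    current = set(nodes[:k])
    improved = True
    while improved:
        improved = False
        base = total_cost(dist, current, nodes)
        for f_out, f_in in product(list(current), nodes):
            if f_in in current:
                continue
            cand = (current - {f_out}) | {f_in}
            if total_cost(dist, cand, nodes) < base:
                current, improved = cand, True
                break
    return current
```

Note that every swap evaluation touches the full distance matrix, exactly the global-information requirement the ball-restricted distributed reoptimization avoids.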