857 results for large-scale structures, filaments, clusters, radio galaxy, diffuse emission
Abstract:
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales of soft matter in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics underestimates the thermodynamic behaviors of soft matter (e.g., microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed that retains the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated against all-atom MD simulations. The new algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.
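For background, the role any stochastic thermostat plays, balancing friction against random kicks so that a target temperature is maintained, can be sketched with the standard Langevin (Euler-Maruyama) scheme. This is a generic illustration, not the paper's new algorithm:

```python
import numpy as np

def langevin_step(x, v, force, m=1.0, gamma=1.0, kT=1.0, dt=1e-3, rng=None):
    """One Euler-Maruyama step of a standard Langevin thermostat (background
    illustration only, NOT the paper's new algorithm): friction drains kinetic
    energy, random kicks inject it, and the fluctuation-dissipation balance
    sqrt(2*gamma*kT*m/dt) fixes the temperature."""
    rng = np.random.default_rng() if rng is None else rng
    noise = np.sqrt(2.0 * gamma * kT * m / dt) * rng.standard_normal(v.shape)
    a = (force(x) - gamma * m * v + noise) / m
    v = v + dt * a
    x = x + dt * v
    return x, v
```

In a long run with a confining force, the average kinetic energy per degree of freedom settles near kT/2; reproducing this equipartition at a coarse-grained level is exactly the property the abstract is concerned with.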
Abstract:
Project work can involve multiple people from varying disciplines coming together to solve problems as a group. Large-scale interactive displays present new opportunities to support such interactions with interactive and semantically enabled cooperative work tools such as intelligent mind maps. In this paper, we present a novel digital, touch-enabled mind-mapping tool as a first step towards achieving such a vision. This first prototype allows an evaluation of the benefits of a digital environment for a task that would otherwise be performed on paper or flat interactive surfaces. Observations and surveys of 12 participants in 3 groups allowed the formulation of several recommendations for further research into: new methods for capturing text input on touch screens; inclusion of complex structures; multi-user environments and how users make the shift from single-user applications; and how best to navigate large screen real estate in a touch-enabled, co-present multi-user setting.
Abstract:
PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought, so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller creating ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details of its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users, for developers, and for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program.
The dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; verification and validation of models are also facilitated by quickly setting up alternative simulations.
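The composition idea, atomic units assembled at runtime into an agent, with new units added without touching existing code, can be sketched in a few lines. The class names here are illustrative, not MODAM's actual API:

```python
class Agent:
    """An agent assembled at runtime from atomic components; a sketch of the
    dynamic-composition idea with illustrative names (not MODAM's API)."""
    def __init__(self, *components):
        self.components = list(components)

    def add(self, component):
        """Extend the agent without modifying previously written code."""
        self.components.append(component)
        return self

    def step(self, t):
        for c in self.components:
            c.act(self, t)

class Battery:
    """One atomic unit: holds state and a behaviour."""
    def __init__(self):
        self.charge = 0.0
    def act(self, agent, t):
        self.charge += 1.0                     # toy charging rule

class Meter:
    """Another atomic unit, developed independently, observing its peers."""
    def __init__(self):
        self.readings = []
    def act(self, agent, t):
        for c in agent.components:
            if isinstance(c, Battery):
                self.readings.append((t, c.charge))
```

Because each unit only depends on the shared `act` protocol, a modeller can mix and match units into an agent at setup time rather than programming a new agent class per scenario.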
Abstract:
We present a new computationally efficient method for large-scale polypeptide folding using coarse-grained elastic networks and gradient-based continuous optimization techniques. The folding is governed by minimization of an energy based on Miyazawa–Jernigan contact potentials. Using this method we are able to substantially reduce the computation time on ordinary desktop computers for simulation of polypeptide folding starting from a fully unfolded state. We compare our results with available native-state structures from the Protein Data Bank (PDB) for a few de novo proteins and two natural proteins, Ubiquitin and Lysozyme. Based on our simulations we are able to draw the energy landscape for a small de novo protein, Chignolin. We also use two well-known molecular modeling software packages, MODELLER and GROMACS, to compare our results. Finally, we show how a modification of the normal elastic network model leads to higher accuracy and lower simulation time.
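The general approach, minimizing a total energy made of stiff bonded (elastic-network) terms plus pairwise contact terms, can be sketched as below. The smooth contact switch and the plain forward-difference gradient descent are simplifying assumptions of ours, not the exact Miyazawa–Jernigan table or the authors' optimizer:

```python
import numpy as np

def contact_energy(x, eps, r_c=1.5):
    """Pairwise contact energy: residue pair (i, j) contributes eps[i, j]
    weighted by a smooth distance switch (a simplified stand-in for a
    Miyazawa-Jernigan-style contact potential, not the exact table)."""
    e = 0.0
    n = len(x)
    for i in range(n):
        for j in range(i + 2, n):              # skip bonded neighbours
            d = np.linalg.norm(x[i] - x[j])
            e += eps[i, j] / (1.0 + np.exp(4.0 * (d - r_c)))
    return e

def bond_energy(x, r0=1.0, k=100.0):
    """Stiff harmonic bonds along the chain (the elastic-network part)."""
    d = np.linalg.norm(np.diff(x, axis=0), axis=1)
    return 0.5 * k * np.sum((d - r0) ** 2)

def fold(x0, eps, steps=200, lr=1e-3, h=1e-5):
    """Gradient descent on E = bonds + contacts using forward-difference
    gradients (a real solver would use analytic gradients and a smarter
    optimizer; this only illustrates the energy-minimization idea)."""
    x = x0.copy()
    for _ in range(steps):
        e0 = bond_energy(x) + contact_energy(x, eps)
        g = np.zeros_like(x)
        for idx in np.ndindex(*x.shape):
            x[idx] += h
            g[idx] = (bond_energy(x) + contact_energy(x, eps) - e0) / h
            x[idx] -= h
        x -= lr * g
    return x
```

Starting from an extended chain with attractive contacts, descent collapses the chain and lowers the total energy, which is the folding-as-minimization picture the abstract describes.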
Abstract:
Light scattering, or scattering and absorption of electromagnetic waves, is an important tool in all remote-sensing observations. In astronomy, the light scattered or absorbed by a distant object can be the only source of information. In Solar-system studies, light-scattering methods are employed when interpreting observations of atmosphereless bodies such as asteroids, atmospheres of planets, and cometary or interplanetary dust. Our Earth is constantly monitored from artificial satellites at different wavelengths. In remote sensing of the Earth, light-scattering methods are not the only source of information: there is always the possibility of making in situ measurements. Satellite-based remote sensing is, however, superior in speed and coverage, provided the scattered signal can be reliably interpreted. The optical properties of many industrial products play a key role in their quality. Especially for products such as paint and paper, the ability to obscure the background and to reflect light is of utmost importance. High-grade papers are evaluated based on their brightness, opacity, color, and gloss. In product development, there is a need for computer-based simulation methods that could predict the optical properties and, therefore, could be used to optimize quality while reducing material costs. With paper, for instance, pilot experiments with an actual paper machine can be very time- and resource-consuming. The light-scattering methods presented in this thesis solve rigorously the interaction of light with materials having wavelength-scale structures. These methods are computationally demanding, so their speed and accuracy play a key role. Different implementations of the discrete-dipole approximation are compared in the thesis, and the results provide practical guidelines for choosing a suitable code.
In addition, a novel method is presented for the numerical computations of orientation-averaged light-scattering properties of a particle, and the method is compared against existing techniques. Simulation of light scattering for various targets and the possible problems arising from the finite size of the model target are discussed in the thesis. Scattering by single particles and small clusters is considered, as well as scattering in particulate media, and scattering in continuous media with porosity or surface roughness. Various techniques for modeling the scattering media are presented and the results are applied to optimizing the structure of paper. However, the same methods can be applied in light-scattering studies of Solar-system regoliths or cometary dust, or in any remote-sensing problem involving light scattering in random media with wavelength-scale structures.
Abstract:
In an earlier study, we reported on the excitation of large-scale vortices in Cartesian hydrodynamical convection models subject to sufficiently rapid rotation. In that study, the conditions for the onset of the instability were investigated in terms of the Reynolds (Re) and Coriolis (Co) numbers in models located at the stellar North pole. In this study, we extend our investigation to varying domain sizes and increasing stratification, and place the box at different latitudes. The effect of increasing box size is to increase the sizes of the generated structures, so that the principal vortex always fills roughly half of the computational domain. The instability becomes stronger in the sense that the temperature anomaly and the change in the radial velocity are enhanced. The model with the smallest box size is found to be stable against the instability, suggesting that sufficient scale separation between the convective eddies and the scale of the domain is required for the instability to work. The instability can be seen up to a colatitude of 30 degrees, beyond which the flow becomes dominated by other types of mean flows. The instability can also be seen in a model with larger stratification. Unlike in the weakly stratified cases, the temperature anomaly caused by the vortex structures is seen to depend on depth.
Abstract:
We present the first results of an observational programme undertaken to map the fine-structure line emission of singly ionized carbon ([CII] 157.7409 μm) over extended regions using a Fabry-Perot spectrometer newly installed at the focal plane of a 100 cm balloon-borne far-infrared telescope. This new combination of instruments has a velocity resolution of ~200 km s^-1 and an angular resolution of 1.5 arcmin. During the first flight, an area of 30' x 15' in Orion A was mapped. These observations extend over a larger area than previous observations, the map is fully sampled, and the spectral scanning method used enables reliable estimation of the continuum emission at frequencies adjacent to the [CII] line. The total [CII] line luminosity, calculated by considering up to 20% of the maximum line intensity, is 0.04% of the luminosity of the far-infrared continuum. We have compared the [CII] intensity distribution with the velocity-integrated intensity distributions of 13CO(1-0), [CI](1-0) and CO(3-2) from the literature. Comparison of the [CII], [CI] and radio continuum intensity distributions indicates that the large-scale [CII] emission originates mainly from the neutral gas, except at the position of M 43, where no [CI] emission corresponding to the [CII] emission is seen; a substantial part of the [CII] emission from here originates from the ionized gas. The observed line intensities and ratios have been analyzed using the PDR models of Kaufman et al. (1999) to derive the incident UV flux and volume density at a few selected positions. The models reproduce the observations reasonably well at most positions except the [CII] peak (which coincides with the position of theta(1) Ori C). A possible reason for the failure is the simplifying assumption of a homogeneous plane-parallel slab in place of a more complicated geometry.
Abstract:
Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high-density regions or clusters from individual class-conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation is heavily dependent on the second-order statistics. However, this formulation, and in general such relaxations that depend on the second-order moments, are susceptible to moment estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of a spherical normal distribution of clusters, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with true moments, even when moment estimates are erroneous. Results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
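The generic relaxation such formulations build on is the one-sided multivariate Chebyshev bound: for any distribution with mean mu and covariance Sigma, the chance constraint P(w.x + b >= 0) >= eta is guaranteed whenever w.mu + b >= kappa * sqrt(w' Sigma w) with kappa = sqrt(eta / (1 - eta)), which is a second-order cone constraint in (w, b). A minimal check of that bound (the function name is ours, not from the paper):

```python
import numpy as np

def chebyshev_margin(w, b, mu, sigma, eta):
    """Slack of the one-sided multivariate Chebyshev relaxation: if
        w.mu + b >= kappa * sqrt(w' Sigma w),  kappa = sqrt(eta / (1 - eta)),
    then P(w.x + b >= 0) >= eta for every distribution with mean mu and
    covariance Sigma. Returns lhs - rhs, so >= 0 means the chance
    constraint is guaranteed distribution-free."""
    kappa = np.sqrt(eta / (1.0 - eta))
    return float(w @ mu + b - kappa * np.sqrt(w @ sigma @ w))
```

The sensitivity the abstract discusses is visible here: the slack depends on mu and Sigma directly, so estimation errors in either moment shift the feasible set, which is what the robust confidence-set variants guard against.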
Abstract:
Spatial modulation (SM) is attractive for multiantenna wireless communications. SM uses multiple transmit antenna elements but only one transmit radio frequency (RF) chain. In SM, in addition to the information bits conveyed through conventional modulation symbols (e.g., QAM), the index of the active transmit antenna also conveys information bits. In this paper, we establish that SM has a significant signal-to-noise ratio (SNR) advantage over conventional modulation in large-scale multiuser multiple-input multiple-output (MIMO) systems. Our new contribution in this paper addresses the key issue of large-dimension signal processing at the base station (BS) receiver (e.g., signal detection) in large-scale multiuser SM-MIMO systems, where each user is equipped with multiple transmit antennas (e.g., 2 or 4 antennas) but only one transmit RF chain, and the BS is equipped with tens to hundreds of (e.g., 128) receive antennas. Specifically, we propose two novel algorithms for detection of large-scale SM-MIMO signals at the BS; one is based on message passing and the other on local search. The proposed algorithms achieve very good performance and scale well. For the same spectral efficiency, multiuser SM-MIMO outperforms conventional multiuser MIMO (recently referred to as massive MIMO) by several dB. The SNR advantage of SM-MIMO over massive MIMO can be attributed to the following: (i) because of the spatial index bits, SM-MIMO can use a lower-order QAM alphabet than massive MIMO to achieve the same spectral efficiency, and (ii) for the same spectral efficiency and QAM size, massive MIMO needs more spatial streams per user, which leads to increased spatial interference.
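The SM encoding described above, some bits choose the active antenna, the rest choose the QAM symbol, can be sketched as follows (an illustrative mapping of ours, not the paper's detection algorithms):

```python
import numpy as np

def sm_map(bits, n_tx, qam):
    """Map one group of log2(n_tx) + log2(|qam|) bits to an SM transmit
    vector: the first bits select the single active antenna, the rest
    select the QAM symbol. Only one entry is nonzero, so a single transmit
    RF chain suffices."""
    ka = int(np.log2(n_tx))                      # antenna-index bits
    km = int(np.log2(len(qam)))                  # modulation bits
    assert len(bits) == ka + km
    antenna = int("".join(map(str, bits[:ka])), 2)
    symbol = qam[int("".join(map(str, bits[ka:])), 2)]
    x = np.zeros(n_tx, dtype=complex)
    x[antenna] = symbol                          # single active antenna
    return x
```

With 4 transmit antennas and 4-QAM, each channel use carries 2 index bits plus 2 symbol bits; the 2 "free" index bits are what let SM drop to a lower-order QAM alphabet at a fixed rate.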
Abstract:
Generalized spatial modulation (GSM) uses n_t transmit antenna elements but fewer transmit radio frequency (RF) chains, n_rf. Spatial modulation (SM) and spatial multiplexing are special cases of GSM with n_rf = 1 and n_rf = n_t, respectively. In GSM, in addition to conveying information bits through n_rf conventional modulation symbols (for example, QAM), the indices of the n_rf active transmit antennas also convey information bits. In this paper, we investigate GSM for large-scale multiuser MIMO communications on the uplink. Our contributions in this paper include: 1) an average bit error probability (ABEP) analysis for maximum-likelihood detection in multiuser GSM-MIMO on the uplink, where we derive an upper bound on the ABEP, and 2) low-complexity algorithms for GSM-MIMO signal detection and channel estimation at the base station receiver based on message passing. The analytical upper bounds on the ABEP are found to be tight at moderate to high signal-to-noise ratios (SNR). The proposed receiver algorithms are found to scale very well in complexity while achieving near-optimal performance in large dimensions. Simulation results show that, for the same spectral efficiency, multiuser GSM-MIMO can outperform multiuser SM-MIMO as well as conventional multiuser MIMO by about 2 to 9 dB at a bit error rate of 10^-3. Such SNR gains in GSM-MIMO compared to SM-MIMO and conventional MIMO can be attributed to the fact that, because of a larger number of spatial index bits, GSM-MIMO can use a lower-order QAM alphabet, which is more power efficient.
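The rate accounting behind that comparison, index bits from choosing n_rf of n_t antennas plus n_rf QAM symbols, is easy to sketch (a standard GSM bit count, not the paper's detector):

```python
from math import comb, floor, log2

def gsm_bits_per_use(n_t, n_rf, M):
    """Bits per channel use in GSM: floor(log2(C(n_t, n_rf))) antenna-index
    bits plus n_rf symbols from an M-ary QAM alphabet. SM is the n_rf = 1
    special case; spatial multiplexing is n_rf = n_t."""
    index_bits = floor(log2(comb(n_t, n_rf)))
    symbol_bits = n_rf * int(log2(M))
    return index_bits + symbol_bits
```

At a fixed total rate, a larger index-bit share lets the QAM order drop, which is the power-efficiency mechanism behind the quoted 2 to 9 dB gains.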
Abstract:
A numerical study of the three-dimensional evolution of wake-type flow and vortex dislocations is performed by using a compact finite difference-Fourier spectral method to solve the 3-D incompressible Navier-Stokes equations. A local spanwise nonuniformity in momentum defect is imposed on the incoming wake-type flow. The present numerical results show that the flow instability leads to three-dimensional vortex streets, whose frequency, phase, and strength vary along the span owing to the local nonuniformity. The vortex dislocations are generated in the nonuniform region, and the large-scale chain-like vortex linkage structures in the dislocations are shown. The generation and characteristics of the vortex dislocations are described in detail.
Teracluster LSSC-II - Its Designing Principles and Applications in Large Scale Numerical Simulations
Abstract:
The teracluster LSSC-II, installed at the State Key Laboratory of Scientific and Engineering Computing, Chinese Academy of Sciences, is one of the most powerful PC clusters in China. It has a peak performance of 2 Tflops. With a Linpack performance of 1.04 Tflops, it was ranked 43rd in the 20th TOP500 List (November 2002) and 51st in the 21st TOP500 List (June 2003), and 82nd in the 22nd TOP500 List (November 2003) with a new Linpack performance of 1.3 Tflops. In this paper, we present some design principles of this cluster, as well as its applications in some large-scale numerical simulations.
Abstract:
Nanostructured ZnO materials are of great significance for their potential applications in photoelectronic devices, light-emitting displays, catalysis and gas sensors. In this paper, we report a new method to produce large-area periodic bowl-like micropatterns of single-crystal ZnO through aqueous-phase epitaxial growth on a ZnO single-crystal substrate. A self-assembled monolayer of polystyrene microspheres was used as a template to confine the epitaxial growth of single-crystal ZnO from the substrate, while the growth morphology was controlled by citrate anions. Moreover, it was found that the self-assembled monolayer of colloidal spheres plays an important role in reducing the defect density in the epitaxial ZnO layer. Though the mechanism is still open to further investigation, the present result indicates a new route to suppressing dislocations in the fabrication of single-crystal ZnO film. A predictable application of this new method is the fabrication of two-dimensional photonic crystal structures on light-emitting diode surfaces.
Abstract:
As the resolution of observations has improved, more and more radio galaxies with radio jets have been identified, and many fine structures in the radio jets resolved. In the present paper, two-dimensional magnetohydrodynamical theory is applied to the analysis of the magnetic field configurations in the radio jets. The two-dimensional results are not only theoretically consistent, but also explain the fine structures seen in observations. One of the theoretical models is discussed in detail and is in good agreement with the observed radio jets of NGC 6251. The results of the present paper also show that the magnetic fields in the radio jets are mainly longitudinal and associated with the double sources of QSOs if the magnetic field of the central object is stronger; the fields in the radio jets are mainly transverse and associated with the double sources of radio galaxies if the field of the central object is weaker. The magnetic field has great influence on the morphology and dynamic process.
Abstract:
About 1,200 ha of hydrilla (Hydrilla verticillata (L.f.) Royle) was eliminated in the Spring Creek embayment of Lake Seminole, Georgia, using a drip-delivery application of fluridone (1-methyl-3-phenyl-5-[3-(trifluoromethyl)phenyl]-4(1H)-pyridinone) in 2000 and 2001. Two groups of 15 and 20 largemouth bass (Micropterus salmoides Lacepede) were implanted with 400-day radio tags in February 2000 and 2001 to determine changes in movement and behavior before and after hydrilla reduction.