857 results for large-scale observation


Relevance: 90.00%

Abstract:

Large earthquakes, such as the 1960 Chile earthquake and the Sumatra-Andaman earthquake of 26 December 2004, excite the Earth's free oscillations. The eigenfrequencies of these oscillations are closely related to the Earth's internal structure. Conventional approaches, which mainly compute the eigenfrequencies analytically or analyze observations, cannot easily follow the whole process from earthquake occurrence to the excitation of the free oscillations. We therefore use numerical methods combined with large-scale parallel computing to study the Earth's free oscillations excited by giant earthquakes. We first review research on and developments in the Earth's free oscillations, together with the basic theory in spherical coordinates. We then review the numerical simulation of seismic wave propagation and the basic theory of the spectral element method for simulating global seismic wavefields. As a first step towards studying the Earth's free oscillations, we use a finite element method to simulate the propagation of elastic waves and the generation of oscillations in the chime bell of Marquis Yi of Zeng, which has an oval cross-section, by striking different parts of the bell. The bronze chime bells of Marquis Yi of Zeng are precious cultural relics of China. The bells have a two-tone acoustic characteristic, i.e., striking different parts of a bell generates different tones. By analyzing the vibration of the bell and its spectrum, we further the understanding of the mechanism behind the two-tone characteristic of the chime bell of Marquis Yi of Zeng.
The preliminary calculations clearly show that two different modes of oscillation can be generated by striking different parts of the bell, and indicate that finite element simulation of the wave propagation and two-tone generation of the chime bell of Marquis Yi of Zeng is feasible. These analyses provide a new quantitative and visual way to explain the mystery of the two-tone acoustic characteristic. The method suggested by this study can be applied to simulate free oscillations excited by great earthquakes in a complex Earth structure. At the scale of the whole Earth, small-scale, low-resolution numerical simulation cannot meet the requirements. Exploiting the increasing capacity of high-performance parallel computing and progress in fully numerical solutions for seismic wavefields in realistic three-dimensional spherical models, we combined the spectral element method with high-performance parallel computing to simulate seismic wave propagation in the Earth's interior, neglecting the Earth's gravitational potential. The simulations show that the calculated toroidal modes agree well with theoretical values, although the accuracy of our results is limited and the calculated peaks are slightly distorted by three-dimensional effects. Large differences remain between our calculated spheroidal modes and the theoretical values because the numerical model neglects the Earth's gravitation, which makes our values smaller than the theoretical ones; the effect of gravitation, which shortens the periods of the spheroidal modes, is strongest for the lower-order modes.
Although we cannot yet include the effects of the Earth's gravitational potential in the numerical model when simulating the spheroidal oscillations, the results still demonstrate that numerical simulation of the Earth's free oscillations is feasible. We simulate the Earth's free oscillations in a spherically symmetric Earth model using several special source mechanisms. The results show quantitatively that different earthquakes excite different free oscillations, and that for the same earthquake the oscillations differ from location to location. We also explore how the attenuation of the Earth's medium affects the free oscillations and compare the results with observations: attenuation does influence the oscillations, although the effect on the lower-frequency fundamental modes is weak. Finally, taking the 2008 Wenchuan earthquake as an example, we employ the spectral element method with large-scale parallel computing to investigate the characteristics of the seismic wave propagation it excited. We calculate synthetic seismograms with a one-point source model and a three-point source model. Full 3-D visualization of the numerical results displays the seismic wavefield as a function of time. The three-point source, proposed by recent investigations based on field observation and inversion, captures the spatial and temporal characteristics of the source rupture process better than the one-point source. Preliminary results show that the synthetics calculated from the three-point source agree well with the observations, further indicating that the rupture of the Wenchuan earthquake was a multi-stage process composed of at least three rupture episodes.
In conclusion, numerical simulation can not only solve problems involving the Earth's ellipticity and anisotropy, which conventional methods handle easily, but also ultimately solve problems involving topography and lateral heterogeneity. In future work we will seek a way to fully implement self-gravitation in the spectral element method, and continue studying, through numerical simulation, how the Earth's lateral heterogeneity affects its free oscillations. This will make it possible to bring modal spectral data increasingly to bear on furthering our understanding of the Earth's three-dimensional structure.
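The mode identification described above rests on spectral analysis of long seismic records: each free-oscillation mode appears as a peak in the amplitude spectrum. A minimal sketch in Python (the synthetic two-tone record, the 0S2- and 0S3-like frequencies, and the function are illustrative assumptions, not the thesis code):

```python
import numpy as np

def mode_peaks(signal, dt, n_peaks=2):
    """Frequencies of the strongest peaks in the amplitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    strongest = np.argsort(spectrum)[::-1][:n_peaks]  # loudest bins first
    return sorted(freqs[strongest])

# Synthetic ~55 h record sampled every 10 s, containing two tones at
# 0S2-like (0.31 mHz) and 0S3-like (0.47 mHz) frequencies (illustrative).
dt = 10.0
t = np.arange(0, 200_000, dt)
record = np.sin(2 * np.pi * 0.31e-3 * t) + 0.5 * np.sin(2 * np.pi * 0.47e-3 * t)
peaks = mode_peaks(record, dt)
```

On real records the peaks are broadened by attenuation and split by rotation and ellipticity, which is why long time series and careful windowing are needed.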

Relevance: 90.00%

Abstract:

With improvements in mantle convection theory, developments in computational methods, and the growth of measurement data, we can simulate ever more clearly how mantle convection affects geophysical observables at the Earth's surface, such as the global heat flow and the global lithospheric stress field. Mantle convection is the primary mechanism transporting heat from the Earth's deep interior to its surface and the underlying driving mechanism of the Earth's dynamics. Chapter 1 reviews the historical background and present research state of mantle convection theory. Chapter 2 introduces the basic concepts of thermal convection and the basic theory of mantle flow. The generation and distribution of the global lithospheric stress field induced by mantle flow are the subject of Chapter 3. Mantle convection exerts normal and tangential stresses at the base of the lithosphere; this sublithospheric stress field deforms the lithosphere as a surface load and produces the stress field within it. The simulation shows good agreement between predictions and observations in most regions. Most subduction zones and continental collision zones are under compression, while ocean ridges and rifts, such as the East Pacific Rise, the Mid-Atlantic Ridge, and the East African Rift valley, are under tension, and most hotspots preferentially occur in regions where the calculated stress is tensile. The calculated directions of the most compressive principal horizontal stress largely accord with observations, except in some regions, such as the NW Pacific subduction zone and the Qinghai-Tibet Plateau, where the directions differ.
This shows that mantle flow plays an important role in causing or affecting the large-scale stress field within the lithosphere. The global heat flow simulation based on a kinematic model of mantle convection is given in Chapter 4. Mantle convection velocities are first calculated from internal loading theory, and the velocity field is then used as input to solve the thermal problem. Results show that the calculated depth derivatives of the near-surface temperature correlate closely with the observed surface heat flow pattern, and the higher heat flow around mid-ocean ridge systems is reproduced very well. The predicted average temperature as a function of depth reveals two thermal boundary layers, one close to the surface and another close to the core-mantle boundary, with the rest of the mantle nearly isothermal. Although advection dominates heat transfer in most of the mantle, conduction is still locally important in the boundary layers and plays an important role in the surface heat flow pattern; the existence of surface plates is responsible for its long-wavelength component. Chapter 5 introduces the effects of mantle convection on present-day crustal movement in the China mainland. Using a dynamic method, we present a quantitative model of the present-day crustal movement in China that considers not only the India-Eurasia collision and the gravitational potential energy difference of the Tibet Plateau, but also the shear traction on the base of the lithosphere induced by global mantle convection. Comparison with the velocity field obtained from GPS observations shows that our model satisfactorily reproduces the general picture of crustal deformation in China.
The numerical results reveal that the stress field at the base of the lithosphere induced by mantle flow is probably a considerable factor in the movement and deformation of the lithosphere in continental China, with its effect focused on eastern China. A numerical study of small-scale convection with variable viscosity in the upper mantle is introduced in Chapter 6. Based on a two-dimensional model, small-scale convection in the mantle-lithosphere system with variable viscosity is investigated using the finite element method, with viscosity varying exponentially with temperature. The results show that if viscosity is strongly temperature-dependent, the upper part of the system does not take part in the convection: a stagnant lid, identified with the lithosphere, forms on top of the system because of its low temperature and high viscosity. The calculated surface heat flow, topography, and gravity anomaly correlate well with the convection pattern; regions with high heat flow and uplift correspond to upwelling flow, and vice versa. Chapter 7 gives a brief outline of a future research subject: the inversion of lateral density heterogeneity in the mantle by minimizing viscous dissipation.
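Whether and how vigorously a heated fluid layer convects, as discussed for the mantle above, is governed by the dimensionless Rayleigh number Ra = ρ g α ΔT d³ / (κ η). A sketch with illustrative, order-of-magnitude whole-mantle values (these are textbook-style numbers, not the parameters used in the thesis):

```python
def rayleigh_number(rho, g, alpha, delta_T, d, kappa, eta):
    """Thermal Rayleigh number: buoyancy forcing over diffusive damping."""
    return rho * g * alpha * delta_T * d**3 / (kappa * eta)

# Illustrative whole-mantle values (order of magnitude only).
Ra = rayleigh_number(
    rho=4.5e3,       # density, kg/m^3
    g=10.0,          # gravitational acceleration, m/s^2
    alpha=2e-5,      # thermal expansivity, 1/K
    delta_T=3000.0,  # super-adiabatic temperature drop, K
    d=2.9e6,         # mantle depth, m
    kappa=1e-6,      # thermal diffusivity, m^2/s
    eta=1e21,        # dynamic viscosity, Pa s
)
convecting = Ra > 1e3  # far above the ~10^3 critical value -> vigorous convection
```

With these numbers Ra comes out around 10^7, consistent with the vigorous, boundary-layer-dominated convection described in Chapter 4.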

Relevance: 90.00%

Abstract:

Q. Shen. Rough feature selection for intelligent classifiers. LNCS Transactions on Rough Sets, 7:244-255, 2007.

Relevance: 90.00%

Abstract:

Breen, Andrew; Fallows, R. A.; Thomasson, P.; Bisi, M. M., 'Extremely long baseline interplanetary scintillation measurements of solar wind velocity', Journal of Geophysical Research (2006) 111(A8) pp.A08104 RAE2008

Relevance: 90.00%

Abstract:

Javier G. P. Gamarra and Ricard V. Sole (2002). Biomass-diversity responses and spatial dependencies in disturbed tallgrass prairies. Journal of Theoretical Biology, 215 (4) pp.469-480 RAE2008

Relevance: 90.00%

Abstract:

This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to huge real-world graphs in applications that require very short response times. The focus is on approximate methods for distance estimation, in particular landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing offline, for each node in the graph, its embedding, i.e., the vector of its distances to all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, and thus heuristic solutions are employed. Given a memory budget for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally on five real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach, which selects landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
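The offline embedding step and the query-time estimate can be sketched as follows. The toy unweighted graph, the landmark choice, and the function names are illustrative assumptions, not the thesis code; BFS gives the exact offline distances, and the triangle inequality turns them into a fast upper-bound estimate:

```python
from collections import deque

def bfs_distances(adj, source):
    """Exact hop distances from `source` to every node (offline step)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_embeddings(adj, landmarks):
    """Per-node embedding: the vector of distances to each landmark."""
    per_landmark = [bfs_distances(adj, l) for l in landmarks]
    return {v: [d[v] for d in per_landmark] for v in adj}

def estimate(emb, s, t):
    """Upper bound on d(s, t): min over landmarks of d(s, L) + d(L, t)."""
    return min(ds + dt for ds, dt in zip(emb[s], emb[t]))

# Toy path graph 0-1-2-3-4 with a single landmark at node 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
emb = build_embeddings(adj, landmarks=[2])
approx = estimate(emb, 0, 4)  # here 2 + 2 = 4, the true distance
```

Only the small embedding vectors are kept in memory, so a query costs O(#landmarks) regardless of graph size, which is the source of the speed-up.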

Relevance: 90.00%

Abstract:

We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing, offline, the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is NP-hard, and thus heuristic solutions need to be employed. Given a memory budget for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally on five real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method to two problems arising naturally in large-scale networks, namely social search and community detection.
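The two selection ideas mentioned above, central nodes and central nodes that are also spread out, can be sketched as a greedy heuristic: seed with the most central node, then repeatedly add the node farthest from the landmarks chosen so far. The toy graph, the degree-based notion of centrality, and the function names are illustrative assumptions, not the paper's implementation:

```python
from collections import deque

def bfs_distances(adj, source):
    """Exact hop distances from `source` to every node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def select_landmarks(adj, k):
    """Seed with the highest-degree (most central) node, then greedily
    add the node farthest from all landmarks chosen so far."""
    landmarks = [max(adj, key=lambda v: len(adj[v]))]
    while len(landmarks) < k:
        dists = [bfs_distances(adj, l) for l in landmarks]
        farthest = max(adj, key=lambda v: min(d[v] for d in dists))
        landmarks.append(farthest)
    return landmarks

# Toy graph: hub node 0 joined to two short appendages and a chain 3-4-5.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3, 5], 5: [4]}
picks = select_landmarks(adj, 2)  # the hub, then the far end of the chain
```

Spreading the landmarks out matters because two nearby landmarks give nearly identical distance vectors and therefore little extra accuracy per unit of memory.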

Relevance: 90.00%

Abstract:

Anterior inferotemporal cortex (ITa) plays a key role in visual object recognition. Recognition is tolerant to object position, size, and view changes, yet recent neurophysiological data show ITa cells with high object selectivity often have low position tolerance, and vice versa. A neural model learns to simulate both this tradeoff and ITa responses to image morphs using large-scale and small-scale IT cells whose population properties may support invariant recognition.

Relevance: 90.00%

Abstract:

This paper presents an Eulerian-based numerical model of particle degradation in dilute-phase pneumatic conveying systems including bends of different angles. The model shows reasonable agreement with detailed measurements from a pilot-sized pneumatic conveying system and a much larger scale pneumatic conveyor. The potential of the model to predict degradation in a large-scale conveying system from an industrial plant is demonstrated. The importance of the effect of the bend angle on the damage imparted to the particles is discussed.

Relevance: 90.00%

Abstract:

Computer egress simulation has the potential to be used in large-scale incidents to provide live advice to incident commanders. While there are many considerations that must be taken into account when applying such models to live incidents, one of the first concerns the computational speed of the simulations: no matter how important the insight provided by a simulation, numerical hindsight is of no use to an incident commander. For this type of application, it is therefore essential that the simulation run many times faster than real time. Parallel processing reduces run times for very large computational simulations by distributing the workload among a number of CPUs. In this paper we examine the development of a parallel version of the buildingEXODUS software. The parallel strategy implemented is based on a systematic partitioning of the problem domain into an arbitrary number of sub-domains; each sub-domain is computed on a separate processor running its own copy of the EXODUS code. The software has been designed to work on typical office-based networked PCs but will also function on a Windows-based cluster. Two evaluation scenarios using the parallel implementation of EXODUS are described: a large open area and a 50-story high-rise building. Speed-ups of up to 3.7 are achieved using up to six computers, with the high-rise building evacuation simulation achieving run times 6.4 times faster than real time.
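The domain-partitioning idea can be illustrated with a toy strip decomposition of a floor grid, each strip going to one CPU running its own solver copy. This is a deliberately simple sketch with hypothetical function names; the real EXODUS partitioner is more sophisticated and must also handle inter-domain communication:

```python
def partition_grid(width, height, n_parts):
    """Split a rectangular floor grid into vertical strips, one per CPU.
    Returns (x0, y0, strip_width, height) tuples covering the grid."""
    base, extra = divmod(width, n_parts)  # spread the remainder evenly
    strips, x = [], 0
    for i in range(n_parts):
        w = base + (1 if i < extra else 0)
        strips.append((x, 0, w, height))
        x += w
    return strips

def speedup(serial_time_s, parallel_time_s):
    """Conventional speed-up figure: serial runtime over parallel runtime."""
    return serial_time_s / parallel_time_s

strips = partition_grid(100, 40, 6)       # six sub-domains for six CPUs
s = speedup(serial_time_s=370.0, parallel_time_s=100.0)  # 3.7, as in the paper
```

The speed-up of 3.7 on six machines (rather than the ideal 6) reflects the communication and load-imbalance overheads that any such partitioning incurs when occupants cluster in part of the domain.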

Relevance: 90.00%

Abstract:

This study investigates the use of computer-modelled versus directly, experimentally determined fire hazard data for assessing survivability within buildings, using evacuation models that incorporate Fractional Effective Dose (FED) models. The objective is to establish a link between effluent toxicity, measured using a variety of small- and large-scale tests, and building evacuation. For the scenarios under consideration, fire simulation is typically used to determine the time at which non-survivable conditions develop within the enclosure, for example when the smoke or toxic effluent layer falls below a critical height deemed detrimental to evacuation, or when the radiative fluxes reach a critical value leading to the onset of flashover. The evacuation calculation is then used to determine whether people within the structure can evacuate before these critical conditions develop.
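The FED concept behind the survivability assessment is a running sum of fractional doses, each timestep's concentration multiplied by exposure time and divided by a critical concentration-time product, with FED reaching 1 taken to mark incapacitation. A minimal sketch with illustrative values (the exposure series and the critical dose are hypothetical, not from the study):

```python
def fed_increment(concentration_ppm, exposure_s, ct_critical_ppm_min):
    """Fractional dose received in one timestep: (C * dt) / (C*t)_crit."""
    return concentration_ppm * (exposure_s / 60.0) / ct_critical_ppm_min

def accumulate_fed(samples_ppm, dt_s, ct_critical_ppm_min):
    """Sum per-step fractional doses; FED >= 1 marks incapacitation."""
    fed = 0.0
    for c in samples_ppm:
        fed += fed_increment(c, dt_s, ct_critical_ppm_min)
        if fed >= 1.0:
            return fed, True
    return fed, False

# Hypothetical rising CO exposure sampled every 60 s, with an
# illustrative critical dose of 35,000 ppm-min.
samples = [500, 1500, 4000, 9000, 12000, 15000]
fed, incapacitated = accumulate_fed(samples, dt_s=60.0,
                                    ct_critical_ppm_min=35000.0)
```

Real FED models sum contributions from several gases and adjust for hyperventilation; the single-gas running sum above is only the skeleton of the calculation.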

Relevance: 90.00%

Abstract:

The Mediterranean Sea lies at a crossroads of mid-latitude and subtropical climatic modes that create contrasting environmental conditions over both latitudinal and longitudinal ranges. Here, we show that this large-scale environmental forcing is reflected in the basin-scale trends of the adult population of the calanoid copepod Centropages typicus. The species is distributed over the whole Mediterranean basin; maximal abundances were found in the north-western basin, associated with oceanic fronts, and in the Adriatic Sea, associated with shallow, semi-enclosed waters. The peak abundances of C. typicus correlate with the latitudinal temperature gradient, and the highest seasonal abundances occurred in spring within the 14–18°C temperature window. This thermal window may define the latitudinal region where C. typicus seasonally dominates the >200 μm-sized spring copepod community in the Mediterranean Sea. The approach used here is generally applicable to investigating the large-scale spatial patterns of other planktonic organisms and to identifying favourable environmental windows for population development.

Relevance: 90.00%

Abstract:

Transient micronutrient enrichment of the surface ocean can enhance phytoplankton growth rates and alter microbial community structure, with an ensuing spectrum of biogeochemical feedbacks. Strong phytoplankton responses to micronutrients supplied by volcanic ash have been reported recently. Here we: (i) synthesize findings from these recent studies; (ii) report the results of a new remote sensing study of ash fertilization; and (iii) calculate theoretical bounds of ash-fertilized carbon export. Our synthesis highlights that phytoplankton responses to ash do not always simply mimic those of iron amendment; the exact mechanisms are likely biogeochemically important but are not yet well understood. Inherent optical properties of ash-loaded seawater suggest that rhyolitic ash biases routine satellite chlorophyll-a estimation upwards by more than an order of magnitude for waters with <0.1 mg chlorophyll-a m⁻³, and by less than a factor of 2 for systems with >0.5 mg chlorophyll-a m⁻³. For this reason, post-ash-deposition chlorophyll-a changes in oligotrophic waters detected via standard Case 1 (open ocean) algorithms should be interpreted with caution. Remote sensing analysis of historic events with a bias of less than a factor of 2 provided limited stand-alone evidence for ash fertilization. Confounding factors were poor coverage, incoherent ash dispersal, and ambiguity in ascribing biomass changes to ash supply rather than other potential drivers. Using current estimates of iron release and carbon export efficiencies, uncertainty bounds of ash-fertilized carbon export are presented for three events. Patagonian iron supply to the Southern Ocean from volcanic eruptions is less than that of windblown dust on thousand-year timescales but can dominate supply on shorter timescales. Reducing uncertainties in the remote sensing of phytoplankton response and in nutrient release from ash are avenues for enabling assessment of the oceanic response to large-scale transient nutrient enrichment.

Relevance: 90.00%

Abstract:

Phytoplankton size structure is an important indicator of the state of the pelagic ecosystem. Stimulated by the paucity of in situ observations on size structure, and by the sampling advantages of autonomous remote platforms, new efforts are being made to infer the size-structure of the phytoplankton from oceanographic variables that may be measured at high temporal and spatial resolution, such as total chlorophyll concentration. Large-scale analysis of in situ data has revealed coherent relationships between size-fractionated chlorophyll and total chlorophyll that can be quantified using the three-component model of Brewin et al. (2010). However, there are variations surrounding these general relationships. In this paper, we first revise the three-component model using a global dataset of surface phytoplankton pigment measurements. Then, using estimates of the average irradiance in the mixed-layer, we investigate the influence of ambient light on the parameters of the three-component model. We observe significant relationships between model parameters and the average irradiance in the mixed-layer, consistent with ecological knowledge. These relationships are incorporated explicitly into the three-component model to illustrate variations in the relationship between size-structure and total chlorophyll, ensuing from variations in light availability. The new model may be used as a tool to investigate modifications in size-structure in the context of a changing climate.
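The three-component model referred to above represents the chlorophyll held in the smaller size classes as saturating exponential functions of total chlorophyll, with the remainder attributed to microphytoplankton. A sketch assuming that functional form; the parameter values below are illustrative placeholders, not the fitted values of Brewin et al. (2010) or of this paper:

```python
import math

def three_component(total_chl, cm_pn=1.1, s_pn=0.9, cm_p=0.11, s_p=6.8):
    """Partition total chlorophyll (mg m^-3) into pico, nano, micro.
    Small-cell chlorophyll saturates as total chlorophyll grows;
    the residual is assigned to microphytoplankton."""
    c_pn = cm_pn * (1 - math.exp(-s_pn * total_chl))  # pico + nano combined
    c_p = cm_p * (1 - math.exp(-s_p * total_chl))     # picophytoplankton
    return c_p, c_pn - c_p, total_chl - c_pn          # pico, nano, micro

pico, nano, micro = three_component(1.0)
```

The paper's extension makes the parameters functions of mixed-layer irradiance rather than constants, which is how light availability modulates the size-structure relationship.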

Relevance: 90.00%

Abstract:

A queue manager (QM) is a core traffic management (TM) function used to provide per-flow queuing in access and metro networks; however, current designs have limited scalability. An on-demand QM (OD-QM), part of a new modular field-programmable gate array (FPGA)-based TM, is presented that dynamically maps active flows to the available physical resources; its scalability derives from exploiting the observation that there are only a few hundred active flows in a high-speed network. Simulations with real traffic show that it is a scalable, cost-effective approach that enhances per-flow queuing performance, allowing per-flow QM without extra external memory at speeds up to 10 Gbps. It utilizes 2.3%–16.3% of a Xilinx XC5VSX50T FPGA and runs at 111 MHz.
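The on-demand mapping of active flows to a small pool of physical queues can be sketched with an LRU table. This is a toy software analogue of the idea, not the FPGA design; in particular, a real QM would reclaim only idle (empty) queues, whereas this sketch simply evicts the least recently used entry:

```python
from collections import OrderedDict

class OnDemandQM:
    """Map active flows to a small pool of physical queues on demand.
    Only the few hundred currently active flows ever hold a queue;
    the least recently used flow is evicted when the pool is full."""
    def __init__(self, n_physical):
        self.n_physical = n_physical
        self.table = OrderedDict()  # flow_id -> physical queue (a list)

    def enqueue(self, flow_id, packet):
        if flow_id in self.table:
            self.table.move_to_end(flow_id)         # flow is active again
        else:
            if len(self.table) >= self.n_physical:  # pool exhausted:
                self.table.popitem(last=False)      # reclaim the LRU queue
            self.table[flow_id] = []
        self.table[flow_id].append(packet)

qm = OnDemandQM(n_physical=2)
qm.enqueue("flow-a", "p1")
qm.enqueue("flow-b", "p2")
qm.enqueue("flow-c", "p3")  # evicts the least recently used flow-a
active = list(qm.table)
```

The hardware version keeps this table in on-chip memory, which is why per-flow queuing becomes possible without external DRAM.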