880 results for Large-scale Structures
Abstract:
The geologic structures and metamorphic zonation of the northwestern Indian Himalaya contrast significantly with those in the central and eastern parts of the range, where the high-grade metamorphic rocks of the High Himalayan Crystalline (HHC) are thrust southward over the weakly metamorphosed sediments of the Lesser Himalaya along the Main Central Thrust (MCT). Indeed, the hanging wall of the MCT in the NW Himalaya consists mainly of the greenschist-facies metasediments of the Chamba zone, whereas the high-grade rocks of the HHC are exposed more internally in the range as a large-scale dome, the Gianbul dome. The dome is bounded by two oppositely directed shear zones: the NE-dipping Zanskar Shear Zone (ZSZ) on the northern flank and the SW-dipping Miyar Shear Zone (MSZ) on the southern limb. Current models for the emplacement of the HHC in NW India as a dome structure differ mainly in the roles played by the ZSZ and the MSZ during the tectonothermal evolution of the HHC. In both the channel flow model and the wedge extrusion model, the ZSZ acts as a backstop normal fault along which the high-grade metamorphic rocks of the HHC of Zanskar are exhumed. In contrast, the recently proposed tectonic wedging model argues that the ZSZ and the MSZ correspond to a single detachment system operating as a subhorizontal backthrust off the MCT. The kinematic evolution of the two shear zones, the ZSZ and the MSZ, and their structural, metamorphic and chronological relations therefore appear to be diagnostic features for discriminating between the different models. In this paper, structural, metamorphic and geochronological data demonstrate that the MSZ and the ZSZ experienced two distinct kinematic evolutions. As such, the data presented here rule out the hypothesis that the MSZ and the ZSZ constitute a single detachment system, as postulated by the tectonic wedging model.
Structural, metamorphic and geochronological data are used to present an alternative tectonic model for the large-scale doming in the NW Indian Himalaya involving early NE-directed tectonics, weakness in the upper crust, reduced erosion at the orogenic front and rapid exhumation along both the ZSZ and the MSZ.
Abstract:
To date, published studies of alluvial bar architecture in large rivers have been restricted mostly to case studies of individual bars and single locations. Relatively little is known about how the depositional processes and sedimentary architecture of kilometre-scale bars vary within a multi-kilometre reach or over several hundreds of kilometres downstream. This study presents Ground Penetrating Radar and core data from 11 kilometre-scale bars in the Rio Parana, Argentina. The investigated bars are located between 30 km upstream and 540 km downstream of the Rio Parana - Rio Paraguay confluence, where a significant volume of fine-grained suspended sediment is introduced into the network. Bar-scale cross-stratified sets, with lengths and widths up to 600 m and thicknesses up to 12 m, enable the distinction of large river deposits from stacked deposits of smaller rivers, but are only present in half the surface area of the bars. Up to 90% of bar-scale sets are found on top of finer-grained ripple-laminated bar-trough deposits. Bar-scale sets make up as much as 58% of the volume of the deposits in small, incipient mid-channel bars, but this proportion decreases significantly with increasing age and size of the bars. Contrary to what might be expected, a significant proportion of the sedimentary structures found in the Rio Parana is similar in scale to those found in much smaller rivers. In other words, large river deposits are not always characterized by big structures that allow a simple interpretation of river scale. However, the large scale of the depositional units in big rivers causes small-scale structures, such as ripple sets, to be grouped into thicker cosets, which indicate river scale even when no obvious large-scale sets are present. The results also show that the composition of bars differs between the studied reaches upstream and downstream of the confluence with the Rio Paraguay.
Relative to other controls on downstream fining, the tributary input of fine-grained suspended material from the Rio Paraguay causes a marked change in the composition of the bar deposits. Compared to the upstream reaches, the sedimentary architecture of the downstream reaches in the top ca 5 m of mid-channel bars shows: (i) an increase in the abundance and thickness (up to metre-scale) of laterally extensive (hundreds of metres) fine-grained layers; (ii) an increase in the percentage of deposits composed of ripple sets (to >40% in the upper bar deposits); and (iii) an increase in bar-trough deposits and a corresponding decrease in bar-scale cross-strata (<10%). The thalweg deposits of the Rio Parana are composed of dune sets, even directly downstream from the Rio Paraguay where the upper channel deposits are dominantly fine-grained. Thus, the change in sedimentary facies due to a tributary point-source of fine-grained sediment is primarily expressed in the composition of the upper bar deposits.
Abstract:
Wind energy has attracted high expectations owing to the risks of global warming and of nuclear power plant accidents. Nowadays, wind farms are often constructed in areas of complex terrain. A potential wind farm location must have its site thoroughly surveyed and its wind climatology analyzed before any hardware is installed. Therefore, modeling of Atmospheric Boundary Layer (ABL) flows over complex terrains containing, e.g. hills, forest and lakes is of great interest in wind energy applications, as it can help in locating and optimizing wind farms. Numerical modeling of wind flows using Computational Fluid Dynamics (CFD) has become a popular technique during the last few decades. Due to the inherent flow variability and large-scale unsteadiness typical of ABL flows in general, and especially over complex terrains, the flow can be difficult to predict accurately enough using the Reynolds-Averaged Navier-Stokes equations (RANS). Large-Eddy Simulation (LES) resolves the largest and thus most important turbulent eddies and models only the small-scale motions, which are more universal than the large eddies and thus easier to model. Therefore, LES is expected to be more suitable for this kind of simulation, although it is computationally more expensive than the RANS approach. With the fast development of computers and open-source CFD software in recent years, the application of LES to atmospheric flows is becoming increasingly common. The aim of the work is to simulate atmospheric flows over realistic and complex terrains by means of LES. Evaluation of potential inland wind park locations will be the main application for these simulations. Development of the LES methodology to simulate atmospheric flows over realistic terrains is reported in the thesis. The work also aims at validating the LES methodology at a real scale.
In the thesis, LES are carried out for flow problems ranging from basic channel flows to real atmospheric flows over one of the most recent real-life complex terrain problems, the Bolund hill. All the simulations reported in the thesis are carried out using a new OpenFOAM®-based LES solver. The solver uses the 4th-order time-accurate Runge-Kutta scheme and a fractional step method. Moreover, development of the LES methodology includes special attention to two boundary conditions: the upstream (inflow) and wall boundary conditions. The upstream boundary condition is generated by using the so-called recycling technique, in which the instantaneous flow properties are sampled on a plane downstream of the inlet and mapped back to the inlet at each time step. This technique develops the upstream boundary-layer flow together with the inflow turbulence without using any precursor simulation and thus within a single computational domain. The roughness of the terrain surface is modeled by implementing a new wall function into OpenFOAM® during the thesis work. Both the recycling method and the newly implemented wall function are validated for channel flows at relatively high Reynolds number before applying them to the atmospheric flow applications. After validating the LES model over simple flows, the simulations are carried out for atmospheric boundary-layer flows over two types of hills: first, two-dimensional wind-tunnel hill profiles and second, the Bolund hill located in Roskilde Fjord, Denmark. For the two-dimensional wind-tunnel hills, the study focuses on the overall flow behavior as a function of the hill slope. Moreover, the simulations are repeated using another wall function suitable for smooth surfaces, which already existed in OpenFOAM®, in order to study the sensitivity of the flow to the surface roughness in ABL flows. The simulated results obtained using the two wall functions are compared against the wind-tunnel measurements.
It is shown that LES using the implemented wall function produces overall satisfactory results for the turbulent flow over the two-dimensional hills. The prediction of the flow separation and reattachment length for the steeper hill is closer to the measurements than the other numerical studies reported in the past for the same hill geometry. The field measurement campaign performed over the Bolund hill provides the most recent field-experiment dataset for the mean flow and the turbulence properties. A number of research groups have simulated the wind flows over the Bolund hill. Due to its challenging features, such as the almost vertical hill slope, the hill is considered an ideal experimental test case for validating micro-scale CFD models for wind energy applications. In this work, the simulated results obtained for two wind directions are compared against the field measurements. It is shown that the present LES can reproduce the complex turbulent wind flow structures over a complicated terrain such as the Bolund hill. In particular, the present LES results show the best prediction of the turbulent kinetic energy, with an average error of 24.1%, which is 43% smaller than any other model results reported in the past for the Bolund case. Finally, the validated LES methodology is demonstrated by simulating the wind flow over the existing Muukko wind farm located in south-eastern Finland. The simulation is carried out for only one wind direction and the results for the instantaneous and time-averaged wind speeds are briefly reported. The demonstration case is followed by discussions of the practical aspects of LES for wind resource assessment over a realistic inland wind farm.
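The recycling inflow technique described in the abstract can be illustrated in toy form: the instantaneous state on a plane downstream of the inlet is copied back to the inlet at every time step, so inflow turbulence develops inside a single domain without a precursor simulation. The sketch below is a minimal 1-D stand-in (assumed grid size, recycling position and forcing; not the thesis's actual OpenFOAM® solver):

```python
import numpy as np

# Toy 1-D illustration of a recycling inflow boundary condition.
# All numerical values here are illustrative assumptions.
nx = 64                 # streamwise grid points
recycle_idx = 16        # index of the downstream sampling ("recycling") plane
u = np.ones(nx)         # streamwise velocity, uniform initial state
rng = np.random.default_rng(0)

def step(u, dt=0.1, dx=1.0, c=1.0):
    """One explicit upwind advection step; the small random forcing is a
    crude stand-in for resolved turbulent fluctuations."""
    un = u.copy()
    un[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])
    un[1:] += 0.01 * rng.standard_normal(nx - 1)
    # Recycling: map the instantaneous state at the sampling plane back
    # to the inlet instead of prescribing a fixed inflow profile.
    un[0] = un[recycle_idx]
    return un

for _ in range(200):
    u = step(u)
```

In a real LES the mapped quantity is a full 2-D plane of velocity (often rescaled to a target boundary-layer state), but the data flow, i.e. sample downstream, copy to the inlet each step, is the same.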
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over the network, for example mitigating pandemic disease or computer virus spread. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard and the number of constraints is cubic in the number of vertices, making very large-scale problems impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network, and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing a vertex in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks.
The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
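The objective the abstract describes, i.e. remove a few vertices so that the residual pairwise connectivity is minimized, can be sketched with a deliberately simple local ranking. The degree-based rank below is an illustrative stand-in for the thesis's DFS-based ranking function, which is not reproduced here; only the pairwise-connectivity objective (the sum of |S|·(|S|−1)/2 over residual components S) follows the problem definition:

```python
from collections import defaultdict

def pairwise_connectivity(adj, removed):
    """Sum of |S|*(|S|-1)/2 over connected components S of the residual
    graph, i.e. the objective minimized in critical node detection."""
    seen, total = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0          # iterative DFS over one component
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        total += size * (size - 1) // 2
    return total

def critical_nodes(adj, k):
    """Pick k vertices by a cheap local score (here: degree).
    The thesis's DFSH ranking is more elaborate; this is a placeholder."""
    rank = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    return set(rank[:k])

# Star graph: removing the hub disconnects all leaves.
adj = defaultdict(set)
for leaf in range(1, 6):
    adj[0].add(leaf)
    adj[leaf].add(0)

removed = critical_nodes(adj, 1)   # the hub, vertex 0
```

On the star graph the residual pairwise connectivity drops from 15 (one component of six vertices) to 0, which is the kind of before/after comparison used to evaluate such heuristics.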
Abstract:
Combined with other observational techniques, polarimetry at visible or near-infrared wavelengths makes it possible to study the morphology of magnetic fields at the periphery of many star-forming regions. Inside molecular clouds the field morphology is known from submillimetre polarimetry, but rarely for the same regions. An intermediate spatial scale is usually missing, preventing a proper comparison of the morphology of the Galactic magnetic field with that inside molecular clouds. -- This thesis provides the means needed to carry out this type of multi-scale analysis in order to better understand the role magnetic fields may play in star formation processes. The first analysis deals with the region GF 9. It is followed by a study of the magnetic field morphology in the OMC-2 and OMC-3 filaments, and then by a multi-scale analysis of the Orion A molecular cloud complex, of which OMC-2 and OMC-3 are part. -- The synthesis of the results covering GF 9 and Orion A is as follows. The statistical approaches employed show that on large spatial scales the magnetic field morphology is poloidal in the GF 9 region, and probably helical in the Orion A region. On the spatial scale of molecular cloud envelopes, the magnetic fields appear aligned with the fields at their periphery. On the spatial scale of the cores, the poloidal magnetic field surrounding the GF 9 region is apparently dragged by the rotating core, and ambipolar diffusion does not currently appear to be effective there. In Orion A, the field morphology is barely detectable in the active star-forming sites of OMC-2, or is very strongly constrained by the effects of gravity in OMC-1. Probable effects of turbulence are not detected in any of the observed regions.
-- The multi-scale analyses therefore suggest that, regardless of the evolutionary stage and mass range of the star-forming regions, the Galactic magnetic field undergoes modifications of its morphology on spatial scales comparable to those of protostellar cores, in the same way that the structural properties of molecular clouds follow self-similarity laws down to scales comparable to those of the cores.
Abstract:
The present success in the manufacture of multi-layer interconnects in ultra-large-scale integration is largely due to the acceptable planarization capabilities of the chemical-mechanical polishing (CMP) process. In the past decade, copper has emerged as the preferred interconnect material. The greatest challenge in Cu CMP at present is the control of wafer surface non-uniformity at various scales. As the size of a wafer has increased to 300 mm, the wafer-level non-uniformity has assumed critical importance. Moreover, the pattern geometry in each die has become quite complex due to a wide range of feature sizes and multi-level structures. Therefore, it is important to develop a non-uniformity model that integrates wafer-, die- and feature-level variations into a unified, multi-scale dielectric erosion and Cu dishing model. In this paper, a systematic way of characterizing and modeling dishing in the single-step Cu CMP process is presented. The possible causes of dishing at each scale are identified in terms of several geometric and process parameters. The feature-scale pressure calculation based on the step-height at each polishing stage is introduced. The dishing model is based on pad elastic deformation and the evolving pattern geometry, and is integrated with the wafer- and die-level variations. Experimental and analytical means of determining the model parameters are outlined and the model is validated by polishing experiments on patterned wafers. Finally, practical approaches for minimizing Cu dishing are suggested.
Abstract:
In this paper, we address the problem of mitigating structural vibrations through the design of a semiactive controller based on mixed H2/H∞ control theory. The vibrations caused by seismic motions are mitigated by a semiactive damper installed at the bottom of the structure. By a semiactive damper is meant a device that can absorb, but cannot inject, energy into the system. Sufficient conditions for the design of a desired control are given in terms of linear matrix inequalities (LMIs). A controller that guarantees asymptotic stability and a mixed H2/H∞ performance is then developed. An algorithm is proposed to handle the semiactive nature of the actuator. The performance of the controller is experimentally evaluated in a real-time hybrid testing facility that consists of a physical specimen (a small-scale magnetorheological damper) and a numerical model (a large-scale three-story building).
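The abstract mentions an algorithm that handles the semiactive nature of the actuator without spelling it out. A common approach in the semiactive control literature (not necessarily the paper's exact algorithm) is a clipping law: the actively computed force is commanded only when applying it would dissipate energy. A minimal sketch, with an assumed saturation level:

```python
def clipped_semiactive_force(f_desired, velocity, f_max=1000.0):
    """Clipping law for a semiactive damper (illustrative; the paper's
    exact algorithm is not reproduced here). The device can only
    dissipate energy, so the desired force is applied only when it
    opposes the motion; otherwise the command is zero.
    f_max is an assumed device saturation level in newtons."""
    dissipative = f_desired * velocity < 0.0   # force opposes the motion
    if not dissipative:
        return 0.0
    return max(-f_max, min(f_max, f_desired))  # clamp to device limits

# In context, f_desired would come from the mixed H2/H-infinity state
# feedback; here it is just a number for illustration.
force = clipped_semiactive_force(-500.0, 1.0)
```

This captures the constraint stated in the abstract, namely that a semiactive device absorbs but cannot inject energy, as a one-line admissibility check on the commanded force.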
Abstract:
In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms to find frequent graphs have received increasing attention over the past years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely, a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments, such as computational grids.
Abstract:
We investigate the spatial characteristics of urban-like canopy flow by applying particle image velocimetry (PIV) to atmospheric turbulence. The study site was a Comprehensive Outdoor Scale MOdel (COSMO) experiment for urban climate in Japan. The PIV system captured the two-dimensional flow field within the canopy layer continuously for an hour with a sampling frequency of 30 Hz, thereby providing reliable outdoor turbulence statistics. PIV measurements in a wind-tunnel facility using similar roughness geometry, but with a lower sampling frequency of 4 Hz, were also carried out for comparison. The turbulent momentum flux from COSMO and the wind tunnel showed similar values and distributions when scaled using friction velocity. Some differing characteristics between the outdoor and indoor flow fields were mainly caused by the larger fluctuations in wind direction in the atmospheric turbulence. The focus of the analysis is on a variety of instantaneous turbulent flow structures. One remarkable flow structure is termed 'flushing', that is, a large-scale upward motion prevailing across the whole vertical cross-section of a building gap. This is observed intermittently, whereby tracer particles are flushed vertically out of the canopy layer. Flushing phenomena are also observed in the wind tunnel, where there is neither thermal stratification nor outer-layer turbulence. It is suggested that flushing phenomena are correlated with the passage of large-scale low-momentum regions above the canopy.
Abstract:
Sensible heat fluxes (QH) are determined using scintillometry and eddy covariance over a suburban area. Two large-aperture scintillometers provide spatially integrated fluxes across path lengths of 2.8 km and 5.5 km over Swindon, UK. The shorter scintillometer path spans newly built residential areas and has an approximate source area of 2-4 km², whilst the long path extends from the rural outskirts to the town centre and has a source area of around 5-10 km². These large-scale heat fluxes are compared with local-scale eddy covariance measurements. Clear seasonal trends are revealed by the long duration of this dataset, and variability in monthly QH is related to the meteorological conditions. At shorter time scales the response of QH to solar radiation often gives rise to close agreement between the measurements, but during times of rapidly changing cloud cover spatial differences in the net radiation (Q*) coincide with greater differences between heat fluxes. For clear days QH lags Q*, thus the ratio of QH to Q* increases throughout the day. In summer the observed energy partitioning is related to the vegetation fraction through use of a footprint model. The results demonstrate the value of scintillometry for integrating surface heterogeneity and offer improved understanding of the influence of anthropogenic materials on surface-atmosphere interactions.
Abstract:
This paper investigates the challenge of representing structural differences in river channel cross-section geometry for regional to global scale river hydraulic models and the effect this can have on simulations of wave dynamics. Classically, channel geometry is defined using data, yet at larger scales the necessary information and model structures do not exist to take this approach. We therefore propose a fundamentally different approach where the structural uncertainty in channel geometry is represented using a simple parameterization, which could then be estimated through calibration or data assimilation. This paper first outlines the development of a computationally efficient numerical scheme to represent generalised channel shapes using a single parameter, which is then validated using a simple straight channel test case and shown to predict wetted perimeter to within 2% for the channels tested. An application to the River Severn, UK is also presented, along with an analysis of model sensitivity to channel shape, depth and friction. The channel shape parameter was shown to improve model simulations of river level, particularly for more physically plausible channel roughness and depth parameter ranges. Calibrating channel Manning’s coefficient in a rectangular channel provided similar water level simulation accuracy in terms of Nash-Sutcliffe efficiency to a model where friction and shape or depth were calibrated. However, the calibrated Manning coefficient in the rectangular channel model was ~2/3 greater than the likely physically realistic value for this reach and this erroneously slowed wave propagation times through the reach by several hours. Therefore, for large scale models applied in data sparse areas, calibrating channel depth and/or shape may be preferable to assuming a rectangular geometry and calibrating friction alone.
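A single-parameter family of channel cross-sections can be made concrete with a power-law profile z(x) = H·(2|x|/W)^s, which spans triangular (s = 1), parabolic (s = 2) and near-rectangular (large s) shapes. This is an illustrative parameterization, not the paper's actual numerical scheme; the sketch below computes the bankfull wetted perimeter, the quantity the paper validates to within 2%, by polyline quadrature:

```python
import numpy as np

def wetted_perimeter(W, H, s, n=20001):
    """Bankfull wetted perimeter of a power-law channel
    z(x) = H * (2|x|/W)**s for 0 <= x <= W/2 (half-section),
    computed as the arc length of the sampled bank profile.
    W: top width, H: bankfull depth, s: shape parameter (assumed)."""
    x = np.linspace(0.0, W / 2.0, n)
    z = H * (2.0 * x / W) ** s
    ds = np.sqrt(np.diff(x) ** 2 + np.diff(z) ** 2)  # segment lengths
    return 2.0 * ds.sum()                            # double by symmetry

# Sanity check: a triangular channel (s = 1) has an exact perimeter
# of 2 * sqrt((W/2)**2 + H**2).
W, H = 50.0, 5.0
exact_triangle = 2.0 * np.hypot(W / 2.0, H)
approx_triangle = wetted_perimeter(W, H, 1.0)
```

As s grows, the perimeter approaches the rectangular limit W + 2H from below, so a calibration or data-assimilation scheme could in principle adjust s continuously between shape extremes, which is the kind of structural-uncertainty parameter the paper proposes.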
Abstract:
We investigate the impact of the existence of a primordial magnetic field on the filter mass, characterizing the minimum baryonic mass that can form in dark matter (DM) haloes. For masses below the filter mass, the baryon content of DM haloes is severely depressed. The filter mass is the mass at which the baryon-to-DM mass ratio in a halo is equal to half the baryon-to-DM ratio of the Universe. The filter mass has previously been used in semi-analytic calculations of galaxy formation without taking into account the possible existence of a primordial magnetic field. We examine here its effect on the filter mass. For homogeneous comoving primordial magnetic fields of B0 ~ 1 or 2 nG and a re-ionization epoch that starts at a redshift zs = 11 and is completed at zr = 8, the filter mass at redshift 8 is increased, for example, by factors of 4.1 and 19.8, respectively. The dependence of the filter mass on the parameters describing the re-ionization epoch is investigated. Our results are particularly important for the formation of low-mass galaxies in the presence of a homogeneous primordial magnetic field. For example, for B0 ~ 1 nG and a re-ionization epoch of zs ~ 11 and zr ~ 7, our results indicate that galaxies of total mass M ~ 5 × 10^8 M⊙ need to form at redshifts zF ≳ 2.0, and galaxies of total mass M ~ 10^8 M⊙ at redshifts zF ≳ 7.7.
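The filter-mass definition used here, i.e. the halo mass at which the baryon-to-DM ratio falls to half its universal value, is often paired with a Gnedin (2000)-style fitting formula for the mean baryon fraction of haloes. The sketch below uses that fitting form to show the definition at work; the universal fraction and masses are assumed illustrative values, not numbers from the paper:

```python
def baryon_fraction(M, M_c, f_univ=0.17):
    """Gnedin (2000)-style fitting formula for the mean baryon-to-DM
    ratio of haloes of mass M, given a characteristic (filter) mass M_c.
    f_univ is an assumed universal baryon-to-DM ratio (illustrative)."""
    return f_univ * (1.0 + (2.0 ** (1.0 / 3.0) - 1.0) * (M_c / M)) ** -3.0

# By construction the fraction equals half the universal value exactly
# at M = M_c, which is the filter-mass definition quoted above, and the
# baryon content is strongly suppressed well below M_c.
at_filter_mass = baryon_fraction(1e9, 1e9)   # half of f_univ
well_below = baryon_fraction(1e7, 1e9)       # strongly suppressed
```

A magnetic-field-dependent increase of the filter mass (factors of 4.1-19.8 in the abstract) would enter such a formula simply as a larger M_c, pushing the suppression to higher halo masses.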
Abstract:
A general approach is presented for implementing discrete transforms as a set of first-order or second-order recursive digital filters. Clenshaw's recurrence formulae are used to formulate the second-order filters. The resulting structure is suitable for efficient implementation of discrete transforms in VLSI or FPGA circuits. The general approach is applied to the discrete Legendre transform as an illustration.
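The link between Clenshaw's recurrence formulae and second-order recursive filters can be made concrete: evaluating a truncated Legendre series S(x) = Σ c_k P_k(x) with Clenshaw's recurrence is exactly a second-order recursion in an auxiliary sequence b_k. The sketch below illustrates that structure for the discrete Legendre transform mentioned in the abstract; it is a minimal scalar version, not the paper's VLSI/FPGA filter structure:

```python
import numpy as np

def clenshaw_legendre(c, x):
    """Evaluate S(x) = sum_{k=0}^{N} c[k] * P_k(x) via Clenshaw's
    recurrence, using the Legendre three-term recurrence
        P_{k+1}(x) = ((2k+1) x P_k(x) - k P_{k-1}(x)) / (k+1).
    The backward recursion in b is the second-order recursive filter:
        b_k = c_k + alpha_k(x) * b_{k+1} + beta_{k+1} * b_{k+2}."""
    N = len(c) - 1
    b1 = b2 = 0.0                                # b_{k+1}, b_{k+2}
    for k in range(N, 0, -1):
        alpha = (2 * k + 1) * x / (k + 1)        # coefficient of P_k in P_{k+1}
        beta_next = -(k + 1) / (k + 2)           # beta_{k+1}
        b1, b2 = c[k] + alpha * b1 + beta_next * b2, b1
    # Termination: S = c_0*P_0 + b_1*P_1 + beta_1*P_0*b_2,
    # with P_0 = 1, P_1 = x and beta_1 = -1/2.
    return c[0] + x * b1 - 0.5 * b2

# Cross-check against NumPy's direct Legendre series evaluation.
coeffs = [0.3, -1.2, 0.5, 2.0]
x = 0.37
s_clenshaw = clenshaw_legendre(coeffs, x)
s_direct = np.polynomial.legendre.legval(x, coeffs)
```

Because each update of b depends only on the two previous values, the loop body maps directly onto a second-order recursive digital filter section, which is the property the paper exploits for hardware implementation.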
Abstract:
Atmospheric circulation modes are important concepts in understanding the variability of atmospheric dynamics. Assuming their spatial patterns to be fixed, such modes are often described by simple indices derived from rather short observational data sets. The increasing length of reanalysis products allows these concepts and assumptions to be scrutinised. Here we investigate the stability of the spatial patterns of Northern Hemisphere teleconnections using the Twentieth Century Reanalysis as well as several control and transient millennium-scale simulations with coupled models. The observed and simulated centres of action of the two major teleconnection patterns, the North Atlantic Oscillation (NAO) and, to some extent, the Pacific North American pattern (PNA), are not stable in time. The currently observed dipole pattern of the NAO, with centres of action over Iceland and the Azores, split into a north–south dipole pattern in the western Atlantic with a wave-train pattern in the eastern part, connecting the British Isles with West Greenland and the eastern Mediterranean, during the period 1940–1969 AD. The PNA centres of action over Canada were shifted southwards, and over Florida into the Gulf of Mexico, during the period 1915–1944 AD. The analysis further shows that shifts in the centres of action of either teleconnection pattern are not related to changes in the external forcing applied in transient simulations of the last millennium. Such shifts in the centres of action are accompanied by changes in the relation of local precipitation and temperature to the overlying atmospheric mode. These findings further undermine the assumption of stationarity between local climate/proxy variability and large-scale dynamics that is inherent in proxy-based reconstructions of atmospheric modes, and call for a more robust understanding of atmospheric variability on decadal timescales.