987 results for Sub-seafloor modeling


Relevance:

30.00%

Publisher:

Abstract:

Parallel sub-word recognition (PSWR) is a new model proposed for language identification (LID) that does not need elaborate phonetic labeling of the speech data in a foreign language. The new approach performs front-end tokenization in terms of sub-word units that are designed by automatic segmentation, segment clustering and segment HMM modeling. We develop PSWR-based LID in a framework similar to the parallel phone recognition (PPR) approach in the literature. This includes a front-end tokenizer and a back-end language model for each language to be identified. Considering various combinations of the statistical evaluation scores, it is found that PSWR can perform as well as PPR, even with broad acoustic sub-word tokenization, thus making it an efficient alternative to the PPR system.
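
The decision rule behind this PPR-style architecture is simple enough to sketch: each candidate language gets its own sub-word tokenizer and back-end language model, and the hypothesized language is the one whose model scores the tokenized utterance highest. The following is a minimal illustration with stub tokenizers and a toy add-k-smoothed bigram back end; it is not the authors' implementation.

```python
# Minimal sketch of the PSWR decision rule (not the authors' code): each
# candidate language L has a front-end sub-word tokenizer and a back-end
# bigram language model; the hypothesized language maximizes the back-end
# log-likelihood of the tokenized utterance.
from collections import defaultdict
from math import log

class BigramLM:
    """Toy back-end language model over sub-word token sequences."""
    def __init__(self, add_k=1.0):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.add_k = add_k
        self.vocab = set()

    def train(self, token_sequences):
        for seq in token_sequences:
            for prev, cur in zip(["<s>"] + seq, seq + ["</s>"]):
                self.counts[prev][cur] += 1.0
                self.vocab.update([prev, cur])

    def loglik(self, seq):
        ll = 0.0
        for prev, cur in zip(["<s>"] + seq, seq + ["</s>"]):
            num = self.counts[prev][cur] + self.add_k
            den = sum(self.counts[prev].values()) + self.add_k * len(self.vocab)
            ll += log(num / den)
        return ll

def identify_language(utterance, tokenizers, lms):
    """tokenizers/lms: dicts keyed by language; a tokenizer maps audio -> sub-word tokens."""
    scores = {lang: lms[lang].loglik(tokenizers[lang](utterance)) for lang in lms}
    return max(scores, key=scores.get)

# toy usage: two "languages" with different sub-word statistics
lm_a, lm_b = BigramLM(), BigramLM()
lm_a.train([["ka", "ta", "pa"], ["ka", "pa"]])
lm_b.train([["sh", "ee", "lo"], ["lo", "sh"]])
tok = {"A": lambda u: u, "B": lambda u: u}   # stand-in tokenizers: identity on pre-tokenized input
print(identify_language(["ka", "ta"], tok, {"A": lm_a, "B": lm_b}))   # -> "A"
```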

Relevance:

30.00%

Publisher:

Abstract:

This work demonstrates the feasibility of mesoscale (100 μm to mm) punching of multiple holes of intricate shapes in metals. Analytical modeling, finite element (FE) simulation, and experiments are used in this work. Two-dimensional FE simulations in ABAQUS were carried out with an assumed material model under the plane-strain condition. A known analytical model was compared with the ABAQUS simulation results to understand the effect of the clearance between the punch and the die. FE simulations in ABAQUS were run for different clearances and corner radii at the punch, die, and holder. A set of punches and dies was used to punch out a miniature spring-steel gripper. Comparison of compliant grippers manufactured by wire-cut electro-discharge machining (EDM) and by punching shows that realizing sharp interior and re-entrant corners by punching is not easy to achieve. Punching of circular holes with 5 mm and 2.5 mm diameter is achieved. The possibility of realizing mesoscale parts with complicated shapes through punching is demonstrated in this work, and some strategies are suggested for improvement.
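
For orientation, the force needed to punch such holes can be estimated with the standard shearing relation F ≈ perimeter × sheet thickness × ultimate shear strength. A back-of-envelope sketch follows; the thickness and shear-strength values are assumptions, not values from the paper.

```python
# Back-of-envelope estimate (not from the paper) of the shear force needed to
# punch a circular hole: F ≈ perimeter * sheet thickness * ultimate shear strength.
# The thickness and shear-strength values below are illustrative assumptions.
import math

def punching_force(diameter_m, thickness_m, shear_strength_pa):
    perimeter = math.pi * diameter_m
    return perimeter * thickness_m * shear_strength_pa

d = 2.5e-3        # 2.5 mm hole, as punched in the paper
t = 0.2e-3        # assumed 0.2 mm spring-steel sheet
tau = 600e6       # assumed ~600 MPa ultimate shear strength for spring steel
print(f"Estimated punch force: {punching_force(d, t, tau):.0f} N")
```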

Relevance:

30.00%

Publisher:

Abstract:

Models of river flow time series are essential for efficient management of a river basin. They help policy makers develop efficient water utilization strategies to maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data. The use of machine learning techniques such as support-vector regression and neural network models is gaining increasing popularity. In this paper we compare the performance of these techniques by applying them to a long-term time series of the inflows into the Krishnaraja Sagar reservoir (KRS) from three tributaries of the river Cauvery. In this study, flow data over a period of 30 years from three different observation points established in the upper Cauvery river sub-basin are analyzed to estimate their contribution to KRS. Specifically, the ANN model uses a multi-layer feed-forward network trained with a back-propagation algorithm, and support vector regression with the epsilon-insensitive loss function is used. Auto-regressive moving average models are also applied to the same data. The performance of the different techniques is compared using metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE) and Nash-Sutcliffe Efficiency (NSE).
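
As a concrete illustration of the comparison pipeline, the sketch below fits an epsilon-insensitive SVR to lagged values of a synthetic flow series and scores it with RMSE, NRMSE and NSE; the KRS inflow data themselves are not reproduced here.

```python
# Sketch (with synthetic data, not the KRS inflow series) of the comparison the
# paper describes: epsilon-insensitive SVR on lagged flows, scored with RMSE,
# NRMSE and Nash-Sutcliffe Efficiency.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
flow = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.standard_normal(600)  # stand-in series

lags = 3
X = np.column_stack([flow[i:-(lags - i)] for i in range(lags)])  # lagged predictors
y = flow[lags:]
split = int(0.8 * len(y))

model = SVR(kernel="rbf", epsilon=0.01, C=10.0)  # epsilon-insensitive loss
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
obs = y[split:]

rmse = np.sqrt(np.mean((obs - pred) ** 2))
nrmse = rmse / (obs.max() - obs.min())
nse = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(f"RMSE={rmse:.3f}  NRMSE={nrmse:.3f}  NSE={nse:.3f}")
```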

Relevance:

30.00%

Publisher:

Abstract:

Affine transformations have proven to be very powerful for loop restructuring due to their ability to model a very wide range of transformations. A single multi-dimensional affine function can represent a long and complex sequence of simpler transformations. Existing affine transformation frameworks like the Pluto algorithm, which include a cost function for modern multicore architectures where coarse-grained parallelism and locality are crucial, consider only a sub-space of transformations to avoid a combinatorial explosion in finding the transformations. The ensuing practical tradeoffs lead to the exclusion of certain useful transformations, in particular, transformation compositions involving loop reversals and loop skewing by negative factors. In this paper, we propose an approach to address this limitation by modeling a much larger space of affine transformations in conjunction with the Pluto algorithm's cost function. We experimentally evaluate both the effect on compilation time and the performance of the generated code. The evaluation shows that our new framework, Pluto+, causes no degradation in performance on any of the Polybench benchmarks. For Lattice Boltzmann Method (LBM) codes with periodic boundary conditions, it provides a mean speedup of 1.33x over Pluto. We also show that Pluto+ does not increase compile times significantly. Experimental results on Polybench show that Pluto+ increases overall polyhedral source-to-source optimization time by only 15%. In cases where it improves execution time significantly, it increased polyhedral optimization time by only 2.04x.
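
To see why negative skewing factors matter, consider a toy dependence analysis: an affine schedule with a skew coefficient of -1 can turn a dependence carried by the inner loop into one carried only by the outer loop, leaving the inner loop parallel. The sketch below works this out with a 2x2 transformation matrix; it is an illustration of the principle, not output from Pluto+.

```python
# Toy illustration (not Pluto+ itself) of why skewing by a negative factor can
# expose parallelism: apply an affine schedule T to a dependence distance vector
# and check that the transformed dependence stays lexicographically positive
# with a zero inner component (so the inner loop carries no dependence).
import numpy as np

def is_lex_positive(v):
    for x in v:
        if x > 0:
            return True
        if x < 0:
            return False
    return False  # zero vector: not a valid dependence distance

dep = np.array([1, 1])          # e.g. A[i][j] reads A[i-1][j-1]
T = np.array([[1, 0],           # i' = i
              [-1, 1]])         # j' = j - i  (skew by factor -1)

dep_t = T @ dep
assert is_lex_positive(dep_t), "transformation would be illegal"
print("transformed dependence:", dep_t)        # -> [1 0]
print("inner loop parallel:", dep_t[-1] == 0)  # True: no dependence carried by j'
```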

Relevance:

30.00%

Publisher:

Abstract:

Electronic structures and dynamics are the key to linking the material composition and structure to functionality and performance.

An essential issue in developing semiconductor devices for photovoltaics is to design materials with optimal band gaps and relative positioning of band levels. Approximate DFT methods have been justified for predicting band gaps from KS/GKS eigenvalues, but the accuracy depends decisively on the choice of XC functional. We show here that for CuInSe2 and CuGaSe2, the parent compounds of the promising CIGS solar cells, conventional LDA and GGA obtain gaps of 0.0-0.01 and 0.02-0.24 eV (versus experimental values of 1.04 and 1.67 eV), while the historically first global hybrid functional, B3PW91, is surprisingly the best, with band gaps of 1.07 and 1.58 eV. Furthermore, we show that for 27 related binary and ternary semiconductors, B3PW91 predicts gaps with a MAD of only 0.09 eV, which is substantially better than all modern hybrid functionals, including B3LYP (MAD of 0.19 eV) and the screened hybrid functional HSE06 (MAD of 0.18 eV).
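
The MAD figures quoted above are simply the mean absolute deviation between predicted and experimental gaps; a minimal sketch, with placeholder numbers rather than the 27-compound data set, is:

```python
# Minimal sketch of the error metric used above: mean absolute deviation (MAD)
# of predicted versus experimental band gaps. The third compound and the
# prediction values are placeholders, not the paper's full data set.
import numpy as np

gaps_exp  = np.array([1.04, 1.67, 3.20])   # experimental gaps (eV), third value illustrative
gaps_b3pw = np.array([1.07, 1.58, 3.30])   # hypothetical B3PW91 predictions (eV)

mad = np.mean(np.abs(gaps_b3pw - gaps_exp))
print(f"MAD = {mad:.2f} eV")
```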

The laboratory performance of CIGS solar cells (>20% efficiency) makes them promising candidate photovoltaic devices. However, there remains little understanding of how defects at the CIGS/CdS interface affect the band offsets and interfacial energies, and hence the performance of manufactured devices. To determine these relationships, we use the B3PW91 hybrid functional of DFT with the AEP method, which we validate to provide very accurate descriptions of both band gaps and band offsets. This confirms the weak dependence of band offsets on surface orientation observed experimentally. We predict that the CBO of the perfect CuInSe2/CdS interface is large, 0.79 eV, which would dramatically degrade performance. Moreover, we show that the band gap widening induced by Ga adjusts only the VBO, and we find that Cd impurities do not significantly affect the CBO. Thus we show that Cu vacancies at the interface play the key role in enabling the tunability of the CBO. We predict that Na improves the device further by electrostatically elevating the valence levels, which decreases the CBO, explaining the observed essential role of Na for high performance. Moreover, we find that K leads to a dramatic decrease in the CBO to 0.05 eV, much better than Na. We suggest that the efficiency of CIGS devices might be improved substantially by tuning the ratio of Na to K, with the improved phase stability provided by Na balancing the phase instability introduced by K. All these defects reduce interfacial stability slightly, but not significantly.

A number of exotic structures have been formed through high-pressure chemistry, but applications have been hindered by difficulties in recovering the high-pressure phase to ambient conditions (i.e., one atmosphere and room temperature). Here we use dispersion-corrected DFT (PBE-ulg flavor) to predict that above 60 GPa the most stable form of N2O (laughing gas in its molecular form) is a 1D polymer with an all-nitrogen backbone analogous to cis-polyacetylene, in which alternate N atoms are bonded (ionic covalent) to O. The analogous trans-polymer is only 0.03-0.10 eV/molecular unit less stable. Upon decompression toward ambient conditions, both polymers relax below 14 GPa to the same stable non-planar trans-polymer, accompanied by possible electronic structure transitions. The predicted phonon spectrum and dissociation kinetics validate the stability of this trans-poly-NNO at ambient conditions, which has potential applications as a new type of conducting polymer with all-nitrogen chains and as a high-energy oxidizer for rocket propulsion. This work illustrates in silico materials discovery, particularly in the realm of extreme conditions.

Modeling non-adiabatic electron dynamics has been a long-standing challenge for computational chemistry and materials science, and the eFF method presents a cost-efficient alternative. However, due to the deficiency of the FSG representation, eFF is limited to low-Z elements with electrons of predominantly s-character. To overcome this, we introduce a formal set of ECP extensions that enable an accurate description of p-block elements. The extensions consist of a model that represents the core electrons together with the nucleus as a single pseudo-particle, described by an FSG and interacting with valence electrons through ECPs. We demonstrate and validate the ECP extensions for complex bonding structures, geometries, and energetics of systems with p-block character (C, O, Al, Si) and apply them to study materials under extreme mechanical loading conditions.

Despite its success, the eFF framework has some limitations, originating from both the design of the Pauli potentials and the FSG representation. To overcome these, we develop a new two-level framework that is a more rigorous and accurate successor to the eFF method. The fundamental level, GHA-QM, is based on a new set of Pauli potentials that yields QM-level accuracy for any FSG-represented electron system. To achieve this, we start from exactly derived energy expressions for the same-spin electron pair and fit a simple functional form, inspired by DFT, against open singlet electron pair curves (H2 systems). Symmetric and asymmetric scaling factors are then introduced at this level to recover the QM total energies of multiple-electron-pair systems from the sum of local interactions. To compensate for the imperfect FSG representation, the AMPERE extension is implemented, which aims at embedding the interactions associated with both the cusp condition and explicit nodal structures. The whole GHA-QM+AMPERE framework is tested on the H element, and the preliminary results are promising.

Relevance:

30.00%

Publisher:

Abstract:

Reef fish distributions are patchy in time and space, with some coral reef habitats supporting higher densities (i.e., aggregations) of fish than others. Identifying and quantifying fish aggregations (particularly during spawning events) are often top priorities for coastal managers. However, the rapid mapping of these aggregations using conventional survey methods (e.g., non-technical SCUBA diving and remotely operated cameras) is limited by depth, visibility and time. Acoustic sensors (i.e., splitbeam and multibeam echosounders) are not constrained by these same limitations, and were used to concurrently map and quantify the location, density and size of reef fish, along with seafloor structure, in two separate locations in the U.S. Virgin Islands. Reef fish aggregations were documented along the shelf edge, an ecologically important ecotone in the region. Fish were grouped into three classes according to body size, and relationships with the benthic seascape were modeled in one area using Boosted Regression Trees. These models were validated in a second area to test their predictive performance in locations where fish had not been mapped. Models predicting the density of large fish (≥29 cm) performed well (i.e., AUC = 0.77). Water depth and standard deviation of depth were the most influential predictors at two spatial scales (100 and 300 m). Models of small (≤11 cm) and medium (12–28 cm) fish performed poorly (i.e., AUC = 0.49 to 0.68) due to the high prevalence (45–79%) of smaller fish in both locations, and the unequal prevalence of smaller fish in the training and validation areas. Integrating acoustic sensors with spatial modeling offers a new and reliable approach to rapidly identify fish aggregations and to predict the density of large fish in un-surveyed locations. This integrative approach will help coastal managers to prioritize sites and focus their limited resources on areas that may be of higher conservation value.
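
The train-in-one-area, validate-in-another workflow described above can be sketched with a boosted-tree classifier and an AUC score. The example below uses synthetic terrain predictors (water depth and its standard deviation), not the Virgin Islands survey data, and scikit-learn's gradient boosting as a stand-in for the Boosted Regression Trees package used in the study.

```python
# Sketch (synthetic data, not the USVI survey) of the modeling step described
# above: a boosted-tree classifier predicting presence of large fish from
# seafloor terrain variables, evaluated with AUC on a hold-out area.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
depth = rng.uniform(20, 60, n)          # water depth (m), illustrative
depth_sd = rng.uniform(0, 5, n)         # std. dev. of depth (rugosity proxy)
X = np.column_stack([depth, depth_sd])
# synthetic "truth": large fish more likely over deep, rugged terrain
p = 1 / (1 + np.exp(-(0.08 * (depth - 40) + 0.5 * (depth_sd - 2.5))))
y = rng.random(n) < p

train, test = slice(0, 700), slice(700, n)   # stand-in for the two survey areas
brt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
brt.fit(X[train], y[train])
auc = roc_auc_score(y[test], brt.predict_proba(X[test])[:, 1])
print(f"AUC = {auc:.2f}")
```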

Relevance:

30.00%

Publisher:

Abstract:

Molecular dynamics (MD) simulations of a polyethersulfone (PES) chain are carried out in the amorphous state using the Dreiding 2.21 force field at four temperatures. Two types of molecular motion, i.e., rotations of phenylene rings and torsions of large segments containing two oxygen atoms, two sulfur atoms, and five phenylene rings on the backbone, are simulated. The modeling results show that successive phenylene rings undergo in-phase cooperative rotations, whereas successive large segments undergo out-of-phase cooperative torsions. By calculating the diffusion coefficient for the phenylene ring rotations, it is found that this rotation contributes to the beta-transition of PES.
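
One common way to obtain such a rotational diffusion coefficient from a trajectory is the Einstein relation for a single rotational degree of freedom, D = <Δφ²>/(2t). The sketch below applies it to a synthetic random walk of unwrapped ring angles; the frame spacing and angle series are assumptions, not data from the simulation.

```python
# Sketch of one way to get a rotational diffusion coefficient from an MD
# trajectory of phenylene-ring dihedral angles (synthetic random-walk data here):
# Einstein relation for one rotational degree of freedom, D = <dphi^2> / (2 t).
import numpy as np

rng = np.random.default_rng(2)
dt_ps = 0.5                                    # assumed trajectory frame spacing (ps)
phi = np.cumsum(rng.normal(0.0, 0.05, 20000))  # unwrapped ring angle (rad), stand-in

max_lag = 2000
lags = np.arange(1, max_lag)
msad = np.array([np.mean((phi[lag:] - phi[:-lag]) ** 2) for lag in lags])

# linear fit of mean-square angular displacement vs time; slope = 2 D
slope, _ = np.polyfit(lags * dt_ps, msad, 1)
D = slope / 2.0
print(f"D ≈ {D:.3e} rad^2/ps")
```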

Relevance:

30.00%

Publisher:

Abstract:

Surface-enhanced Raman scattering (SERS) spectra from molecules adsorbed on the surface of vertically aligned gold nanorod arrays exhibit a variation in enhancement factor (EF) as a function of excitation wavelength that displays little correlation with the elastic optical properties of the surface. The key to understanding this lack of correlation and to obtaining agreement between experimental and calculated EF spectra lies with consideration of randomly distributed, sub-10 nm gaps between nanorods forming the substrate. Intense fields in these enhancement “hot spots” make a dominant contribution to the Raman scattering and have a very different spectral profile to that of the elastic optical response. Detailed modeling of the electric field enhancement at both excitation and scattering wavelengths was used to quantitatively predict both the spectral profile and the magnitude of the observed EF.
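
A common way to quantify this in such modeling is the |E|⁴-type approximation, in which the enhancement factor is the product of the squared field enhancements at the excitation and the Raman-scattered wavelengths. A minimal sketch follows, with illustrative field-enhancement values rather than the paper's computed ones.

```python
# Minimal sketch of the standard SERS enhancement-factor approximation used in
# this kind of modeling: EF ≈ |E(λ_exc)/E0|^2 * |E(λ_scat)/E0|^2, i.e. the field
# enhancement evaluated at both the excitation and the Raman-scattered
# wavelength. The field-enhancement values below are illustrative assumptions.
def sers_ef(enh_excitation, enh_scattered):
    """enh_*: local-field enhancement |E/E0| at the two wavelengths."""
    return (enh_excitation ** 2) * (enh_scattered ** 2)

# e.g. a gap "hot spot" with |E/E0| = 30 at excitation and 25 at the Stokes line
print(f"EF ≈ {sers_ef(30, 25):.2e}")   # ~5.6e5
```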

Relevance:

30.00%

Publisher:

Abstract:

Despite their astrophysical significance as a major contributor to cosmic nucleosynthesis and as distance indicators in observational cosmology, Type Ia supernovae lack a theoretical explanation. Not only is the explosion mechanism complex due to the interaction of (potentially turbulent) hydrodynamics and nuclear reactions, but even the initial conditions for the explosion are unknown. Various progenitor scenarios have been proposed. After summarizing some general aspects of Type Ia supernova modeling, recent simulations of our group are discussed. With a modeling sequence starting (in some cases) from the progenitor evolution and following the explosion hydrodynamics and nucleosynthesis, we connect to the formation of the observables through radiation transport in the ejecta cloud. This allows us to analyze several models and to compare their outcomes with observations. While pure deflagrations of Chandrasekhar-mass white dwarfs and violent mergers of two white dwarfs lead to peculiar events (that may, however, find their correspondence in the observed sample of SNe Ia), only delayed detonations in Chandrasekhar-mass white dwarfs or sub-Chandrasekhar-mass explosions remain promising candidates for explaining normal Type Ia supernovae. © 2011 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

We present a study of thermal properties, namely the melting, cold crystallization, and glass transition temperatures as well as heat capacities from 293.15 K to 323.15 K, of nine in-house synthesized protic ionic liquids based on 3-(alkoxymethyl)-1H-imidazol-3-ium salicylate ([H-Im-C1OCn][Sal]) with n = 3–11. The 3D structures, surface charge distributions and COSMO volumes of all investigated ions are obtained by combining DFT calculations with the COSMO-RS methodology. The heat capacity data sets as a function of temperature of the 3-(alkoxymethyl)-1H-imidazol-3-ium salicylates are then predicted using the methodology originally proposed for ionic liquids by Ge et al. The 3-(alkoxymethyl)-1H-imidazol-3-ium salicylate based ionic liquids present specific heat capacities that are in many cases higher than those of other ionic liquids, which makes them suitable as heat storage media and for heat transfer processes. It was found experimentally that the heat capacity increases linearly with increasing alkyl chain length of the alkoxymethyl group, as expected and as predicted using the Ge et al. method, with an overall relative absolute deviation close to 3.2% for temperatures up to 323.15 K.
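
The agreement metric quoted above is the overall relative absolute deviation between predicted and measured heat capacities; a minimal sketch, with placeholder values rather than the paper's data, is:

```python
# Sketch of the agreement metric quoted above: the overall relative absolute
# deviation between predicted and measured heat capacities. The values are
# placeholders, not the paper's data.
import numpy as np

cp_exp  = np.array([520.0, 545.0, 570.0, 595.0])   # J mol^-1 K^-1, illustrative measurements
cp_pred = np.array([505.0, 560.0, 585.0, 600.0])   # hypothetical group-contribution estimates

rad = 100.0 * np.mean(np.abs(cp_pred - cp_exp) / cp_exp)
print(f"Overall relative absolute deviation ≈ {rad:.1f} %")
```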

Relevance:

30.00%

Publisher:

Abstract:

We present an extensive optical and near-infrared photometric and spectroscopic campaign on the Type IIP supernova SN 2012aw. The data set densely covers the evolution of SN 2012aw from shortly after the explosion through the end of the photospheric phase, with two additional photometric observations collected during the nebular phase to fit the radioactive tail and estimate the 56Ni mass. Also included in our analysis are the previously published Swift UV data, therefore providing a complete view of the ultraviolet-optical-infrared evolution of the photospheric phase. On the basis of our data set, we estimate all the relevant physical parameters of SN 2012aw with our radiation-hydrodynamics code: envelope mass M_env ∼ 20 M⊙, progenitor radius R ∼ 3 × 10^13 cm (∼430 R⊙), explosion energy E ∼ 1.5 foe, and initial 56Ni mass ∼0.06 M⊙. These mass and radius values are reasonably well supported by independent evolutionary models of the progenitor, and may suggest a progenitor mass higher than the observational limit of 16.5 ± 1.5 M⊙ for Type IIP events.

Relevance:

30.00%

Publisher:

Abstract:

Over 1 million km² of seafloor experience permanent low-oxygen conditions within oxygen minimum zones (OMZs). OMZs are predicted to grow as a consequence of climate change, potentially affecting oceanic biogeochemical cycles. The Arabian Sea OMZ impinges upon the western Indian continental margin at bathyal depths (150-1500 m), producing a strong depth-dependent oxygen gradient at the sea floor. The influence of the OMZ upon the short-term processing of organic matter by sediment ecosystems was investigated using in situ stable isotope pulse-chase experiments. These deployed doses of 13C:15N-labeled organic matter onto the sediment surface at four stations across the OMZ (water depth 540-1100 m; [O2] = 0.35-15 μM). In order to prevent experimentally induced anoxia, the mesocosms were not sealed. The 13C and 15N labels were traced into sediment, bacteria and fauna, and 13C into sediment porewater DIC and DOC. However, the DIC and DOC flux to the water column could not be measured, limiting our capacity to obtain a mass balance for C in each experimental mesocosm. Linear Inverse Modeling (LIM) provides a method to obtain a mass-balanced model of carbon flow that integrates stable-isotope tracer data with community biomass and biogeochemical flux data from a range of sources. Here we present an adaptation of the LIM methodology used to investigate how ecosystem structure influenced carbon flow across the Indian margin OMZ. We demonstrate how oxygen conditions affect food-web complexity, affecting the linkages between the bacteria, foraminifera and metazoan fauna, and their contributions to benthic respiration. The food-web models demonstrate how changes in ecosystem complexity are associated with oxygen availability across the OMZ and allow us to obtain a complete carbon budget for the stations where stable-isotope labelling experiments were conducted.
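
The core of the LIM approach is to estimate unknown flows from a set of linear mass-balance and measurement constraints. The sketch below solves a tiny, purely illustrative three-flow version as a bounded least-squares problem; it is not the authors' food-web model.

```python
# Minimal sketch of the linear inverse modeling (LIM) idea described above
# (not the authors' model): unknown carbon flows x must satisfy approximate
# mass-balance equations A x ≈ b subject to non-negativity, solved here as a
# bounded least-squares problem. The tiny 3-flow food web is purely illustrative.
import numpy as np
from scipy.optimize import lsq_linear

# flows: x = [OM -> bacteria, bacteria -> fauna, fauna -> respiration]
# rows: measured constraints (e.g. label uptake, respiration), illustrative values
A = np.array([
    [1.0, 0.0, 0.0],    # bacterial uptake of labelled OM
    [0.0, 1.0, 0.0],    # transfer from bacteria to fauna
    [0.0, 0.3, 1.0],    # total benthic respiration
])
b = np.array([5.0, 1.2, 2.0])   # mg C m^-2 d^-1, hypothetical measurements

sol = lsq_linear(A, b, bounds=(0.0, np.inf))
print("estimated flows:", np.round(sol.x, 2))
```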

Relevance:

30.00%

Publisher:

Abstract:

In this paper we study the well-posedness of a fourth-order parabolic equation modeling epitaxial thin film growth. Using Kato's method [1], [2] and [3], we establish existence, uniqueness and regularity of the solution to the model in suitable spaces, namely C^0([0,T]; L^p(Ω)) for a suitable range of p, with 1 < α < 2, n ∈ N and n ≥ 2. We also show global existence of solutions to the nonlinear parabolic equation for small initial data. Our main tools are L^p–L^q estimates, the regularization property of the linear part e^{-tΔ²}, and successive approximations. Furthermore, we illustrate the qualitative behavior of the approximate solution through some numerical simulations. The approximate solutions exhibit some favorable absorption properties of the model, which highlight the stabilizing effect of our specific formulation of the source term associated with the upward hopping of atoms. Consequently, the solutions describe well some experimentally observed phenomena that characterize the growth of thin films, such as grain coarsening, island formation and thickness growth.
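
For readers unfamiliar with Kato's method, the construction rests on the mild (integral) formulation of the equation and its successive approximations. The abstract does not spell out the nonlinearity, so the schematic form below uses a placeholder N(u); it only illustrates the structure on which the L^p–L^q estimates act.

```latex
% Generic mild formulation behind Kato's method for a fourth-order parabolic
% equation u_t + \Delta^2 u = N(u); the specific nonlinearity N of the epitaxial
% growth model is not given in the abstract, so N is a placeholder here.
\begin{aligned}
  u(t)     &= e^{-t\Delta^{2}} u_{0}
              + \int_{0}^{t} e^{-(t-s)\Delta^{2}}\, N\!\big(u(s)\big)\, \mathrm{d}s, \\
  u_{k+1}(t) &= e^{-t\Delta^{2}} u_{0}
              + \int_{0}^{t} e^{-(t-s)\Delta^{2}}\, N\!\big(u_{k}(s)\big)\, \mathrm{d}s,
  \qquad u_{0}(t) \equiv e^{-t\Delta^{2}} u_{0}.
\end{aligned}
```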

Relevance:

30.00%

Publisher:

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land-use patterns. An essential methodology to study and quantify such interactions is provided by land-use models. By applying land-use models, it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition. On the regional scale in particular, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation. These features enable efficient development, testing and use of integrated land-use models. On the system side, SITE provides generic data structures (grid, grid cells, attributes, etc.) and manages them. By means of a scripting language (Python) that has been extended with language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting language interpreter is embedded in SITE. The integration of sub-models can be achieved via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, particular emphasis was placed on expandability, maintainability and usability. Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, the socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics over the historical period 1981 to 2002. Analogously, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable, even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map-comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as the objective function. The time period for the calibration ranged from 1981 to 2002. For this period, the respective reference land-use maps were compiled. It could be shown that efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge about the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
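
The figure-of-merit measure used as the calibration objective compares observed and simulated change against the initial map: hits divided by the sum of hits, misses and false alarms. The sketch below computes a simplified two-class version on toy 3x3 maps; it is only an illustration of the measure, not SITE's implementation.

```python
# Sketch of the "figure of merit" map-comparison measure used as the calibration
# objective: hits / (hits + misses + false alarms), computed from an initial map,
# a reference (observed) map and a simulated map. The 3x3 maps are toy data.
import numpy as np

initial   = np.array([[0, 0, 1], [0, 1, 1], [0, 0, 0]])   # land-use class at t0
reference = np.array([[0, 1, 1], [0, 1, 1], [1, 0, 0]])   # observed at t1
simulated = np.array([[0, 1, 1], [0, 1, 1], [0, 1, 0]])   # model output at t1

obs_change = initial != reference
sim_change = initial != simulated

hits         = np.sum(obs_change & sim_change & (reference == simulated))
misses       = np.sum(obs_change & ~sim_change)
false_alarms = np.sum(~obs_change & sim_change)

fom = hits / (hits + misses + false_alarms)
print(f"Figure of merit = {fom:.2f}")
```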

Relevance:

30.00%

Publisher:

Abstract:

The rapid growth in high-data-rate communication systems has introduced new, highly spectrally efficient modulation techniques and standards such as LTE-A (Long Term Evolution-Advanced) for 4G (4th generation) systems. These techniques provide broader bandwidth but introduce a high peak-to-average power ratio (PAR) problem at the high power amplifier (HPA) of the communication system base transceiver station (BTS). To avoid spectral spreading due to high PAR, a stringent linearity requirement is imposed, which forces the HPA to operate at large back-off power at the expense of power efficiency. Consequently, high-power devices are fundamental in HPAs for high linearity and efficiency. Recent development in wide-bandgap power devices, in particular the AlGaN/GaN HEMT, has offered higher power levels with a superior linearity-efficiency trade-off for microwave communications. For a cost-effective HPA design-to-production cycle, rigorous computer-aided design (CAD) models of AlGaN/GaN HEMTs are essential to reflect the real response with increasing power level and channel temperature. Therefore, a large-signal electrothermal modeling procedure for large-size AlGaN/GaN HEMTs is proposed. The HEMT structure analysis, characterization, data processing, model extraction and model implementation phases are covered in this thesis, including the trapping and self-heating dispersion that accounts for nonlinear drain-current collapse. The small-signal model is extracted using the 22-element modeling procedure developed in our department. The intrinsic large-signal model is investigated in depth in conjunction with linearity prediction. The accuracy of the nonlinear drain current has been enhanced by addressing several issues such as trapping and self-heating characterization. Also, the thermal profile of the HEMT structure has been investigated, and the corresponding thermal resistance has been extracted through thermal simulation and chuck-controlled-temperature pulsed I(V) and static DC measurements. A higher-order equivalent thermal model is extracted and implemented in the HEMT large-signal model to accurately estimate the instantaneous channel temperature. Moreover, trapping and self-heating transients have been characterized through transient measurements. The obtained time constants are represented by equivalent sub-circuits and integrated into the nonlinear drain current implementation to account for the dynamics of complex communication signals. Verification of this table-based large-size large-signal electrothermal model has shown high accuracy in terms of output power, gain, efficiency and nonlinearity prediction with respect to standard large-signal test signals.
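
The role of the higher-order equivalent thermal model is to convert instantaneous dissipated power into an instantaneous channel-temperature rise. The sketch below integrates a two-cell Foster RC network driven by a pulsed power waveform; the thermal resistances, time constants and power profile are illustrative assumptions, not extracted device data.

```python
# Sketch of the idea behind the higher-order equivalent thermal model: a Foster
# RC network converts instantaneous dissipated power into a channel-temperature
# rise. Rth/tau values and the power waveform are illustrative, not extracted data.
import numpy as np

rth   = [8.0, 4.0]         # K/W per Foster cell (assumed)
tau   = [1e-4, 5e-3]       # thermal time constants in seconds (assumed)
t_amb = 25.0               # baseplate temperature, degC

dt = 1e-5
t = np.arange(0.0, 0.05, dt)
p_diss = 2.0 + 1.0 * (np.sin(2 * np.pi * 100 * t) > 0)   # pulsed dissipation, W

dT = np.zeros((len(rth), len(t)))
for k, (r, tc) in enumerate(zip(rth, tau)):
    for n in range(1, len(t)):
        # backward-Euler update of each first-order thermal cell:
        # tau * d(dT)/dt = r * P - dT
        dT[k, n] = (dT[k, n - 1] + dt * r * p_diss[n] / tc) / (1 + dt / tc)

t_channel = t_amb + dT.sum(axis=0)
print(f"peak channel temperature ≈ {t_channel.max():.1f} °C")
```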