993 resultados para Modeling Languages


Relevância:

20.00% 20.00%

Publicador:

Resumo:

The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California, recorded at regional (200-1400 km) and teleseismic (> 30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers being added if necessary, for example, in a basin with a low-velocity lid.
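The ray-summation idea behind this forward modeling can be sketched as a sum of scaled, delayed copies of the source time function, one per ray arrival. The amplitudes, delays, and pulse shape below are hypothetical illustrations, not values from the study:

```python
import numpy as np

def ray_sum_synthetic(rays, stf, dt, n):
    """Build a synthetic seismogram by summing scaled, delayed copies of a
    source time function (stf); rays is a list of (amplitude, delay_s) pairs."""
    seis = np.zeros(n)
    for amp, delay in rays:
        i = int(round(delay / dt))
        m = min(len(stf), n - i)
        if m > 0:
            seis[i:i + m] += amp * stf[:m]
    return seis

dt = 0.05                                                    # sample interval, s
stf = np.convolve([1.0, 1.0, 1.0], [1.0, 1.0, 1.0]) / 9.0    # simple triangular pulse
rays = [(1.0, 1.0), (-0.4, 1.6), (0.2, 2.3)]                 # hypothetical arrivals
seis = ray_sum_synthetic(rays, stf, dt, n=100)
```

In practice each ray's amplitude would carry the radiation pattern, geometric spreading, and reflection/transmission coefficients for the layered model, and the result would be convolved with the instrument response.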

The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks of the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with those that occurred there). These earthquakes are predominantly thrust faulting events, with the average strike being east-west but with many variations. Of the six earthquakes that had sufficient short-period data to accurately determine the source time history, five were complex events; that is, they could not be modeled as a simple point source but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases, the subevents appear to be the same, but small variations could not be ruled out.

The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.

In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. P_(nl) waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.

In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model due to the sparse data set. It has been noticed that earthquakes that occur near each other often produce similar waveforms, implying similar source parameters. By comparing recent, well-studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.

The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.

Historic earthquakes in the western Imperial Valley were also studied. These events include the 1937, 1942, and 1954 earthquakes. The earthquakes were relocated by comparing S-P and R-S times to recent earthquakes. It was found that only minor changes in the epicenters were required, but that the Coyote Mountain earthquake may have been more severely mislocated. The waveforms, as expected, indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations that recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake, although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected values. An aftershock of the 1942 earthquake appears to be larger than previously thought.

Resumo:

Electronic structures and dynamics are the key to linking the material composition and structure to functionality and performance.

An essential issue in developing semiconductor devices for photovoltaics is to design materials with optimal band gaps and relative positioning of band levels. Approximate DFT methods have been justified for predicting band gaps from KS/GKS eigenvalues, but the accuracy depends decisively on the choice of XC functional. We show here that for CuInSe2 and CuGaSe2, the parent compounds of the promising CIGS solar cells, conventional LDA and GGA obtain gaps of 0.0-0.01 and 0.02-0.24 eV (versus experimental values of 1.04 and 1.67 eV), while the historically first global hybrid functional, B3PW91, is surprisingly the best, with band gaps of 1.07 and 1.58 eV. Furthermore, we show that for 27 related binary and ternary semiconductors, B3PW91 predicts gaps with a mean absolute deviation (MAD) of only 0.09 eV, substantially better than all modern hybrid functionals, including B3LYP (MAD of 0.19 eV) and the screened hybrid functional HSE06 (MAD of 0.18 eV).
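The MAD metric used above is just the mean absolute difference between predicted and experimental gaps. The two (B3PW91, experiment) pairs below are the values quoted in the text; note that the 0.09 eV MAD reported there is over all 27 compounds, not just these two:

```python
# Mean absolute deviation (MAD) between predicted and experimental band gaps (eV).
gaps_eV = {
    "CuInSe2": (1.07, 1.04),   # (B3PW91 prediction, experiment)
    "CuGaSe2": (1.58, 1.67),
}
mad = sum(abs(pred - expt) for pred, expt in gaps_eV.values()) / len(gaps_eV)
print(f"MAD = {mad:.2f} eV")   # 0.06 eV over these two compounds
```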

The laboratory performance of CIGS solar cells (> 20% efficiency) makes them promising candidate photovoltaic devices. However, there remains little understanding of how defects at the CIGS/CdS interface affect the band offsets and interfacial energies, and hence the performance of manufactured devices. To determine these relationships, we use the B3PW91 hybrid functional of DFT with the AEP method, which we validate to provide very accurate descriptions of both band gaps and band offsets. This confirms the weak dependence of band offsets on surface orientation observed experimentally. We predict that the conduction band offset (CBO) of the perfect CuInSe2/CdS interface is large, 0.79 eV, which would dramatically degrade performance. Moreover, we show that the band gap widening induced by Ga adjusts only the valence band offset (VBO), and we find that Cd impurities do not significantly affect the CBO. Thus we show that Cu vacancies at the interface play the key role in enabling the tunability of the CBO. We predict that Na further improves the CBO by electrostatically elevating the valence levels to decrease the CBO, explaining the observed essential role of Na for high performance. Moreover, we find that K leads to a dramatic decrease in the CBO to 0.05 eV, much better than Na. We suggest that the efficiency of CIGS devices might be improved substantially by tuning the ratio of Na to K, with the improved phase stability from Na balancing the phase instability from K. All these defects reduce interfacial stability slightly, but not significantly.

A number of exotic structures have been formed through high pressure chemistry, but applications have been hindered by difficulties in recovering the high pressure phase to ambient conditions (i.e., one atmosphere and room temperature). Here we use dispersion-corrected DFT (PBE-ulg flavor) to predict that above 60 GPa the most stable form of N2O (the laughing gas in its molecular form) is a 1D polymer with an all-nitrogen backbone analogous to cis-polyacetylene, in which alternate N atoms are bonded (ionic covalent) to O. The analogous trans-polymer is only 0.03-0.10 eV/molecular unit less stable. Upon release toward ambient conditions, both polymers relax below 14 GPa to the same stable non-planar trans-polymer, accompanied by possible electronic structure transitions. The predicted phonon spectrum and dissociation kinetics validate the stability of this trans-poly-NNO at ambient conditions, which has potential applications as a new type of conducting polymer with all-nitrogen chains and as a high-energy oxidizer for rocket propulsion. This work illustrates in silico materials discovery, particularly in the realm of extreme conditions.

Modeling non-adiabatic electron dynamics has been a long-standing challenge for computational chemistry and materials science, and the eFF method presents a cost-efficient alternative. However, due to deficiencies of the FSG representation, eFF is limited to low-Z elements with electrons of predominantly s-character. To overcome this, we introduce a formal set of ECP extensions that enable an accurate description of p-block elements. In these extensions, the core electrons and the nucleus are represented together as a single FSG pseudo-particle that interacts with the valence electrons through ECPs. We demonstrate and validate the ECP extensions for complex bonding structures, geometries, and energetics of systems with p-block character (C, O, Al, Si) and apply them to study materials under extreme mechanical loading conditions.

Despite its success, the eFF framework has some limitations, originating from both the design of the Pauli potentials and the FSG representation. To overcome these, we develop a new two-level framework that is a more rigorous and accurate successor to the eFF method. The fundamental level, GHA-QM, is based on a new set of Pauli potentials that renders exact QM accuracy for any FSG-represented electron system. To achieve this, we start from exactly derived energy expressions for the same-spin electron pair and fit a simple functional form, inspired by DFT, against open-singlet electron pair curves (H2 systems). Symmetric and asymmetric scaling factors are then introduced at this level to recover the QM total energies of multiple-electron-pair systems from the sum of local interactions. To compensate for the imperfect FSG representation, the AMPERE extension is implemented, which aims at embedding the interactions associated with both the cusp condition and explicit nodal structures. The whole GHA-QM+AMPERE framework is tested on hydrogen systems, and the preliminary results are promising.

Resumo:

n-Heptane/air premixed turbulent flames in the high-Karlovitz portion of the thin reaction zone regime are characterized and modeled in this thesis using Direct Numerical Simulations (DNS) with detailed chemistry. To perform these simulations, a time-integration scheme that can efficiently handle the stiffness of the equations is developed first. A first simulation with unity Lewis numbers is considered to assess the effect of turbulence on the flame in the absence of differential diffusion. A second simulation with non-unity Lewis numbers is considered to study how turbulence affects differential diffusion. In the absence of differential diffusion, minimal departure from the 1D unstretched flame structure (species vs. temperature profiles) is observed. In the non-unity Lewis number case, the flame structure lies between that of 1D unstretched flames with "laminar" non-unity Lewis numbers and that with unity Lewis numbers. This is attributed to effective Lewis numbers resulting from intense turbulent mixing, and a first model is proposed. The reaction zone is shown to be thin for both flames, yet large chemical source term fluctuations are observed. The fuel consumption rate is found to be only weakly correlated with stretch, although local extinctions in the non-unity Lewis number case are well correlated with high curvature. These results explain the observed turbulent flame speeds. Other variables that correlate better with the fuel burning rate are identified through a coordinate transformation. It is shown that the unity Lewis number turbulent flames can be accurately described by a set of 1D (in progress variable space) flamelet equations parameterized by the dissipation rate of the progress variable. In the non-unity Lewis number flames, the flamelet equations suggest a dependence on a second parameter, the diffusion of the progress variable. A new tabulation approach is proposed for the simulation of such flames with these dimensionally-reduced manifolds.
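The unity-Lewis-number flamelet description referred to above can be written, in one standard form used in flamelet/progress-variable modeling (the symbols here are generic assumptions, not necessarily the thesis's notation):

```latex
% Steady flamelet equation in progress-variable space, parameterized by the
% progress-variable dissipation rate \chi_c (unity Lewis numbers assumed):
\rho \, \frac{\chi_c}{2} \, \frac{\partial^2 \psi}{\partial c^2}
  + \dot{\omega}_\psi = 0,
\qquad
\chi_c \equiv 2 D \, \lvert \nabla c \rvert^2 ,
```

where ψ is any species mass fraction or the temperature, c the progress variable, ω̇_ψ the corresponding chemical source term, and D the progress-variable diffusivity; the second parameter mentioned for the non-unity Lewis number case would enter through an additional diffusion term in c-space.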

Resumo:

This thesis comprises three projects within the topic of tropical atmospheric dynamics. First, I analyze observations of thermal radiation from Saturn's atmosphere and, from them, determine the latitudinal distribution of ammonia vapor near the 1.5-bar pressure level. The most prominent feature of the observations is the high brightness temperature of Saturn's subtropical latitudes on either side of the equator. After comparing the observations to a microwave radiative transfer model, I find that these subtropical bands require very low ammonia relative humidity below the ammonia cloud layer to achieve the high brightness temperatures observed. I suggest that these bright subtropical bands represent dry zones created by a meridionally overturning circulation.

Second, I use a dry atmospheric general circulation model to study equatorial superrotation in terrestrial atmospheres. A wide range of atmospheres are simulated by varying three parameters: the pole-equator radiative equilibrium temperature contrast, the convective lapse rate, and the planetary rotation rate. A scaling theory is developed that establishes conditions under which superrotation occurs in terrestrial atmospheres. The scaling arguments show that superrotation is favored when the off-equatorial baroclinicity and planetary rotation rates are low. Similarly, superrotation is favored when the convective heating strengthens, which may account for the superrotation seen in extreme global-warming simulations.

Third, I use a moist slab-ocean general circulation model to study the impact of a zonally-symmetric continent on the distribution of monsoonal precipitation. I show that adding a hemispheric asymmetry in surface heat capacity is sufficient to cause symmetry breaking in both the spatial and temporal distribution of precipitation. This spatial symmetry breaking can be understood from a large-scale energetic perspective, while the temporal symmetry breaking requires consideration of the dynamical response to the heat capacity asymmetry and the seasonal cycle of insolation. Interestingly, the idealized monsoonal precipitation bears resemblance to precipitation in the Indian monsoon sector, suggesting that this work may provide insight into the causes of the temporally asymmetric distribution of precipitation over southeast Asia.

Resumo:

Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers on certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets.
Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
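The proper orthogonal decomposition used above is commonly computed via an SVD of a snapshot matrix: columns are flow snapshots, left singular vectors are the POD modes, and squared singular values rank the modes by energy. The data below are synthetic stand-ins, not the large-eddy-simulation ensembles of the thesis:

```python
import numpy as np

# POD via the SVD of a (mean-subtracted) snapshot matrix X:
# each column of X is one flow snapshot at the grid points.
rng = np.random.default_rng(0)
n_points, n_snapshots = 200, 50
X = rng.standard_normal((n_points, n_snapshots))   # synthetic snapshot data
X = X - X.mean(axis=1, keepdims=True)              # subtract the mean field

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # columns of U = POD modes
energy = s**2 / np.sum(s**2)                       # fractional energy per mode
```

The modes come out orthonormal and sorted by energy, which is what makes POD a natural basis for extracting the most energetic coherent structures.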

Resumo:

A general definition of interpreted formal language is presented. The notion “is a part of” is formally developed and models of the resulting part theory are used as universes of discourse of the formal languages. It is shown that certain Boolean algebras are models of part theory.

With this development, the structure imposed upon the universe of discourse by a formal language is characterized by a group of automorphisms of the model of part theory. If the model of part theory is thought of as a static world, the automorphisms become the changes which take place in the world. Using this formalism, we discuss a notion of abstraction and the concept of definability. A Galois connection between the groups characterizing formal languages and a language-like closure over the groups is determined.

It is shown that a set theory can be developed within models of part theory such that certain strong formal languages can be said to determine their own set theory. This development is such that for a given formal language whose universe of discourse is a model of part theory, a set theory can be imbedded as a submodel of part theory so that the formal language has parts which are sets as its discursive entities.
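The claim that certain Boolean algebras model part theory can be made concrete with the classical observation (due to Tarski) that a complete Boolean algebra with its zero element removed, ordered by ≤, satisfies the mereological axioms; the axioms sketched below are a standard formulation, not necessarily the exact part theory of this work:

```latex
% Read  x \leq y  as "x is a part of y" on  B \setminus \{0\}  for a
% complete Boolean algebra B. Parthood is then a partial order:
x \leq x, \qquad
(x \leq y \wedge y \leq x) \Rightarrow x = y, \qquad
(x \leq y \wedge y \leq z) \Rightarrow x \leq z,
% and strong supplementation holds: if y is not a part of x, then some
% part of y (namely the meet of y with the complement of x) is disjoint
% from x:
y \not\leq x \;\Rightarrow\; \exists z \,\bigl( z \leq y
  \ \text{and}\ z \wedge x = 0 \bigr).
```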

Resumo:

The main objective of this doctoral thesis is, first, to offer an alternative reconstruction of Proto-Ainu and, second, to apply concepts from holistic diachronic typology in order to discern some evolutionary pattern that helps answer the question: why is the Ainu language the way it is in its geolinguistic context (an SOV language with prefixes), when in the Eurasian region the normal profile is 'SOV language with suffixes'? In short, the aim is to explore the possibilities that holistic diachronic typology, combined with more traditional methods, offers for investigating the prehistoric stages of language isolates, that is, languages without known relatives, such as Ainu, Basque, Zuni, or Burushaski. The work is divided into three major parts with a total of eight chapters, an appendix with the new Proto-Ainu reconstructions, and the bibliography. The first part opens with Chapter 1, which briefly presents the Ainu languages and their philology. Chapter 2 is devoted to the reconstruction of Proto-Ainu phonology. The pioneering reconstruction is that of A. Vovin (1992), which in fact serves as the basis on which new elements are expanded, corrected, or modified. Chapter 3 describes the historical morphology of the Ainu languages. Chapter 4 investigates this option within a broader framework whose aim is to analyze the elementary patterns of word formation. Chapter 5, which opens the second part, presents a diachronic typological hypothesis, due to P. Donegan and D. Stampe, with which specialists in Munda and Mon-Khmer languages have been able to arrive at a reconstruction of Proto-Austroasiatic according to which the agglutinative type of the Munda languages would be secondary to the original monosyllabic type of the Mon-Khmer languages.
Chapter 6 returns to the traditional perspective of geographical linguistics, without forgetting some of the typological considerations raised in the previous chapter (the fact that the Donegan-Stampe hypothesis does not work for Ainu does not mean that diachronic typology cannot still be useful). Chapter 7 presents some inconsistencies that arise when the supposed archaeological evidence is combined with the linguistic scenario described in earlier chapters. The general conclusions are presented in Chapter 8. The appendix is a comparative table with the two reconstructions of Proto-Ainu available to date, namely those proposed by A. Vovin in his seminal 1992 study and in Chapter 3 of the present thesis. The table includes 686 reconstructions (cross-referencing with Vovin is straightforward, since both are ordered alphabetically).

Resumo:

In this paper we study a simple mathematical model of a bilingual community in which all agents are fluent in the majority language but only a fraction of the population has some degree of proficiency in the minority language. We investigate how different distributions of proficiency, combined with the speakers' attitudes towards or against the minority language, may influence its use in pair conversations.
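A minimal agent-based sketch of such a setting can be written in a few lines: everyone speaks the majority language, a fraction of agents is proficient in the minority language, and a random pair uses the minority language only when both members are proficient and attitudes favor it. The function name, parameters, and rule below are illustrative assumptions, not the paper's actual model:

```python
import random

def minority_use_fraction(n_agents, p_proficient, attitude, n_pairs, seed=0):
    """Fraction of random pair conversations held in the minority language.
    A pair can use it only when both speakers are proficient, and then does
    so with probability `attitude` (favorability toward the language)."""
    rng = random.Random(seed)
    proficient = [rng.random() < p_proficient for _ in range(n_agents)]
    uses = 0
    for _ in range(n_pairs):
        a, b = rng.sample(range(n_agents), 2)       # pick a conversation pair
        if proficient[a] and proficient[b] and rng.random() < attitude:
            uses += 1
    return uses / n_pairs

# With half the population proficient, roughly attitude * 0.25 of random
# pair conversations end up in the minority language.
frac = minority_use_fraction(n_agents=1000, p_proficient=0.5,
                             attitude=0.8, n_pairs=20000)
```

Varying the proficiency distribution and the attitude parameter then lets one explore how minority-language use responds to each, which is the kind of question the paper investigates.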

Resumo:

In the recent history of psychology and cognitive neuroscience, the notion of habit has been reduced to a stimulus-triggered response probability correlation. In this paper we use a computational model to present an alternative theoretical view (with some philosophical implications), where habits are seen as self-maintaining patterns of behavior that share properties in common with self-maintaining biological processes and that inhabit a complex ecological context, including the presence and influence of other habits. Far from mechanical automatisms, this organismic and self-organizing concept of habit can overcome the dominant atomistic and statistical conceptions, and the high-temporal-resolution effects of situatedness, embodiment, and sensorimotor loops emerge as playing a more central, subtle, and complex role in the organization of behavior. The model is based on a novel "iterant deformable sensorimotor medium (IDSM)," designed such that trajectories taken through sensorimotor space increase the likelihood that similar trajectories will be taken in the future. We couple the IDSM to sensors and motors of a simulated robot and show that, under certain conditions, the IDSM forms self-maintaining patterns of activity that operate across the IDSM, the robot's body, and the environment. We present various environments and the resulting habits that form in them. The model acts as an abstraction of habits at a much-needed sensorimotor "meso-scale" between microscopic neuron-based models and macroscopic descriptions of behavior. Finally, we discuss how this model and extensions of it can help us understand aspects of behavioral self-organization, historicity, and autonomy that remain out of the scope of contemporary representationalist frameworks.
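The core reinforcement principle of the IDSM (trajectories taken make similar trajectories more likely) can be caricatured with a discrete toy: every transition between discretized sensorimotor states gains weight when taken. This is a minimal sketch of the self-reinforcement idea only, not the paper's actual continuous, deformable medium:

```python
import random

def run_toy_idsm(n_states, steps, seed=0):
    """Toy abstraction of IDSM-style reinforcement on a discretized
    sensorimotor space: each transition taken gains weight, so the
    system's own history shapes which trajectories recur."""
    rng = random.Random(seed)
    weight = [[1.0] * n_states for _ in range(n_states)]  # transition weights
    state, visits = 0, []
    for _ in range(steps):
        nxt = rng.choices(range(n_states), weights=weight[state], k=1)[0]
        weight[state][nxt] += 1.0        # reinforce the transition just taken
        state = nxt
        visits.append(state)
    return weight, visits

weight, visits = run_toy_idsm(n_states=5, steps=2000)
```

Over time such a system tends to settle into recurring loops of transitions: a crude analogue of a self-maintaining pattern of behavior.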
