951 results for Time-Fractional Diffusion-Wave Problem


Relevance: 100.00%

Publisher:

Abstract:

In this paper we present a new, compact derivation of state-space formulae for the so-called discretisation-based solution of the H∞ sampled-data control problem. Our approach is based on the established technique of continuous time-lifting, which is used to isometrically map the continuous-time, linear, periodically time-varying, sampled-data problem to a discrete-time, linear, time-invariant problem. State-space formulae are derived for the equivalent discrete-time problem by solving a set of two-point boundary-value problems. The formulae accommodate a direct feed-through term from the disturbance inputs to the controlled outputs of the original plant and are simple, requiring the computation of only a single matrix exponential. It is also shown that the resultant formulae can easily be restructured to give a numerically robust algorithm for computing the state-space matrices. © 1997 Elsevier Science Ltd. All rights reserved.

Abstract:

D. Liang from Cambridge University explains the shallow water equations and their application to dam-break and other steep-fronted flow modelling. The equations assume that the horizontal scale of the flow is much greater than the vertical scale, so the flow is confined to a thin layer; the vertical momentum is therefore insignificant and the pressure distribution is hydrostatic. The left-hand sides of the two momentum equations represent the acceleration of a fluid particle in the horizontal plane. If the fluid acceleration is ignored, the two momentum equations simplify into the so-called diffusion wave equations. In contrast to the SWE approach, modelling floods with the Navier-Stokes equations is much less convenient: in conventional computational fluid dynamics (CFD), cumbersome treatments are needed to capture the shape of the free surface accurately. The SWEs are derived under the assumptions of a small vertical velocity component, a smooth water surface, gradual variation and a hydrostatic pressure distribution.
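The simplification described above can be written out explicitly for the one-dimensional case (a standard textbook form, with h the depth, u the velocity, z_b the bed elevation and S_f the friction slope; not quoted from the talk):

```latex
% 1-D shallow water momentum balance: particle acceleration (left)
% against the water-surface gradient and friction (right).
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  = -g\,\frac{\partial (h + z_b)}{\partial x} - g S_f
% Neglecting the acceleration terms on the left gives the
% diffusion-wave approximation: the water-surface slope balances friction.
0 = -g\,\frac{\partial (h + z_b)}{\partial x} - g S_f
\qquad\Longleftrightarrow\qquad
\frac{\partial (h + z_b)}{\partial x} = -S_f
```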

Abstract:

In real-time CORBA systems that use the deadline-monotonic (DM) scheduling algorithm and the Distributed Priority Ceiling Protocol (DPCP) for resource access control, when the number of local priorities on a node is insufficient, multiple global priorities must be mapped onto a single local priority. This requires: (1) a necessary and sufficient condition for deciding task schedulability after the mapping; (2) a mapping algorithm of reduced time complexity. To this end, the schedulability condition is derived and the DGPM mapping algorithm is defined. The algorithm assigns tasks while guaranteeing system schedulability, or else proves that the mapped system is unschedulable. It is proved that the DGPM algorithm can schedule any tasks and GCS sets schedulable by other direct-sequence priority mapping algorithms. The condition and the algorithm have been applied in a real project.
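The schedulability conditions the abstract refers to build on classical deadline-monotonic response-time analysis. A minimal uniprocessor sketch of that standard test (not the paper's DGPM mapping condition, which additionally handles DPCP and priority mapping):

```python
import math

def dm_schedulable(tasks):
    """Classic response-time analysis for deadline-monotonic scheduling.
    tasks: list of (C, D, T) tuples -- worst-case execution time, relative
    deadline, period.  DM gives higher priority to shorter deadlines.
    Returns True iff every task's worst-case response time meets its
    deadline."""
    ordered = sorted(tasks, key=lambda t: t[1])  # shorter deadline first
    for i, (C, D, T) in enumerate(ordered):
        R = C
        while True:
            # Interference from all higher-priority tasks released in [0, R)
            interference = sum(math.ceil(R / Tj) * Cj
                               for Cj, Dj, Tj in ordered[:i])
            R_new = C + interference
            if R_new > D:
                return False        # deadline miss
            if R_new == R:
                break               # fixed point: response time found
            R = R_new
    return True
```

For example, the task set {(C=1, D=4, T=4), (C=2, D=6, T=6)} passes the test, while {(C=2, D=3, T=3), (C=2, D=4, T=4)} fails it.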

Abstract:

The nonlinear optical properties of Al-doped nc-Si-SiO2 composite films have been investigated using the time-resolved four-wave mixing technique with a femtosecond laser. The off-resonant third-order nonlinear susceptibility is observed to be 1.0 × 10^-10 esu at 800 nm. The relaxation time of the optical nonlinearity in the films is as short as 60 fs. The optical nonlinearity is enhanced due to the quantum confinement of electrons in Si nanocrystals embedded in the SiO2 films. The enhanced optical nonlinearity does not originate from the Al dopant, because there are no Al clusters in the films.

Abstract:

The differential and integral cross sections for electron-impact excitation of lithium from the ground state 1s²2s to the excited states 1s²2p, 1s²3l (l = s, p, d) and 1s²4l (l = s, p, d, f) at incident energies ranging from 5 eV to 25 eV are calculated using a fully relativistic distorted-wave method. The target-state wavefunctions are calculated with the Grasp92 code. The continuum orbitals are computed in the distorted-wave approximation, in which the direct and exchange potentials among all the electrons are included. Some of the cross sections are compared with the available experimental data and with previous theoretical values. For the integral cross sections, the present calculations are found to be in good agreement with the time-independent distorted-wave method calculation; for the differential cross sections, our results agree very well with the experimental data.

Abstract:

Accurate three-dimensional time-dependent quantum wave packet calculations for the N+OH reaction on the ³A' potential energy surface [Guadagnini, Schatz, and Walch, J. Chem. Phys. 102, 774 (1995)] have been carried out. The calculations show for the first time that the initial-state-selected reaction probabilities are dominated by resonance structures, with resonance lifetimes generally on the subpicosecond time scale. The calculated reaction cross sections decrease with translational energy, in qualitative agreement with the quasiclassical trajectory calculations. The rate constants obtained from the quantum mechanical calculations are consistent with the quasiclassical trajectory results and with the experimental measurements. (C) 2003 American Institute of Physics.

Abstract:

This M.S. dissertation focuses on surface wave propagation in anisotropic media and on S-wave splitting in the Chinese mainland. We first introduce the Anderson parameters and derive the equations governing surface wave propagation in anisotropic media. Applying a given initial model to the forward calculation of Love waves, we compare Love-wave dispersion curves in anisotropic media with those in isotropic media. The results show that, although the two sets of curves are similar, the effect of anisotropy cannot be neglected; moreover, varying the anisotropy factors changes the dispersion curves, especially for the higher modes. The method of grid dispersion inversion is then described for further tectonic inversion. We also derive the inversion equations for a layered anisotropic medium and calculate the partial derivatives of phase velocity with respect to the model parameters (P- and S-wave velocities, density and anisotropic parameters) for both Rayleigh and Love waves. Analysis of these partial derivatives shows that, within each period, the derivatives decrease with increasing depth, and that the surface-wave phase velocity is sensitive to the S-wave velocities and anisotropic factors but not to the layer densities. Love-wave dispersion data from events of magnitude greater than 5.5 that occurred around the Qinghai-Tibet Plateau between 1991 and 1998 are used in the grid dispersion inversion. These data are preprocessed and analysed in the F-T domain, yielding 1° × 1° grid dispersion inversion results (pure-path dispersion data) for the Qinghai-Tibet Plateau.
As an example, the dispersion data are input to the tectonic inversion in anisotropic media, and the resulting anisotropic factors beneath the Qinghai-Tibet Plateau are discussed for the first time. The other part of this dissertation concerns S-wave splitting. We first introduce the phenomenon of S-wave splitting and the methods for calculating the splitting parameters. We then apply a Butterworth band-pass filter to S-wave data recorded at eight stations in the Chinese mainland and analyse S-wave splitting in different frequency bands. The results show that the delay times and fast polarization directions of S-wave splitting depend on the frequency band. S-wave splitting is absent at the Wulumuqi station (WMQ) in the 0.1-0.2 Hz band. As the frequency band broadens, the delay time decreases at the Beijing (BJI), Enshi (ENH), Kunming (KMI) and Mudanjiang (MDJ) stations; the fast polarization direction changes from westward to eastward at Enshi (ENH) and from eastward to westward at Hailaer (HIA). The variation of delay time with band is similar at Lanzhou (LZH) and Qiongzhong (QIZ), and the fast polarization directions at BJI, KMI and MDJ each show a coherent trend. Initial interpretations of this frequency-band dependence of S-wave splitting are also presented.

Abstract:

In the Circum-Bohai region (112°-124°E, 34°-42°N), oil and gas resources are abundant while intraplate seismic activity is strong. Although the tectonic structure of the region is very complicated, plenty of geological, geophysical and geochemical research has been carried out. In this paper, guided by the ideas of "One, Two, Three and Many" and "the deep controls the shallow, the regional constrains the local", I make full use of previous results to establish a general image of the region. After collecting the P-wave arrival times of local and teleseismic events recorded by stations within the region from 1966 to 2004, I process the data and build an initial model, from which a tomographic image of the crust and upper mantle of the region is obtained. With reference to previous results, the images at various depths and five cross-profiles traversing the region along different directions are compared, and finally a discussion and conclusions are given. The principal contents are as follows: 1) the first chapter states the purpose and significance of the thesis, advances in seismic tomography, and the research contents and plan; 2) the second chapter introduces the regional geological setting of the Circum-Bohai region, describing the tectonic and evolutionary characteristics of the principal tectonic units, including the Bohai Bay Basin, the Yanshan Fold Zone, the Taihangshan Uplifted Zone, the Jiao-Niao Uplifted Zone and the Luxi Uplifted Zone, as well as the main deep faults; 3) the third chapter discusses previous geophysical research, i.e., gravity and geomagnetic characteristics, geothermal flow, seismic activity, physical properties of rocks, deep seismic sounding, and earlier seismic tomography; 4) the fourth chapter introduces the fundamental theory and approach of seismic tomography; 5) the fifth chapter covers the techniques and approaches used in this thesis, including the collection and pre-processing of data, the establishment of the initial velocity model, and the relocation of all events; 6) the sixth chapter discusses and analyses the tomographic images at various depths and along the five cross-sections; 7) the seventh chapter summarizes the results and states the remaining problems and possible solutions.

Abstract:

A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes to restore network connectivity. This introduces an optimisation problem involving a trade-off between the number of additional nodes required and the cost of moving through the sensor field to place them. This trade-off is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes may lead to long routing paths to the sink, causing problems of data latency. Such latency is critical in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue and highway traffic coordination, where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve network performance. Previous research has only considered parts of this problem in isolation, and has not properly addressed the problems of moving through a constrained environment, of discovering changes to that environment during the repair, or of network quality after the restoration. In this thesis, we first consider a base problem in which we assume the exploration tasks have already been completed, so the aim is to optimise the use of resources in the static, fully observed problem. In the real world, the radio and physical environments after damage would not be known, creating a dynamic problem in which the damage must be discovered. We therefore extend to the dynamic problem, in which network repair involves both exploration and restoration. We then add a hop-count constraint for network quality, requiring that after the network is restored the desired locations can talk to a sink within a hop-count limit.
For each variant of the network repair problem, we propose different solutions (heuristics and/or complete algorithms) that prioritise different objectives. We evaluate our solutions in simulation, assessing the quality of solutions (node cost, movement cost, computation time, and total restoration time) while varying the problem types and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and that different movement speeds of the repairing agent have a significant impact on performance and must be taken into account when selecting an algorithm. In particular, the node-based approaches are best in node cost, and the path-based approaches are best in mobility cost. For total restoration time, the node-based approaches are best with a fast-moving agent, while the path-based approaches are best with a slow-moving agent; for a medium-speed agent, the total restoration times of the two families are almost balanced.
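The hop-count constraint on the restored network can be checked with a plain multi-source breadth-first search over the connectivity graph. The sketch below is an illustration of that check only, not one of the thesis's repair algorithms; the function name and graph representation are mine:

```python
from collections import deque

def within_hop_limit(adj, sinks, required, limit):
    """Multi-source BFS from the sinks over the radio connectivity graph
    `adj` (node -> list of neighbouring nodes).  Returns True iff every
    node in `required` can reach some sink in at most `limit` hops."""
    dist = {s: 0 for s in sinks}      # hop distance to the nearest sink
    queue = deque(sinks)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:          # first visit = shortest hop count
                dist[v] = dist[u] + 1
                queue.append(v)
    return all(r in dist and dist[r] <= limit for r in required)
```

On a simple chain 0-1-2-3 with the sink at node 0, node 3 satisfies a hop limit of 3 but not a hop limit of 2.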

Abstract:

We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (STOC 1992) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration to determine the next comparison, making it impractical. We develop efficient algorithms for sorting under partial information. Like Kahn and Kim, our approach relies on graph entropy. However, our algorithms differ in essential ways from theirs: rather than resorting to convex programming to compute the entropy, we approximate the entropy, or make sure it is computed only once in a restricted class of graphs, permitting the use of a simpler algorithm. Specifically, we present: an O(n^2) algorithm performing O(log n · log e(P)) comparisons; an O(n^2.5) algorithm performing at most (1+ε) log e(P) + O_ε(n) comparisons; and an O(n^2.5) algorithm performing O(log e(P)) comparisons. All our algorithms are simple to implement. © 2010 ACM.
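The quantity e(P) in the lower bound can be made concrete with a brute-force counter (exponential time, so only for tiny posets; the article's algorithms never enumerate extensions, this is purely to illustrate the definition):

```python
def count_linear_extensions(n, relations):
    """Count the linear extensions e(P) of a poset on {0, ..., n-1}.
    relations: iterable of (a, b) pairs meaning a < b in P."""
    preds = {v: set() for v in range(n)}
    for a, b in relations:
        preds[b].add(a)

    def extend(placed, remaining):
        # Recursively place any minimal element of the remaining suborder.
        if not remaining:
            return 1
        total = 0
        for v in remaining:
            if preds[v] <= placed:   # all predecessors already placed
                total += extend(placed | {v}, remaining - {v})
        return total

    return extend(frozenset(), frozenset(range(n)))

# For an antichain on 4 elements (no relations known) every permutation
# is a linear extension: e(P) = 4! = 24, so the information-theoretic
# bound is log2(24) ≈ 4.58 comparisons.
```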

Abstract:

The process of diffusive shock acceleration relies on the efficacy with which hydromagnetic waves can scatter charged particles in the precursor of a shock. The growth of self-generated waves is driven by both resonant and non-resonant processes. We perform high-resolution magnetohydrodynamic simulations of the non-resonant cosmic ray driven instability, in which the unstable waves are excited beyond the linear regime. In a snapshot of the resultant field, particle transport simulations are carried out. The use of a static snapshot of the field is reasonable given that the Larmor period for particles is typically very short relative to the instability growth time. The diffusion rate is found to be close to, or below, the Bohm limit for a range of energies. This provides the first explicit demonstration that self-excited turbulence reduces the diffusion coefficient and has important implications for cosmic-ray transport and acceleration in supernova remnants.

Abstract:

Time-dependent close-coupling (TDCC), R-matrix-with-pseudostates (RMPS), and time-independent distorted-wave (TIDW) methods are used to calculate electron-impact ionization cross sections for the carbon atom. The TDCC and RMPS results for the 1s²2s²2p² ground configuration are in reasonable agreement with the available experimental measurements, while the TIDW results are 30% higher. Ionization of the 1s²2s2p³ excited configuration is performed using the TDCC, RMPS, and TIDW methods. Ionization of the 1s²2s²2p3l (l = 0-2) excited configurations is performed using the TDCC and TIDW methods. The ionization cross sections for the excited configurations are much larger than for the ground state. For example, the peak cross section for the 1s²2s²2p3p excited configuration is an order of magnitude larger than the peak cross section for the 1s²2s²2p² ground configuration. The TDCC results are again found to be substantially lower than the TIDW results. The ionization cross-section results will permit the generation of more accurate, generalized collisional-radiative ionization coefficients needed for modeling moderately dense carbon plasmas.

Abstract:

We consider the minimum-time optimal control problem for single-input, control-affine systems in a finite-dimensional space with fixed initial and final conditions, where the scalar control takes values in a closed interval. When the shooting method is applied to this problem, several obstacles may arise, since the shooting function is not differentiable when the control is bang-bang. In the bang-bang case, conjugate times are theoretically well defined for this class of control systems, but the available direct computational algorithms are difficult to apply. In the smooth case, on the other hand, the theoretical and practical concept of conjugate times is well understood and efficient computational tools are available. We propose a regularization procedure in which the solutions of the corresponding minimum-time problem depend on a sufficiently small positive real parameter and are defined by smooth functions of the time variable, making the simple shooting method easier to apply. We prove, under suitable assumptions, the strong convergence of the solutions of the regularized problem to the solution of the original problem as the real parameter tends to zero. The computation of conjugate times for locally optimal trajectories of the regularized problem falls within the known smooth theory. We prove, under appropriate assumptions, the convergence of the first conjugate time of the regularized problem to the first conjugate time of the original bang-bang problem as the real parameter tends to zero. As a consequence, we obtain an efficient algorithm for computing conjugate times in the bang-bang case.

Abstract:

Conventional thermoelectric power plants convert only part of the fuel they consume into electricity; the remainder is lost as heat. Cogeneration, or Combined Heat and Power (CHP), units were developed to reuse the energy dissipated as heat and make it available, together with the electricity generated, for domestic or industrial consumption, making them more efficient than conventional units. The electricity and heat production costs of CHP units are represented by a nonlinear function, and their feasible operating region may be convex or non-convex, depending on the characteristics of each unit. For these reasons, modelling CHP units within the Unit Commitment Problem (UCP) is particularly relevant for companies that also own units of this type. These companies must decide which of their units, among the CHP units and the units that generate only electricity or only heat, should be committed, and at which production levels, in order to satisfy the demand for electricity and heat at minimum cost. This document proposes two mixed-integer programming models for the UCP with cogeneration units: a nonlinear model that includes the actual production-cost function of the CHP units, and a model that linearizes this function through a convex combination of a predefined number of extreme points. In both models the non-convex feasible operating region is modelled by dividing it into two distinct convex areas. Computational tests carried out with both models on several instances confirmed the efficiency of the proposed linear model, which obtained the optimal solutions of the nonlinear model in significantly shorter computation times.
In addition, both models were tested with and without load pick-up and load-shedding constraints, leading to the conclusion that constraints of this type increase the complexity of the problem, with the computation time required to solve it growing significantly.

Abstract:

The traditional measure of crime (rate per 100,000 inhabitants) is problematic when analysing variations in crime over time or across space, because the crime rate is essentially driven by less serious but very frequent offences. This study tests the usefulness of the new tool developed by Statistique Canada, which provides a "crime severity" index in which each crime is weighted by its severity score (based on average sentencing decisions in Canada from 2002 to 2007 for each type of crime). Applied to official Quebec statistics from 1977 to 2008, our analyses show that the severity index is a useful measure for drawing a more accurate picture of year-to-year trends in violent crime. Specifically, the severity index shows that the violent crime rate remained stable from 1977 to 1992, in contrast to the picture given by the traditional rate, which instead shows a dramatic rise over that period. The severity index can also be useful for comparing territories with respect to violent crime, in order to identify those with more serious criminality. For overall crime and non-violent crime, however, the severity index is of no use and gives the same reading of criminality as the traditional measure. This is because the same offences (thefts, mischief and break-and-enters) contribute the bulk of both measures of crime.
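The weighting scheme described above amounts to replacing an unweighted count per 100,000 population by a severity-weighted one. A toy computation with purely hypothetical counts and weights (not Statistique Canada's actual severity scores) shows how frequent minor offences dominate the traditional rate far more than the index:

```python
def traditional_rate(counts, population):
    """Offences per 100,000 population, all crime types weighted equally."""
    return sum(counts.values()) / population * 100_000

def severity_index(counts, weights, population):
    """Severity-weighted offences per 100,000: each crime type is
    multiplied by its severity weight before aggregation."""
    return sum(counts[c] * weights[c] for c in counts) / population * 100_000

# Hypothetical figures for illustration only:
counts = {"theft": 5000, "assault": 300, "homicide": 2}
weights = {"theft": 40, "assault": 400, "homicide": 7000}
```

With these illustrative numbers and a population of one million, theft makes up about 94% of the traditional rate but only about 60% of the severity-weighted index, so a surge in theft moves the two measures very differently.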