869 results for Hyperbolic Boundary-Value Problem
Abstract:
This study detailed the structure of turbulence in the air-side and water-side boundary layers of wind-induced surface waves. Inside the air boundary layer, the kurtosis is always greater than 3 (the value for a normal distribution) for both horizontal and vertical velocity fluctuations. The skewness of the horizontal velocity is negative, while the skewness of the vertical velocity is always positive. On the water side, the kurtosis is always greater than 3, and the skewness is slightly negative for the horizontal velocity and slightly positive for the vertical velocity. The statistics of the angle between the instantaneous vertical fluctuation and the instantaneous horizontal velocity in the air are similar to those obtained over solid walls. Measurements in water show a large variance, and the peak is biased towards negative angles. In the quadrant analysis, the contribution of quadrants Q2 and Q4 is dominant on both the air side and the water side. The non-dimensional relative contributions and the concentration match fairly well near the interface. Sweeps on the air side (belonging to quadrant Q4) act directly on the interface and exert pressure fluctuations, which, in addition to the tangential stress and form drag, lead to the growth of the waves. Water drops detached from the crests and accelerated by the wind can play a major role in transferring momentum and in enhancing the turbulence level on the water side. On the air side, the principal axes of the Reynolds stress tensor are not collinear with those of the strain rate tensor, showing an angle ασ ≈ -20° to -25°. On the water side, the angle is ασ ≈ -40° to -45°. The ratio between the maximum and minimum principal stresses is σa/σb = 3 to 4 on the air side, and σa/σb = 1.5 to 3 on the water side. In this respect, the air-side flow behaves like a classical boundary layer on a solid wall, while the water-side flow resembles a wake.
The frequency of bursting on the water side increases significantly along the flow, which can be attributed to micro-breaking effects - expected to be more frequent at larger fetches. © 2012 Elsevier B.V.
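The higher-order statistics and quadrant decomposition described above can be sketched numerically. The following is a minimal illustration on synthetic, negatively correlated velocity fluctuations, not the paper's measurements:

```python
import numpy as np

def skew_kurt(x):
    """Sample skewness and kurtosis (kurtosis = 3 for a normal distribution)."""
    d = x - x.mean()
    s = d.std()
    return (d**3).mean() / s**3, (d**4).mean() / s**4

def quadrant_contributions(u, v):
    """Fractional contribution of each quadrant to the Reynolds stress <u'v'>.

    Q1: u'>0, v'>0 (outward interaction)   Q2: u'<0, v'>0 (ejection)
    Q3: u'<0, v'<0 (inward interaction)    Q4: u'>0, v'<0 (sweep)
    """
    u = u - u.mean()
    v = v - v.mean()
    uv = u * v
    masks = [(u > 0) & (v > 0), (u < 0) & (v > 0),
             (u < 0) & (v < 0), (u > 0) & (v < 0)]
    return [uv[m].sum() / uv.sum() for m in masks]

# Synthetic fluctuations with negative <u'v'>, as in a sheared boundary layer
rng = np.random.default_rng(0)
u = rng.standard_normal(100_000)
v = -0.4 * u + rng.standard_normal(100_000)
sk, ku = skew_kurt(u)
q = quadrant_contributions(u, v)   # Q2 and Q4 dominate the (negative) stress
```

For Gaussian inputs the skewness is near 0 and the kurtosis near 3; the non-Gaussian values reported in the abstract are what distinguish wave-boundary-layer turbulence from this baseline.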
Abstract:
This paper describes recent improvements to the Cambridge Arabic Large Vocabulary Continuous Speech Recognition (LVCSR) Speech-to-Text (STT) system. It is shown that word-boundary context markers provide a powerful way to enrich graphemic systems with implicit phonetic information, improving their modelling capability. In addition, a robust technique for full covariance Gaussian modelling in the Minimum Phone Error (MPE) training framework is introduced. This reduces the full covariance training to a diagonal covariance training problem, thereby solving related robustness problems. The full system results show that the combined use of these and other techniques within a multi-branch combination framework reduces the Word Error Rate (WER) of the complete system by up to 5.9% relative. Copyright © 2011 ISCA.
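The word-boundary idea can be illustrated with a toy grapheme tokenizer. The `|B`/`|E` marker notation and the function below are illustrative assumptions, not the Cambridge system's actual scheme:

```python
def graphemes_with_boundaries(word):
    """Split a word into graphemes, tagging word-boundary context.

    Word-initial and word-final graphemes get explicit boundary markers, so
    that position-dependent pronunciation (common in Arabic script) can be
    captured by otherwise purely graphemic acoustic models.
    """
    gs = list(word)
    if len(gs) == 1:
        return [gs[0] + "|BE"]          # single grapheme: both boundaries
    return [gs[0] + "|B"] + gs[1:-1] + [gs[-1] + "|E"]

units = graphemes_with_boundaries("kitab")
# ['k|B', 'i', 't', 'a', 'b|E']
```

The marked units form a larger inventory than plain graphemes, letting the acoustic model learn distinct realizations for boundary versus medial positions without an explicit phonetic lexicon.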
Abstract:
Superhydrophobic surfaces have been shown, both experimentally and in simulations, to be effective for surface drag reduction in the laminar regime (see, for example, Ou and Rothstein, Phys. Fluids 17:103606, 2005). However, achieving such drag reduction in fully developed turbulent flow while maintaining the Cassie-Baxter state remains an open problem due to the high shear rates and flow unsteadiness of the turbulent boundary layer. Our work aims to develop an understanding of the mechanisms leading to interface breaking and loss of gas pockets due to interactions with turbulent boundary layers. We take advantage of direct numerical simulation of turbulence with slip and no-slip patterned boundary conditions mimicking the superhydrophobic surface. In addition, we capture the dynamics of the gas-water interface by deriving a proper linearized boundary condition that accounts for the surface tension of the interface and kinematically matches the interface deformation to the normal velocity condition on the wall. We will show results from our simulations predicting the dynamical behavior of gas pocket interfaces over a wide range of dimensionless surface tensions.
Abstract:
Interactions between dislocations and grain boundaries play an important role in the plastic deformation of polycrystalline metals. Capturing accurately the behaviour of these internal interfaces is particularly important for applications where the relative grain boundary fraction is significant, such as ultra fine-grained metals, thin films and microdevices. Incorporating these micro-scale interactions (which are sensitive to a number of dislocation, interface and crystallographic parameters) within a macro-scale crystal plasticity model poses a challenge. The innovative features in the present paper include (i) the formulation of a thermodynamically consistent grain boundary interface model within a microstructurally motivated strain gradient crystal plasticity framework, (ii) the presence of intra-grain slip system coupling through a microstructurally derived internal stress, (iii) the incorporation of inter-grain slip system coupling via an interface energy accounting for both the magnitude and direction of contributions to the residual defect from all slip systems in the two neighbouring grains, and (iv) the numerical implementation of the grain boundary model to directly investigate the influence of the interface constitutive parameters on plastic deformation. The model problem of a bicrystal deforming in plane strain is analysed. The influence of dissipative and energetic interface hardening, grain misorientation, asymmetry in the grain orientations and the grain size are systematically investigated. In each case, the crystal response is compared with reference calculations with grain boundaries that are either 'microhard' (impenetrable to dislocations) or 'microfree' (an infinite dislocation sink). © 2013 Elsevier Ltd. All rights reserved.
Abstract:
Most of the current understanding of tip leakage flows has been derived from detailed cascade experiments. However, the cascade model is inherently approximate since it is difficult to simulate the boundary conditions present in a real machine, particularly the secondary flows convecting from the upstream stator row and the relative motion of the casing and blade. This problem is further complicated when considering the high pressure turbine rotors of aero engines, where the high Mach numbers must also be matched in order to correctly model the aerodynamics and heat transfer. More realistic tests can be performed on high-speed turbines, but the experimental fidelity and resolution achievable in such set-ups is limited. In order to examine the differences between cascade models and real-engine behavior, the influence of boundary conditions on the tip leakage flow in an unshrouded high pressure turbine rotor is investigated using RANS calculations. This study examines the influence of the rotor inlet condition and relative casing motion. A baseline calculation with a simplified inlet condition and no relative endwall motion exhibits similar behavior to cascade studies. Only minor changes to the leakage flow are induced by introducing either a more realistic inlet condition or relative casing motion. However when both of these conditions are applied simultaneously the pattern of leakage flow is very different, with ingestion of flow over much of the early suction surface. The paper explores the physical processes driving this change and the impact on leakage losses and modeling requirements. Copyright © 2013 by ASME.
Abstract:
The boundary condition at a solid surface is one of the important problems in microfluidics. In this paper we study the effects of channel size on the boundary condition (BC), using a hybrid computation scheme coupling molecular dynamics (MD) simulations with continuum fluid mechanics. We reproduce the three types of boundary conditions (slip, no-slip and locking) over multiscale channel sizes. The slip lengths are found to depend mainly on the interfacial parameters at a fixed apparent shear rate. The channel size has little effect on the slip lengths once the size exceeds a critical value of a couple of tens of molecular diameters. We explore the liquid particle distributions nearest the solid walls and find that the slip boundary condition always corresponds to uniform liquid particle distributions parallel to the solid walls, while the no-slip and locking boundary conditions correspond to ordered liquid structures close to the solid walls. The slip, no-slip and locking interfacial parameters yield positive, zero and negative slip lengths, respectively. The three types of boundary conditions found at the microscale still occur at the macroscale. However, because the slip lengths depend only weakly on the channel size, once the channel size exceeds thousands of liquid molecular diameters the real shear rates and the slip velocity relative to the wall approach those of the no-slip boundary condition for all three types of interfacial parameters, leading to quasi-no-slip behaviour.
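The slip length discussed above is conventionally defined through the Navier condition u(0) = L_s (du/dy). A minimal sketch of extracting it from a near-wall velocity profile follows, using a synthetic linear profile rather than the paper's MD data:

```python
import numpy as np

def slip_length(y, u):
    """Estimate the Navier slip length from a near-wall velocity profile.

    Fit u(y) = a*y + b and extrapolate to u = 0; the signed distance b/a
    below the wall at y = 0 is the slip length L_s: positive for slip,
    zero for no-slip, negative for 'locking' (stuck fluid layers).
    """
    a, b = np.polyfit(y, u, 1)
    return b / a

# Couette-like profile with an imposed slip length (illustrative values)
y = np.linspace(0.0, 10.0, 50)     # distance from wall, molecular diameters
shear_rate = 0.1
L_s_true = 2.0                     # assumed slip length
u = shear_rate * (y + L_s_true)    # so that u(0) = shear_rate * L_s
L_s_est = slip_length(y, u)        # recovers the imposed value
```

With noisy MD bin averages, the same fit would be restricted to the region where the profile is linear, away from the ordered layers next to the wall.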
Abstract:
An augmented immersed interface method (IIM) is proposed for simulating one-phase moving contact line problems in which a liquid drop spreads or recoils on a solid substrate. While the present two-dimensional mathematical model is a free boundary problem, in our new numerical method the fluid domain enclosed by the free boundary is embedded into a rectangular one so that the problem can be solved by a regular Cartesian grid method. We introduce an augmented variable along the free boundary so that the stress balancing boundary condition is satisfied. A hybrid time discretization is used in the projection method for better stability. The resultant Helmholtz/Poisson equations with interfaces are then solved by the IIM in an efficient way. Several numerical tests, including an accuracy check and the spreading and recoiling processes of a liquid drop, are presented in detail. © 2010 Elsevier Ltd. All rights reserved.
Abstract:
Summer diets of two sympatric raptors, Upland Buzzards (Buteo hemilasius Temminck et Schlegel) and Eurasian Eagle Owls (Bubo bubo L. subsp. hemachalana Hume), were studied in an alpine meadow (3250 m a.s.l.) on the Qinghai-Tibet Plateau, China. Root voles (Microtus oeconomus Pallas), plateau pikas (Ochotona curzoniae Hodgson), Gansu pikas (O. cansus Lyon) and plateau zokors (Myospalax baileyi Thomas) were the main diet components of Upland Buzzards, identified through pellet analysis with frequencies of 57, 20, 19 and 4%, respectively. The four rodent species were also the main diet components of Eurasian Eagle Owls, based on analysis of pellets and prey leftovers, with frequencies of 53, 26, 13 and 5%, respectively. The food niche breadth indexes of Upland Buzzards and Eurasian Eagle Owls were 1.60 and 1.77, respectively (a higher value means a broader food niche), and the diet overlap index of the two raptors was large (C-ue = 0.90; the index ranges from 0, no overlap, to 1, complete overlap). This indicates that the diets of Upland Buzzards and Eurasian Eagle Owls were similar (Two Related Samples Test, Z = -0.752, P = 0.452). Classical resource partitioning theory cannot explain the coexistence of Upland Buzzards and Eurasian Eagle Owls in alpine meadows of the Qinghai-Tibet Plateau. However, differences in body size, predation mode and activity rhythm between the two species may explain the coexistence of these two sympatric raptors.
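Common textbook forms of the indices mentioned are Levins' niche breadth and Pianka's overlap. The sketch below applies these variants to the reported prey frequencies; note that the paper's exact index definitions are not given in the abstract and may differ (its reported values of 1.60, 1.77 and 0.90 are not reproduced by these particular formulas):

```python
def levins_breadth(p):
    """Levins' niche breadth, B = 1 / sum(p_i^2); larger B = broader niche."""
    return 1.0 / sum(pi * pi for pi in p)

def pianka_overlap(p, q):
    """Pianka's niche overlap, in [0, 1]; 0 = no overlap, 1 = complete."""
    num = sum(a * b for a, b in zip(p, q))
    den = (sum(a * a for a in p) * sum(b * b for b in q)) ** 0.5
    return num / den

# Reported prey frequencies: vole, plateau pika, Gansu pika, zokor
buzzard = [0.57, 0.20, 0.19, 0.04]
owl = [0.53, 0.26, 0.13, 0.05]
breadth_buzzard = levins_breadth(buzzard)
breadth_owl = levins_breadth(owl)
overlap = pianka_overlap(buzzard, owl)   # close to 1: strongly overlapping diets
```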
Abstract:
As exploration targets continue to expand, deep target layers (especially volcanic-rock reservoirs) are a promising exploration area. As is well known, the reflected energy becomes weak because seismic reflection signals from deep layers are absorbed and attenuated by the overlying layers. Caustics and multi-valued traveltimes in the wavefield arise from the complexity of the strata. The signal-to-noise ratio is low and the fold is limited (no more than 30). All these factors reduce the effectiveness of conventional processing methods, so a high-S/N stacked section cannot always be obtained with conventional stacking, even when prestack depth migration is used. It is therefore necessary to develop an alternative stacking method. For years, finite-difference solution of the wave equation was held back by limited computing power. The Kirchhoff integral method rose to prominence in the early 1990s, but it suffers from severe problems that are difficult to resolve, so a new stacking method is needed for oil and gas exploration. It is natural to revisit the physical basis of traditional seismic exploration methods and to improve the widely used stacking techniques. On the other hand, much of the progress depends on improvements in wave-equation prestack depth migration, whose wavefield-continuation algorithm is utilized here. Combining wavefield extrapolation with Fresnel zone stacking, a new stacking method is developed. It is well known that the seismic wavefield observed at the surface physically comes from the Fresnel zone, not only from identical reflection points. For the more complex reflections from deep layers, it is difficult to describe the relationship between the reflective interface and the traveltime; extrapolation is used to eliminate caustics and simplify the traveltime expression.
Thus the image quality in the target zone is enhanced by the Fresnel zone stack. Based on the wave equation, the high-frequency ray solution and its character are given to clarify the theoretical foundation of the method. The hyperbolic and parabolic traveltimes of reflections in layered media are presented in matrix form using paraxial ray theory. Because the reflected wavefield mainly comes from the Fresnel zone, the concept of the Fresnel zone is explained, and the matrix expressions of the Fresnel zone and the projected Fresnel zone are given in sequence. Using geometrical optics, the relationship between an object point in the model and an image point in image space is built for complex subsurfaces. The traveltime formula of a reflection point in inhomogeneous media is deduced, along with formulas for reflective segments of zero-offset and nonzero-offset sections. For convenient application, the subsurface interface models and curved surfaces derived from conventional stack, DMO stack and prestack depth migration are analyzed, and the problems these methods have in their use of data are pointed out. Arcs are put forward to describe the subsurface, thereby enlarging the amount of data stacked within the Fresnel zone. Based on the hyperbolic traveltime formula, the implementation steps and workflow of the Fresnel zone stack are provided. Computations on three model datasets show that the Fresnel zone stack can effectively enhance signal energy and the signal-to-noise ratio. Field data from Xui Jia Wei Zhi, an area in the Daqing oilfield, were processed with this method. The results show that its ability to increase the S/N ratio, enhance the continuity of weak events and confirm the deep configuration of the volcanic reservoir is better than that of other methods. In deeper target layers, caustics are caused by the complex overburden and large velocity variations, and reflection traveltimes cannot be exactly described by the traveltime formula.
Extrapolation is put forward to resolve these problems. Combining a phase operator with a differential operator, an extrapolation operator adaptable to lateral velocity variation is provided. With this method, seismic records are extrapolated from the surface down to arbitrary depths. Wave aberration and caustics caused by the inhomogeneous overburden are eliminated, and multi-valued traveltime curves are transformed into single-valued ones. Computations on the Marmousi model show that the approach is feasible. Wavefield continuation thus extends the applicability of the Fresnel zone stack.
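Two quantities central to the method are the hyperbolic reflection traveltime and the radius of the first Fresnel zone. A minimal sketch using the standard textbook formulas follows; the matrix, paraxial-ray forms developed in the work generalize these to layered and laterally varying media:

```python
import math

def hyperbolic_traveltime(x, t0, v):
    """Two-way reflection traveltime at offset x: t(x) = sqrt(t0^2 + x^2/v^2)."""
    return math.sqrt(t0 * t0 + (x / v) ** 2)

def fresnel_radius(t0, v, f):
    """Radius of the first Fresnel zone, r ~ (v / 2) * sqrt(t0 / f).

    Traces reflected within this radius of a reflection point interfere
    constructively, which is what licenses stacking over the whole zone
    rather than over a single common reflection point.
    """
    return 0.5 * v * math.sqrt(t0 / f)

# Illustrative values: 2 s zero-offset time, 3000 m/s velocity, 30 Hz wavelet
t = hyperbolic_traveltime(1500.0, 2.0, 3000.0)  # moveout at 1.5 km offset
r = fresnel_radius(2.0, 3000.0, 30.0)           # ~ 387 m zone radius
```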
Abstract:
Sound propagation in shallow water is characterized by interaction with the ocean's surface, volume, and bottom. In many coastal margin regions, including the Eastern U.S. continental shelf and the coastal seas of China, the bottom is composed of a depositional sandy-silty top layer. Previous measurements of narrow and broadband sound transmission at frequencies from 100 Hz to 1 kHz in these regions are consistent with waveguide calculations based on depth- and frequency-dependent sound speed, attenuation and density profiles. Theoretical predictions for the frequency dependence of attenuation vary from quadratic for the porous media model of M.A. Biot to linear for various competing models. Results from experiments performed under known conditions with sandy bottoms, however, have agreed with attenuation proportional to f^1.84, which is slightly less than the theoretical value of f^2 [Zhou and Zhang, J. Acoust. Soc. Am. 117, 2494]. This dissertation presents a reexamination of the fundamental considerations in the Biot derivation and leads to a simplification of the theory that can be coupled with site-specific, depth-dependent attenuation and sound speed profiles to explain the observed frequency dependence. Long-range sound transmission measurements in a known waveguide can be used to estimate the site-specific sediment attenuation properties, but the costs and time associated with such at-sea experiments using traditional measurement techniques can be prohibitive. Here a new measurement tool consisting of an autonomous underwater vehicle and a small, low-noise, towed hydrophone array was developed and used to obtain accurate long-range sound transmission measurements efficiently and cost-effectively. To demonstrate this capability and to determine the modal and intrinsic attenuation characteristics, experiments were conducted in a carefully surveyed area in Nantucket Sound.
A best-fit comparison between measured results and calculated results, while varying attenuation parameters, revealed the estimated power law exponent to be 1.87 between 220.5 and 1228 Hz. These results demonstrate the utility of this new cost effective and accurate measurement system. The sound transmission results, when compared with calculations based on the modified Biot theory, are shown to explain the observed frequency dependence.
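Estimating the power-law exponent amounts to a linear fit of attenuation versus frequency in log-log space. A sketch on synthetic data with the exponent 1.87 reported above; the coefficient 0.003 is an arbitrary illustrative assumption:

```python
import numpy as np

def fit_power_law(f, alpha):
    """Fit alpha = a * f**n by linear least squares in log-log space."""
    n, log_a = np.polyfit(np.log(f), np.log(alpha), 1)
    return np.exp(log_a), n

# Synthetic attenuation over the experiment's band (220.5-1228 Hz)
f = np.linspace(220.5, 1228.0, 20)
alpha = 0.003 * f ** 1.87
a, n = fit_power_law(f, alpha)   # recovers a ~ 0.003, n ~ 1.87
```

In practice the attenuation values would come from the best-fit comparison between measured and calculated transmission loss, so the fitted exponent carries the uncertainty of that inversion.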
Abstract:
A problem with Speculative Concurrency Control (SCC) algorithms, and with other common concurrency control schemes using forward validation, is that committing a transaction as soon as it finishes validating may result in a loss of value to the system. Haritsa showed that by making a lower-priority transaction wait after it is validated, the number of transactions meeting their deadlines is increased, which may result in a higher value added to the system. SCC-based protocols can benefit from the introduction of such delays by giving optimistic shadows with a high value added to the system more time to execute and commit, instead of being aborted in favor of other validating transactions whose value added to the system is lower. In this paper we present and evaluate an extension to SCC algorithms that allows for commit deferments.
Abstract:
Attributing a dollar value to a keyword is an essential part of running any profitable search engine advertising campaign. When an advertiser has complete control over the interaction with and monetization of each user arriving on a given keyword, the value of that term can be accurately tracked. However, in many instances, the advertiser may monetize arrivals indirectly through one or more third parties. In such cases, it is typical for the third party to provide only coarse-grained reporting: rather than report each monetization event, users are aggregated into larger channels and the third party reports aggregate information such as total daily revenue for each channel. Examples of third parties that use channels include Amazon and Google AdSense. In such scenarios, the number of channels is generally much smaller than the number of keywords whose value per click (VPC) we wish to learn. However, the advertiser has flexibility as to how to assign keywords to channels over time. We introduce the channelization problem: how do we adaptively assign keywords to channels over the course of multiple days to quickly obtain accurate VPC estimates of all keywords? We relate this problem to classical results in weighing design, devise new adaptive algorithms for this problem, and quantify the performance of these algorithms experimentally. Our results demonstrate that adaptive weighing designs that exploit statistics of term frequency, variability in VPCs across keywords, and flexible channel assignments over time provide the best estimators of keyword VPCs.
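The core estimation problem can be sketched as linear least squares: each day's revenue for a channel is a linear combination of keyword clicks weighted by the unknown VPCs. The sketch below uses a random (non-adaptive) keyword-to-channel assignment as a stand-in for the adaptive weighing designs the paper develops; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
K, C, D = 6, 2, 9                 # keywords, channels, days (illustrative)
vpc_true = rng.uniform(0.5, 3.0, K)

# One design-matrix row per (day, channel); each entry is the number of
# clicks that keyword contributed to that channel on that day (0 if the
# keyword was assigned to a different channel).
rows, revenue = [], []
for _ in range(D):
    assign = rng.integers(0, C, K)    # today's keyword-to-channel assignment
    clicks = rng.poisson(50, K)       # today's click counts per keyword
    for c in range(C):
        row = np.where(assign == c, clicks, 0).astype(float)
        rows.append(row)
        revenue.append(row @ vpc_true)   # aggregate daily channel revenue
A, y = np.array(rows), np.array(revenue)

# Recover per-keyword VPCs from channel-level aggregates
vpc_est, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Varying the assignment across days is what makes the system identifiable; the paper's contribution is choosing those assignments adaptively, using term-frequency statistics and VPC variability, so that the estimates converge in fewer days.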
Abstract:
We present a procedure to infer a typing for an arbitrary λ-term M in an intersection-type system that translates into exactly the call-by-name (resp., call-by-value) evaluation of M. Our framework is the recently developed System E which augments intersection types with expansion variables. The inferred typing for M is obtained by setting up a unification problem involving both type variables and expansion variables, which we solve with a confluent rewrite system. The inference procedure is compositional in the sense that typings for different program components can be inferred in any order, and without knowledge of the definition of other program components. Using expansion variables lets us achieve a compositional inference procedure easily. Termination of the procedure is generally undecidable. The procedure terminates and returns a typing if the input M is normalizing according to call-by-name (resp., call-by-value). The inferred typing is exact in the sense that the exact call-by-name (resp., call-by-value) behaviour of M can be obtained by a (polynomial) transformation of the typing. The inferred typing is also principal in the sense that any other typing that translates the call-by-name (resp., call-by-value) evaluation of M can be obtained from the inferred typing for M using a substitution-based transformation.
Abstract:
In many networked applications, independent caching agents cooperate by servicing each other's miss streams, without revealing the operational details of the caching mechanisms they employ. Inference of such details could be instrumental for many other processes. For example, it could be used for optimized forwarding (or routing) of one's own miss stream (or content) to available proxy caches, or for making cache-aware resource management decisions. In this paper, we introduce the Cache Inference Problem (CIP) as that of inferring the characteristics of a caching agent, given the miss stream of that agent. While CIP is unsolvable in its most general form, there are special cases of practical importance in which it is solvable, including when the request stream follows an Independent Reference Model (IRM) with generalized power-law (GPL) demand distribution. To that end, we design two basic "litmus" tests that are able to detect LFU and LRU replacement policies, the effective size of the cache and of the object universe, and the skewness of the GPL demand for objects. Using extensive experiments under synthetic as well as real traces, we show that our methods infer such characteristics accurately and quite efficiently, and that they remain robust even when the IRM/GPL assumptions do not hold, and even when the underlying replacement policies are not "pure" LFU or LRU. We exemplify the value of our inference framework by considering example applications.
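The setting can be sketched by generating the miss stream of an LRU cache fed a Zipf-like IRM request stream, which is all the inference agent gets to observe. This is a sketch of the setup, not the paper's actual litmus tests:

```python
import random
from collections import OrderedDict

def lru_miss_stream(requests, cache_size):
    """Feed a request stream through an LRU cache and return its miss stream."""
    cache, misses = OrderedDict(), []
    for obj in requests:
        if obj in cache:
            cache.move_to_end(obj)          # hit: mark most recently used
        else:
            misses.append(obj)              # miss: visible to the observer
            cache[obj] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return misses

# IRM request stream with generalized power-law (Zipf-like) popularity
rng = random.Random(0)
n_objects, skew = 100, 1.2
weights = [1.0 / (i + 1) ** skew for i in range(n_objects)]
reqs = rng.choices(range(n_objects), weights=weights, k=20_000)
misses = lru_miss_stream(reqs, cache_size=10)
# Litmus intuition: under LFU the hottest objects would eventually stop
# appearing in the miss stream, while under LRU even popular objects can
# still miss occasionally -- a distinguishing signature of the policy.
```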
Abstract:
The pervasive use of mobile technologies has provided new opportunities for organisations to achieve competitive advantage by using a value network of partners to create value for multiple users. The delivery of a mobile payment (m-payment) system is an example of a value network, as it requires the collaboration of multiple partners from diverse industries, each bringing their own expertise, motivations and expectations. Consequently, managing partnerships has been identified as a core competence required by organisations to form viable partnerships in an m-payment value network and an important factor in determining the sustainability of an m-payment business model. However, there is evidence that organisations lack this competence, a deficiency evident in the m-payment domain, where it has been identified as a contributing factor in a number of failed m-payment initiatives since 2000. In response to this organisational deficiency, this research project leverages the use of design thinking and visualisation tools to enhance communication and understanding between managers who are responsible for managing partnerships within the m-payment domain. By adopting a design science research approach, which is a problem-solving paradigm, the research builds and evaluates a visualisation tool in the form of a Partnership Management Canvas. In doing so, this study demonstrates that when organisations encourage their managers to adopt design thinking, as a way to balance their analytical thinking and intuitive thinking, communication and understanding between the partners increases. This can lead to a shared understanding and a shared commitment between the partners. In addition, the research identifies a number of key business model design issues that need to be considered by researchers and practitioners when designing an m-payment business model. As an applied research project, the study makes valuable contributions to the knowledge base and to the practice of management.