913 results for Deterministic Expander
Abstract:
Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of a seismic hazard analysis of India (6°-38° N and 68°-98° E) based on the deterministic approach, using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations covering the varied tectonic provinces of the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments, and shear zones associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grid cells of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each cell by considering all seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s were calculated for all grid points with a deterministic approach, using a code written in MATLAB. Epistemic uncertainty in the hazard definition was handled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. The hazard evaluation was also done without the logic-tree approach for comparison of the results. Contour maps showing the spatial variation of the hazard values are presented in the paper.
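To make the gridded deterministic computation concrete, here is a minimal sketch in Python (the paper's code is in MATLAB): for each grid-cell center, every source within the search radius is evaluated with a ground-motion relation, and the controlling (maximum) PHA is kept. The attenuation coefficients and sources below are placeholders for illustration, not any of the paper's 12 relations or its catalogue.

    import numpy as np

    # Placeholder attenuation coefficients (assumptions for illustration).
    C0, C1, C2, C3 = -3.5, 0.9, 1.2, 10.0

    def haversine_km(p, q):
        """Great-circle distance in km between two (lat, lon) points in degrees."""
        lat1, lon1, lat2, lon2 = map(np.radians, (*p, *q))
        a = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * np.arcsin(np.sqrt(a))

    def attenuation_pha(mw, r_km):
        """Generic rock-level PHA (g) from a relation of the common form
        ln(PHA) = c0 + c1*Mw - c2*ln(R + c3)."""
        return np.exp(C0 + C1 * mw - C2 * np.log(r_km + C3))

    def deterministic_pha(grid_point, sources, radius_km=300.0):
        """Deterministic hazard at one grid point: the maximum PHA over all
        sources (lat, lon, max_magnitude) within the search radius."""
        pha = 0.0
        for lat, lon, mmax in sources:
            r = haversine_km(grid_point, (lat, lon))
            if r <= radius_km:
                pha = max(pha, attenuation_pha(mmax, r))
        return pha

    # Hazard at one 0.1-degree grid-cell center from two hypothetical sources.
    print(deterministic_pha((13.05, 77.55), [(13.0, 77.0, 6.5), (14.2, 78.0, 7.0)]))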
Abstract:
This work presents novel achievable schemes for the 2-user symmetric linear deterministic interference channel with limited-rate transmitter cooperation and perfect secrecy constraints at the receivers. The proposed achievable scheme consists of a combination of interference cancelation, relaying of the other user's data bits, time sharing, and transmission of random bits, depending on the rate of the cooperative link and the relative strengths of the signal and the interference. The results show, for example, that the proposed scheme achieves the same rate as the capacity without the secrecy constraints in the initial part of the weak interference regime. Also, sharing random bits through the cooperative link can achieve a higher secrecy rate than sharing data bits in the very high interference regime. The results highlight the importance of limited transmitter cooperation in facilitating secure communications over 2-user interference channels.
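For readers unfamiliar with the channel model, the sketch below illustrates the standard 2-user symmetric linear deterministic model itself (not the authors' achievable scheme): each receiver sees its own signal through n bit levels and the interferer through m bit levels, formed with down-shift matrices over GF(2).

    import numpy as np

    def shift_matrix(q, k):
        """q x q down-shift matrix S^k: passes the top (q - k) input bits,
        shifted down by k (models loss of the lowest bit levels)."""
        S = np.zeros((q, q), dtype=int)
        for i in range(q - k):
            S[i + k, i] = 1
        return S

    def sldic_outputs(x1, x2, n, m):
        """Received signals of the 2-user symmetric linear deterministic IC:
        y_i = S^(q-n) x_i XOR S^(q-m) x_j over GF(2), with q = max(n, m)."""
        q = max(n, m)
        Sd, Sc = shift_matrix(q, q - n), shift_matrix(q, q - m)
        y1 = (Sd @ x1 + Sc @ x2) % 2
        y2 = (Sd @ x2 + Sc @ x1) % 2
        return y1, y2

    # Example: n = 5 direct bit levels, m = 3 cross bit levels
    # (alpha = m/n = 0.6, i.e., the moderate interference regime).
    x1 = np.array([1, 0, 1, 1, 0]); x2 = np.array([0, 1, 1, 0, 1])
    print(sldic_outputs(x1, x2, n=5, m=3))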
Abstract:
This paper derives outer bounds for the 2-user symmetric linear deterministic interference channel (SLDIC) with limited-rate transmitter cooperation and perfect secrecy constraints at the receivers. Five outer bounds are derived, under different assumptions of providing side information to receivers and partitioning the encoded message/output depending on the relative strength of the signal and the interference. The usefulness of these outer bounds is shown by comparing the bounds with the inner bound on the achievable secrecy rate derived by the authors in a previous work. Also, the outer bounds help to establish that sharing random bits through the cooperative link can achieve the optimal rate in the very high interference regime.
Abstract:
Retransmission protocols such as HDLC and TCP are designed to ensure reliable communication over noisy channels (i.e., channels that can corrupt messages). Thakkar et al. [15] have recently presented an algorithmic verification technique for deterministic streaming string transducer (DSST) models of such protocols. The verification problem is posed as equivalence checking between the specification and protocol DSSTs. In this paper, we argue that more general models need to be obtained using non-deterministic streaming string transducers (NSSTs). However, equivalence checking is undecidable for NSSTs. We present two classes of protocols whose models belong to a sub-class of NSSTs for which equivalence checking is decidable.
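As a toy illustration of what a DSST is (not the HDLC/TCP models of Thakkar et al. [15]), the sketch below implements a one-state transducer with a single string variable whose copyless update x := sigma.x computes the reverse of its input in one streaming pass, a function beyond ordinary one-way finite transducers.

    def dsst_reverse(word: str) -> str:
        """Minimal DSST: one state, one string variable x, copyless update
        x := sigma . x per input symbol, output function F(q) = x."""
        x = ""                 # the single string variable
        for sigma in word:     # one left-to-right streaming pass
            x = sigma + x      # copyless update: x := sigma . x
        return x               # emit x at end of input

    assert dsst_reverse("abc") == "cba"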
Abstract:
Recent studies on small-scale power generation with the organic Rankine cycle suggest superior performance of positive-displacement expanders compared to turbines. Scroll expanders in particular achieve high isentropic efficiencies due to lower leakage and frictional losses. The performance of scroll machines may be enhanced by the use of non-circular involute curves in place of circular involutes, resulting in non-uniform wall thickness. In this paper, a detailed moment analysis is performed for such an expander with a volumetric expansion ratio of 5, using thermodynamic models proposed earlier by one of the present authors. The working fluid considered in the power cycle is R-245fa with a scroll inlet temperature of 125°C for a gross power output of approximately 3.5 kW. The model developed in this paper is verified against an air scroll compressor available in the literature and then applied to an expander. The predicted small variation of moment with scroll motion recommends the use of a scroll expander without a flywheel over other positive-displacement expanders, e.g. reciprocating machines, where a flywheel is an essential component.
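The flywheel argument can be made tangible with a toy comparison. The Python sketch below uses assumed torque shapes and an assumed operating point (a nearly flat scroll profile with ~5% ripple versus an idealized single-acting reciprocating profile with the same mean moment, roughly consistent with ~3.5 kW at 3000 rpm); the fluctuation coefficients it prints are illustrative, not the paper's results.

    import numpy as np

    theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
    M_mean = 11.0  # N*m, roughly 3.5 kW at 3000 rpm (assumed)

    M_scroll = M_mean * (1.0 + 0.05 * np.sin(theta))              # ~5% ripple (assumed)
    M_recip = M_mean * np.clip(np.pi * np.sin(theta), 0.0, None)  # power stroke only

    def fluctuation(M):
        """Moment fluctuation coefficient (M_max - M_min) / M_mean."""
        return (M.max() - M.min()) / M.mean()

    # The small scroll fluctuation is what makes a flywheel unnecessary.
    print(f"scroll: {fluctuation(M_scroll):.2f}, reciprocating: {fluctuation(M_recip):.2f}")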
Abstract:
In this work, we study the well-known r-DIMENSIONAL k-MATCHING ((r, k)-DM) and r-SET k-PACKING ((r, k)-SP) problems. Given a universe U := U_1 ∪ ... ∪ U_r and an r-uniform family F ⊆ U_1 × ... × U_r, the (r, k)-DM problem asks if F admits a collection of k mutually disjoint sets. Given a universe U and an r-uniform family F ⊆ 2^U, the (r, k)-SP problem asks if F admits a collection of k mutually disjoint sets. We employ techniques based on dynamic programming and representative families. This leads to a deterministic algorithm with running time O(2.851^((r-1)k) · |F| · n log²(n) · log(W)) for the weighted version of (r, k)-DM, where W is the maximum weight in the input, and a deterministic algorithm with running time O(2.851^((r-0.5501)k) · |F| · n log²(n) · log(W)) for the weighted version of (r, k)-SP. Thus, we significantly improve the previous best known deterministic running times for (r, k)-DM and (r, k)-SP and the previous best known running times for their weighted versions. We rely on structural properties of (r, k)-DM and (r, k)-SP to develop algorithms that are faster than those that can be obtained by a standard use of representative sets. Incorporating the principles of iterative expansion, we obtain a better algorithm for (3, k)-DM, running in time O(2.004^(3k) · |F| · n log²(n)). We believe that this algorithm demonstrates an interesting application of representative families in conjunction with more traditional techniques. Furthermore, we present kernels of size O(e^r r (k-1)^r log(W)) for the weighted versions of (r, k)-DM and (r, k)-SP, improving the previous best known kernels of size O(r! r (k-1)^r log(W)) for these problems.
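For contrast with the running times above, a brute-force reference for (r, k)-SP is immediate but exponential in |F|; the sketch below (illustrative only, not the authors' representative-family algorithm) checks all k-subsets of the family for pairwise disjointness.

    from itertools import combinations

    def has_k_disjoint(family, k):
        """Brute-force (r, k)-Set Packing: does the family contain k mutually
        disjoint sets?  Time O(|family|^k); the paper's representative-family
        DP replaces this with a run time single-exponential in rk."""
        for choice in combinations(family, k):
            union = set().union(*choice)
            # The union has maximal size exactly when the sets are pairwise disjoint.
            if len(union) == sum(len(s) for s in choice):
                return True
        return False

    # A (3, 2)-SP instance: two disjoint triples exist ({1,2,3} and {4,5,6}).
    F = [{1, 2, 3}, {3, 4, 5}, {4, 5, 6}]
    assert has_k_disjoint(F, 2)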
Abstract:
The non-deterministic relationship between Bit Error Rate and Packet Error Rate is demonstrated for an optical media access layer in common use. We show that frequency components of coded, non-random data can cause this relationship.
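The baseline the abstract argues against is the memoryless-channel relationship, under which PER is a deterministic function of BER; a one-line sketch:

    def per_from_ber(ber: float, packet_bits: int) -> float:
        """PER under the usual i.i.d. bit-error assumption:
        PER = 1 - (1 - BER)^n for an n-bit packet.  Coded, non-random
        payloads break the independence assumption, so measured PER
        can depart from this curve."""
        return 1.0 - (1.0 - ber) ** packet_bits

    print(per_from_ber(1e-6, 12000))  # ~1.2% for a 1500-byte packet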
Abstract:
Motivated by the observation of rate effects on material failure, a model of nonlinear and nonlocal evolution is developed that includes both stochastic and dynamic effects. In phase space a transitional region prevails, which separates globally stable failure behavior from catastrophic behavior. Several probability functions are found to characterize the distinctive features of the evolution due to different degrees of nucleation, growth, and coalescence rates. The results may provide a better understanding of material failure.
Abstract:
We propose here a local exponential divergence plot which is capable of providing an alternative means of characterizing a complex time series. The suggested plot defines a time-dependent exponent and a 'plus' exponent. Based on their changes with the embedding dimension and delay time, a criterion for simultaneously estimating the minimal acceptable embedding dimension, the proper delay time, and the largest Lyapunov exponent has been obtained. When redefining the time-dependent exponent Λ(k) curves on a series of shells, we have found that whether a linear envelope to the Λ(k) curves exists can serve as a direct dynamical method of distinguishing chaos from noise.
Abstract:
We present a direct and dynamical method to distinguish low-dimensional deterministic chaos from noise. We define a series of time-dependent curves which are closely related to the largest Lyapunov exponent. For a chaotic time series, there exists an envelope to the time-dependent curves, while for white noise, or a noise with the same power spectrum as that of a chaotic time series, the envelope cannot be defined. When noise is added to a chaotic time series, the envelope is eventually destroyed as the amplitude of the noise increases.
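A minimal sketch of this family of methods (a Rosenstein-style time-dependent divergence exponent, assuming uniformly sampled data; it is in the spirit of the two abstracts above, not the authors' exact Λ(k) construction): for each embedded point, track how fast its nearest neighbour diverges over k steps. A linear envelope in k signals deterministic chaos, and its slope estimates the largest Lyapunov exponent.

    import numpy as np

    def time_dependent_exponent(x, m=3, tau=1, k_max=30, theiler=10):
        """Mean log-divergence Lambda(k) of initially nearest neighbours
        after k steps, from a delay embedding of dimension m and delay tau."""
        N = len(x) - (m - 1) * tau
        Y = np.column_stack([x[i * tau : i * tau + N] for i in range(m)])
        n_pts = N - k_max
        lam = np.zeros(k_max)
        for i in range(n_pts):
            d = np.linalg.norm(Y[:n_pts] - Y[i], axis=1)
            d[max(0, i - theiler) : i + theiler + 1] = np.inf  # Theiler window
            j = int(np.argmin(d))  # nearest neighbour of point i
            for k in range(1, k_max + 1):
                dk = np.linalg.norm(Y[i + k] - Y[j + k])
                if dk > 0 and d[j] > 0:
                    lam[k - 1] += np.log(dk / d[j])
        return lam / n_pts  # Lambda(k); divide by k * dt for a rate

    # Logistic map at r = 4: chaotic, with largest Lyapunov exponent ln 2.
    x = np.empty(2000); x[0] = 0.3
    for t in range(1999):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    print(time_dependent_exponent(x)[:5])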
Abstract:
Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in the solutions. In this thesis, we propose efficient numerical methods for both deterministic and stochastic PDEs based on model reduction techniques.
For the deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part whose magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth and can be resolved on a regular coarse mesh grid. Furthermore, we provide error analysis and show that the solution to the effective equation plus a correction term is close to the original multiscale solution.
For the stochastic PDEs, we propose a model-reduction-based data-driven stochastic method and a multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest scale to the largest scale is not too small, we propose the multiscale data-driven stochastic method. We construct a data-driven stochastic basis and solve the coupled deterministic PDEs to obtain the solutions. For more challenging problems, we propose the multiscale multilevel Monte Carlo method. We apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh grid. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of some stochastic quantities.
For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also demonstrate the reduction in computational cost in the numerical examples.
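As an illustration of the KL expansion step mentioned above (a generic sketch with an assumed exponential covariance kernel on a 1-D grid, not the thesis code): eigendecompose the covariance and keep the leading modes, so a random field is approximated as a_bar(x) + sum_i sqrt(lambda_i) phi_i(x) xi_i.

    import numpy as np

    n, L = 200, 0.1                    # grid size, correlation length (assumed)
    x = np.linspace(0.0, 1.0, n)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / L)  # exponential covariance

    lam, phi = np.linalg.eigh(C)       # eigenvalues in ascending order
    lam, phi = lam[::-1], phi[:, ::-1] # sort descending

    m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95) + 1  # keep 95% energy
    xi = np.random.default_rng(0).standard_normal(m)
    field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)  # one truncated-KL sample
    print(f"kept {m} of {n} modes")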
Abstract:
We investigated four unique methods for achieving scalable, deterministic integration of quantum emitters into ultra-high-Q/V photonic crystal cavities: selective-area heteroepitaxy, engineered photoemission from silicon nanostructures, wafer bonding and dimensional reduction of III-V quantum wells, and cavity-enhanced optical trapping. In these areas, we were able to demonstrate site-selective heteroepitaxy, size-tunable photoluminescence from silicon nanostructures, Purcell modification of QW emission spectra, and limits of cavity-enhanced optical trapping designs which exceed any reports in the literature and suggest the feasibility of capturing and detecting nanostructures with dimensions below 10 nm. In addition to process scalability and the requirement of achieving accurate spectral and spatial overlap between the emitter and the cavity, these techniques paid specific attention to the ability to separate the cavity and emitter material systems, in order to allow each to be selected independently and eventually enable monolithic integration with other photonic and electronic circuitry.
We also developed an analytic photonic crystal design process yielding optimized cavity tapers with minimal computational effort, and reported a general cavity modification which exhibits improved fabrication tolerance by relying exclusively on positional rather than dimensional tapering. We compared several experimental coupling techniques for device characterization. Significant effort was devoted to optimizing cavity fabrication, including the use of atomic layer deposition to improve surface quality, exploration of factors affecting design fracturing, and automated analysis of SEM images. Using the optimized fabrication procedures, we experimentally demonstrated 1D photonic crystal nanobeam cavities exhibiting the highest Q/V reported on substrate. Finally, we analyzed the bistable behavior of the devices to quantify the nonlinear optical response of our cavities.
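For scale, the figure of merit Q/V enters the textbook Purcell factor F_p = (3/(4pi^2)) (lambda/n)^3 (Q/V); the back-of-the-envelope calculation below uses assumed values for a silicon nanobeam cavity, not the thesis measurements.

    import math

    # Assumed operating point (illustrative, not the thesis results).
    Q = 1.0e5                  # loaded quality factor
    V_cubic_wavelengths = 0.5  # mode volume in units of (lambda/n)^3

    # With V expressed in (lambda/n)^3 units, the (lambda/n)^3 factor cancels.
    F_p = 3.0 / (4.0 * math.pi ** 2) * Q / V_cubic_wavelengths
    print(f"Purcell factor ~ {F_p:.0f}")  # roughly 1.5e4 emission enhancement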