136 results for deterministic fractals


Relevance: 10.00%

Abstract:

The seismic hazard of any region depends upon three components: the probable earthquake location, the maximum earthquake magnitude and the attenuation equation. This paper presents a representative way of estimating these three components considering region-specific seismotectonic features. Rupture Based Seismic Hazard Analysis (RBSHA), given by Anbazhagan et al. (2011), is used to determine probable future earthquake locations. The approach is verified on earthquake data of the Bhuj region: probable earthquake locations identified using data up to the year 2000 match well with the locations reported after 2000. Coimbatore City is then selected as the study area, to develop a representative seismic hazard map using the RBSHA approach and to compare it with deterministic seismic hazard analysis. Probable future earthquake zones for Coimbatore are located considering the rupture phenomenon as per the energy release theory discussed by Anbazhagan et al. (2011). The rupture character of the region has been established by estimating the subsurface rupture length of each source, normalized with respect to the length of the source. The average rupture length of a source with respect to its total length is found to be similar for most sources in the region; this ratio is called the rupture character of the region. Maximum magnitudes of probable zones are estimated considering nearby seismic sources and the established regional rupture character. Representative GMPEs for the study area have been selected by carrying out an efficacy test using the average log-likelihood value (LLH) as a ranking estimator and considering the isoseismal map. A new seismic hazard map of Coimbatore has been developed using these regional representative parameters: probable earthquake locations, maximum earthquake magnitudes and the best-suited GMPEs.
The new hazard map gives acceleration values at bedrock for the maximum possible earthquakes. These results are compared with the deterministic seismic hazard map and with recently published probabilistic seismic hazard values. (C) 2014 Elsevier B.V. All rights reserved.
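As a minimal sketch of the efficacy test mentioned above: GMPEs are ranked by the average negative log-likelihood (LLH) of the observed ground motions under each model's prediction, with a lower LLH indicating a better-fitting model. The function and the data below are illustrative assumptions (normal residuals, hypothetical observations), not the paper's actual models or values.

```python
import math

def llh_score(observed, predicted_mean, predicted_sigma):
    """Average negative log2-likelihood of observations under a GMPE's
    normal prediction; lower LLH = better-fitting model."""
    total = 0.0
    for x, mu, sd in zip(observed, predicted_mean, predicted_sigma):
        # normal pdf evaluated at the observed (log) ground motion
        pdf = math.exp(-((x - mu) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))
        total += math.log2(pdf)
    return -total / len(observed)

# Hypothetical residuals: the first GMPE tracks the observations more closely,
# so it receives the smaller (better) LLH score.
obs = [0.10, 0.20, 0.15]
print(llh_score(obs, [0.11, 0.19, 0.16], [0.05, 0.05, 0.05]))
print(llh_score(obs, [0.30, 0.40, 0.35], [0.05, 0.05, 0.05]))
```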

Relevance: 10.00%

Abstract:

Landslides are a major natural hazard affecting most hilly regions around the world. In India, significant damage due to earthquake-induced landslides has been reported in the Himalayan region and in the Western Ghats. There is therefore a need for a quantitative macro-level landslide hazard assessment of the Indian subcontinent to identify the regions with high hazard. In the present study, the seismic landslide hazard for the entire state of Karnataka, India, was assessed using a topographic slope map derived from Digital Elevation Model (DEM) data. The available ASTER DEM data, resampled to 50 m resolution, were used to derive the slope map of the entire state. Considering a linear source model, deterministic seismic hazard analysis was carried out to estimate the peak horizontal acceleration (PHA) at bedrock for each grid point with a terrain angle of 10° or more. The surface-level PHA was estimated using a nonlinear site amplification technique, considering B-type NEHRP site class. Based on the surface-level PHA and slope angle, the seismic landslide hazard for each grid point was estimated, in terms of the static factor of safety required to resist landslide, using Newmark's analysis. The analysis was carried out at the district level, and landslide hazard maps for all districts of Karnataka were developed first; these were then merged to obtain a quantitative seismic landslide hazard map of the entire state. Spatial variations in the landslide hazard for all districts, as well as for the state as a whole, are presented in this paper. The study shows that the Western Ghat region of Karnataka has high landslide hazard, in the sense that the static factor of safety required to resist landslide there is very high.
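The mapped quantity, the static factor of safety required to resist sliding, can be sketched from Newmark's rigid-block idea: the critical acceleration is a_c = (FS - 1)·g·sin(slope), so requiring a_c to match the surface PHA fixes the needed FS. This is a simplified illustrative form under assumed units (PHA in g), not the paper's full analysis.

```python
import math

def required_static_fs(pha_g, slope_deg):
    """Static factor of safety needed so that the Newmark critical
    acceleration a_c = (FS - 1) * g * sin(slope) equals the surface PHA.
    pha_g is the peak horizontal acceleration in units of g."""
    return 1.0 + pha_g / math.sin(math.radians(slope_deg))

# Stronger shaking or steeper slopes raise the required factor of safety.
print(round(required_static_fs(0.15, 20.0), 2))
print(round(required_static_fs(0.30, 20.0), 2))
```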

Relevance: 10.00%

Abstract:

The Cubic Sieve Method for solving the Discrete Logarithm Problem in prime fields requires a nontrivial solution to the Cubic Sieve Congruence (CSC) x^3 ≡ y^2 z (mod p), where p is a given prime. A nontrivial solution must also satisfy x^3 ≠ y^2 z and 1 ≤ x, y, z < p^α, where α is a given real number with 1/3 < α ≤ 1/2. The CSC problem is to find an efficient algorithm for obtaining a nontrivial solution to CSC. CSC can be parametrized as x ≡ v^2 z (mod p) and y ≡ v^3 z (mod p). In this paper, we give a deterministic polynomial-time (O(ln^3 p) bit operations) algorithm that determines, for a given v, a nontrivial solution to CSC, if one exists; previously this took Õ(p^α) time in the worst case. We relate the CSC problem to the gap problem of fractional part sequences, in which one must determine the non-negative integers N satisfying the fractional part inequality {θN} < φ (θ and φ are given real numbers). The correspondence between the two problems is that determining the parameter z in the former corresponds to determining N in the latter. We also show, in the α = 1/2 case of CSC, that for a certain class of primes the CSC problem can be solved deterministically in Õ(p^(1/3)) time, compared to the previous best of Õ(p^(1/2)). It is empirically observed that about one in three primes is covered by this class. (C) 2013 Elsevier B.V. All rights reserved.
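The parametrization in the abstract can be checked directly: for any v and z, setting x ≡ v^2 z and y ≡ v^3 z (mod p) gives x^3 = v^6 z^3 = (v^3 z)^2 z = y^2 z (mod p). The sketch below only verifies this identity; the hard part of the CSC problem, finding z so that x, y, z also satisfy the size bounds 1 ≤ x, y, z < p^α nontrivially, is what the paper's algorithm addresses.

```python
# Sketch of the CSC parametrization x ≡ v^2 z, y ≡ v^3 z (mod p).
# p, v, z below are arbitrary illustrative values, not from the paper.
def csc_candidate(p, v, z):
    x = pow(v, 2, p) * z % p
    y = pow(v, 3, p) * z % p
    return x, y

p = 1009  # a prime, chosen only for illustration
z = 17
x, y = csc_candidate(p, v=5, z=z)
# The congruence x^3 ≡ y^2 z (mod p) holds identically for this family:
assert pow(x, 3, p) == (pow(y, 2, p) * z) % p
print(x, y)
```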

Relevance: 10.00%

Abstract:

We develop iterative diffraction tomography algorithms, which are similar to the distorted Born algorithms, for inverting scattered intensity data. Within the Born approximation, the unknown scattered field is expressed as a multiplicative perturbation to the incident field. With this, the forward equation becomes stable, which helps us compute nearly oscillation-free solutions that have immediate bearing on the accuracy of the Jacobian computed for use in a deterministic Gauss-Newton (GN) reconstruction. However, since the data are inherently noisy and the sensitivity of measurement to refractive index away from the detectors is poor, we report a derivative-free evolutionary stochastic scheme, providing strictly additive updates in order to bridge the measurement-prediction misfit, to arrive at the refractive index distribution from intensity transport data. The superiority of the stochastic algorithm over the GN scheme for similar settings is demonstrated by the reconstruction of the refractive index profile from simulated and experimentally acquired intensity data. (C) 2014 Optical Society of America
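To make the role of the Jacobian in a Gauss-Newton (GN) reconstruction concrete, here is a deliberately tiny stand-in, not the paper's tomography solver: one scalar parameter k in the model y(t) = exp(-k·t), updated by delta = -(JᵀJ)⁻¹ Jᵀ r, where r is the residual vector and J its derivative with respect to k. The data are noise-free and synthetic.

```python
import math

def gauss_newton_fit(ts, ys, k0=0.5, iters=25):
    """Fit k in y(t) = exp(-k*t) by Gauss-Newton iteration."""
    k = k0
    for _ in range(iters):
        r = [y - math.exp(-k * t) for t, y in zip(ts, ys)]  # residuals
        J = [t * math.exp(-k * t) for t in ts]              # d r_i / d k
        k -= sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
    return k

# Synthetic noise-free data with true k = 1.0; GN recovers it from k0 = 0.5.
ts = [0.5, 1.0, 1.5, 2.0]
ys = [math.exp(-1.0 * t) for t in ts]
print(round(gauss_newton_fit(ts, ys), 3))
```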

Relevance: 10.00%

Abstract:

Northeast India is one of the most seismically active regions in the world, with, on average, more than seven earthquakes of magnitude 5.0 and above per year. Reliable seismic hazard assessment can provide the necessary design inputs for earthquake-resistant design of structures in this region. In this study, both deterministic and probabilistic methods have been applied for seismic hazard assessment of the Tripura and Mizoram states at the bedrock level. An updated earthquake catalogue was compiled from various national and international seismological agencies for the period 1731 to 2011. Homogenization, declustering and data completeness analysis of the events were carried out before the hazard evaluation. Seismicity parameters were estimated using the Gutenberg-Richter (G-R) relationship for each source zone. Based on seismicity, tectonic features and fault rupture mechanism, the region was divided into six major subzones. Region-specific correlations were used for magnitude conversion to homogenize earthquake size. Ground motion equations (Atkinson and Boore 2003; Gupta 2010) were validated against observed PGA (peak ground acceleration) values before use in the hazard evaluation. The hazard was estimated using linear sources identified in and around the study area. Results are presented in the form of PGA, using both DSHA (deterministic seismic hazard analysis) and PSHA (probabilistic seismic hazard analysis) with 2 and 10% probability of exceedance in 50 years, and spectral acceleration (T = 0.2 s, 1.0 s) for both states (2% probability of exceedance in 50 years). These results provide inputs for planning risk reduction strategies, developing risk acceptance criteria and carrying out financial analysis of possible damages in the study area, with a comprehensive analysis and higher-resolution hazard mapping.
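The Gutenberg-Richter recurrence relationship used to characterise each source zone has the form log10 N(≥M) = a - b·M, where N is the annual number of events of magnitude M or larger. The a and b values below are hypothetical placeholders for illustration, not the estimates obtained for the Tripura/Mizoram subzones.

```python
def annual_rate(m, a=4.0, b=0.9):
    """Gutenberg-Richter law: annual number of earthquakes with
    magnitude >= m, given hypothetical a and b values."""
    return 10 ** (a - b * m)

# The rate drops by a factor of 10**b per unit increase in magnitude.
print(round(annual_rate(5.0), 3))
print(round(annual_rate(6.0), 3))
```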

Relevance: 10.00%

Abstract:

A link level reliable multicast requires a channel access protocol to resolve the collision of feedback messages sent by multicast data receivers. Several deterministic media access control protocols have been proposed to attain high reliability, but with large delay; other protocols achieve the least delay but give only probabilistic guarantees of reliability. In this paper, we propose a virtual token-based channel access and feedback protocol (VTCAF) for link level reliable multicasting. The VTCAF protocol introduces a virtual (implicit) token passing mechanism based on carrier sensing to avoid collisions between feedback messages. Its delay performance is improved by reducing the number of feedback messages. The VTCAF protocol is also parametric in nature and can easily trade off reliability against delay as per the requirements of the underlying application. Such a cross-layer design approach is useful for a variety of multicast applications that require reliable communication with different levels of reliability and delay performance. We have analyzed our protocol to evaluate various performance parameters at different packet loss rates and compared its performance with that of existing protocols. The protocol has also been simulated using the Castalia network simulator to evaluate the same performance parameters. Simulation and analytical results together show that the VTCAF protocol considerably reduces average access delay while ensuring very high reliability at the same time.
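A toy model of the virtual-token idea (an illustration of the concept, not the actual VTCAF protocol): receivers hold deterministically ordered sensing slots, so feedback transmissions never collide, and a receiver stays silent unless it actually lost the packet, which reduces the number of feedback messages and hence the access delay. The slot duration and loss pattern below are hypothetical.

```python
SLOT_TIME_MS = 0.4  # hypothetical carrier-sensing slot duration

def feedback_schedule(lost_packet_ids, num_receivers):
    """Return (receiver_id, transmit_time_ms) for every NACK sent.
    Receivers take slots in a fixed (virtual token) order, so no two
    feedback messages can collide."""
    schedule = []
    for slot in range(num_receivers):  # slot index == receiver id here
        if slot in lost_packet_ids:
            schedule.append((slot, slot * SLOT_TIME_MS))
    return schedule

# Only receivers 2 and 5 (hypothetical losses) send feedback: 2 messages,
# each in its own collision-free slot.
print(feedback_schedule({2, 5}, num_receivers=8))
```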

Relevance: 10.00%

Abstract:

Programming environments for smartphones expose a concurrency model that combines multi-threading and asynchronous event-based dispatch. While this enables the development of efficient and feature-rich applications, unforeseen thread interleavings coupled with non-deterministic reorderings of asynchronous tasks can lead to subtle concurrency errors in the applications. In this paper, we formalize the concurrency semantics of the Android programming model. We further define the happens-before relation for Android applications, and develop a dynamic race detection technique based on this relation. Our relation generalizes the so far independently studied happens-before relations for multi-threaded programs and single-threaded event-driven programs. Additionally, our race detection technique uses a model of the Android runtime environment to reduce false positives. We have implemented a tool called DROIDRACER. It generates execution traces by systematically testing Android applications and detects data races by computing the happens-before relation on the traces. We analyzed 15 Android applications, including popular applications such as Facebook, Twitter and K-9 Mail. Our results indicate that data races are prevalent in Android applications, and that DROIDRACER is an effective tool to identify data races.
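A minimal sketch of happens-before-based race detection (an illustration of the relation such tools compute, not DROIDRACER's implementation): two accesses to the same field constitute a race if at least one is a write and neither access's vector clock orders it before the other, i.e. the accesses are concurrent. The event representation here is an assumption made for the sketch.

```python
def happens_before(vc_a, vc_b):
    """True if the event with vector clock vc_a is ordered before vc_b."""
    return all(x <= y for x, y in zip(vc_a, vc_b)) and vc_a != vc_b

def is_race(acc_a, acc_b):
    """Each access is (vector_clock, field_name, 'read' or 'write')."""
    (vc_a, field_a, kind_a), (vc_b, field_b, kind_b) = acc_a, acc_b
    if field_a != field_b or (kind_a == "read" and kind_b == "read"):
        return False  # different fields, or two reads, never race
    # Race iff the two accesses are unordered by happens-before.
    return not happens_before(vc_a, vc_b) and not happens_before(vc_b, vc_a)

# Thread 1 writes mCount, thread 2 reads it with an unordered clock: a race.
w = ([1, 0], "mCount", "write")
r = ([0, 1], "mCount", "read")
print(is_race(w, r))  # True: the accesses are concurrent
```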

Relevance: 10.00%

Abstract:

The problem of delay-constrained, energy-efficient broadcast in cooperative wireless networks is NP-complete. While a centralised setting admits some heuristic solutions, designing heuristics for distributed implementation poses significant challenges. This is all the more so in wireless sensor networks (WSNs), where nodes are deployed randomly and the topology changes dynamically due to node failures/joins and environmental conditions. This paper demonstrates that careful design of the network infrastructure can achieve guaranteed delay bounds and energy efficiency, and even meet quality-of-service requirements, during broadcast. The paper makes three prime contributions. First, we present an optimal lower bound on energy consumption for broadcast that is tighter than what has been previously proposed. Next, we discuss iSteiner, a lightweight, distributed and deterministic algorithm for creating the network infrastructure. iPercolate is the algorithm that exploits this structure to cooperatively broadcast information with guaranteed delivery and delay bounds, while allowing real-time traffic to pass undisturbed.

Relevance: 10.00%

Abstract:

Most cities in India have undergone rapid development in recent decades, and many rural localities are being transformed into urban hotspots. This development brings land use/land cover (LULC) changes that affect the runoff response of catchments, often evident as increases in runoff peaks, volume and velocity in the drain network. Many existing storm water drains are in a dilapidated state owing to improper maintenance or inadequate design. Drains are conventionally designed using procedures based on anticipated future conditions, and the values of the parameters/variables associated with the design of the network are traditionally treated as deterministic. In reality, these parameters/variables are uncertain due to natural and/or inherent randomness, and this uncertainty needs to be considered to design a storm water drain network that can effectively convey the discharge. The present study evaluates the performance of an existing storm water drain network in Bangalore, India, through reliability analysis by the Advanced First-Order Second Moment (AFOSM) method. In the reliability analysis, the roughness coefficient, slope and conduit dimensions are treated as random variables. Performance of the existing network is evaluated considering three failure modes: the first occurs when runoff exceeds the capacity of the storm water drain network; the second when the actual flow velocity in the network exceeds the maximum allowable velocity for erosion control; and the third when the minimum flow velocity is less than the minimum allowable velocity for deposition control. In the analysis, the runoff generated from subcatchments of the study area and the flow velocity in the storm water drains are estimated using the Storm Water Management Model (SWMM). Results from the study are presented and discussed.
The reliability values are low under the three failure modes, indicating a need to redesign several of the conduits to improve their reliability. This study finds use in devising plans for expansion of the Bangalore storm water drain system. (C) 2015 The Authors. Published by Elsevier B.V.
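For intuition about the reliability numbers, here is a deliberately simplified first-order illustration (the study uses the iterative AFOSM method; this linear limit-state version only shows the idea). For failure mode 1, with capacity C and runoff demand D treated as independent normal variables, the reliability index is β = (μ_C - μ_D)/√(σ_C² + σ_D²) and the reliability is Φ(β). All numbers below are hypothetical.

```python
import math

def reliability_index(mu_c, sd_c, mu_d, sd_d):
    """beta for the limit state g = C - D with independent normal C, D."""
    return (mu_c - mu_d) / math.sqrt(sd_c ** 2 + sd_d ** 2)

def reliability(beta):
    """P(no failure) = Phi(beta), via the error function."""
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

# Hypothetical conduit: capacity ~N(12, 1.5), demand ~N(9, 2) (m^3/s).
beta = reliability_index(mu_c=12.0, sd_c=1.5, mu_d=9.0, sd_d=2.0)
print(round(beta, 2), round(reliability(beta), 3))
```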

Relevance: 10.00%

Abstract:

A closed-form expression for the dual of the dissipation potential is derived within the framework of irreversible thermodynamics using the principles of dimensional analysis and self-similarity. Through this potential, a damage evolution law is proposed for concrete under fatigue loading, using the concepts of damage mechanics in conjunction with fracture mechanics. The proposed law is used to compute damage in a volume element when a member is subjected to fatigue loading. The evolution of damage from microcracking to macrocracking of the entire member is captured through a series of volume elements failing one after the other. The number of loading cycles to failure of the member is obtained as the sum of the numbers of cycles to failure of the individual volume elements. A parametric study is conducted to determine the effect of the size of the volume element on the model's prediction of fatigue life. A global damage index is also defined, and the residual moment carrying capacity of damaged beams is evaluated. Through a deterministic sensitivity analysis, it is found that the load range and the maximum aggregate size are the most influential parameters for the fatigue life of a plain concrete beam.

Relevance: 10.00%

Abstract:

The main objective of this paper is to develop a new method for estimating the maximum magnitude (M_max) considering the regional rupture character. The proposed method is explained in detail and examined for both an intraplate and an active region. Seismotectonic data were collected for both regions, and seismic study area (SSA) maps were generated for radii of 150, 300, and 500 km. The regional rupture character was established by considering the percentage fault rupture (PFR), which is the ratio of subsurface rupture length (RLD) to total fault length (TFL). PFR is used to arrive at RLD, which in turn is used to estimate the maximum magnitude of each seismic source. Maximum magnitudes for both regions were estimated and compared with existing methods for determining M_max values. The proposed method gives similar M_max values irrespective of SSA radius and seismicity. Further, seismicity parameters such as the magnitude of completeness (M_c), the 'a' and 'b' parameters and the maximum observed magnitude (M_max^obs) were determined for each SSA and used to estimate M_max with all the existing methods. It is observed that the existing deterministic and probabilistic M_max estimation methods are sensitive to the SSA radius, M_c, the a and b parameters and M_max^obs, whereas M_max determined by the proposed method is a function of the rupture character rather than of the seismicity parameters. It was also observed that the intraplate region has a lower PFR than the active seismic region.
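A sketch of the rupture-character idea: the regional PFR = RLD/TFL, established from well-studied sources, is applied to a source's total length to obtain its expected subsurface rupture length, which is then converted to magnitude. The conversion below uses the Wells and Coppersmith (1994) all-slip-type subsurface-rupture-length relation, M = 4.38 + 1.49·log10(RLD), as an assumed stand-in for whichever relation the authors adopt; the fault length and PFR are hypothetical.

```python
import math

def m_max_from_rupture(total_fault_length_km, pfr):
    """Maximum magnitude from the expected subsurface rupture length,
    RLD = PFR * TFL, via Wells & Coppersmith (1994), all slip types."""
    rld = pfr * total_fault_length_km       # expected rupture length (km)
    return 4.38 + 1.49 * math.log10(rld)

# Hypothetical 80 km source with a regional rupture character of 0.25:
print(round(m_max_from_rupture(80.0, 0.25), 1))
```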

Relevance: 10.00%

Abstract:

The objective of this paper is to develop seismic hazard maps of Patna district considering region-specific maximum magnitudes and ground motion prediction equations (GMPEs), using both worst-case deterministic and classical probabilistic approaches. Patna, located near the seismically active Himalayan region, has been subjected to destructive earthquakes such as the 1803 and 1934 Bihar-Nepal earthquakes. Based on past seismicity and the earthquake damage distribution, linear sources and seismic events have been considered within a radius of about 500 km around the Patna district center. Maximum magnitude (M_max) has been estimated using the conventional approaches, namely the maximum observed magnitude (M_max^obs) and/or an increment of 0.5, the Kijko method, and regional rupture characteristics; the maximum of these three is taken as the maximum probable magnitude for each source. Twenty-seven GMPEs were found applicable to the Patna region. Of these, suitable region-specific GMPEs were selected by performing the `efficacy test,' which makes use of the log-likelihood. The maximum magnitudes and selected GMPEs were used to estimate PGA and spectral acceleration at 0.2 and 1 s, mapped for the worst-case deterministic approach and for 2 and 10% probability of exceedance in 50 years. Furthermore, the seismic hazard results were used to develop deaggregation plots quantifying the contributions of seismic sources in terms of magnitude and distance. In this study, a normalized site-specific design spectrum has been developed by dividing the hazard map into four zones based on the peak ground acceleration values. This site-specific response spectrum has been compared with the recent 2011 Sikkim earthquake and the Indian seismic code IS 1893.

Relevance: 10.00%

Abstract:

In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large number of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and in avoiding poor local minima. The algorithm is simple and easy to implement, and is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.

Relevance: 10.00%

Abstract:

This paper presents a macro-level seismic landslide hazard assessment for the entire state of Sikkim, India, based on Newmark's methodology. The slope map of Sikkim was derived from the ASTER Global Digital Elevation Model (GDEM). Seismic shaking in terms of peak horizontal acceleration (PHA) at bedrock level was estimated from deterministic seismic hazard analysis (DSHA), considering a point source model. The peak horizontal acceleration at the surface level was estimated based on a nonlinear site amplification technique, considering B-type NEHRP site class. The surface PHA was considered to induce driving forces on slopes, thus causing landslides. Knowing the surface-level PHA and slope angle, the seismic landslide hazard assessment for each grid point was carried out using Newmark's analysis. The critical static factor of safety required to resist landslide for the PHA (obtained from the deterministic analysis) was evaluated, and its spatial variation throughout the study area is presented. For any slope in the study area, if the in-situ (available) static factor of safety is greater than the static factor of safety required to resist landslide as predicted in the present study, that slope is considered to be safe.

Relevance: 10.00%

Abstract:

Response analysis of a linear structure with uncertainties in both structural parameters and external excitation is considered here. When such an analysis is carried out using the spectral stochastic finite element method (SSFEM), the computational cost often tends to be prohibitive due to the rapid growth of the number of spectral bases with the number of random variables and the order of expansion. For instance, if the excitation contains a random frequency, or if it is a general random process, then a good approximation of these excitations using polynomial chaos expansion (PCE) involves a large number of terms, which leads to very high cost. To address this issue of high computational cost, a hybrid method is proposed in this work. In this method, first the random eigenvalue problem is solved using the weak formulation of SSFEM, which involves solving a system of deterministic nonlinear algebraic equations to estimate the PCE coefficients of the random eigenvalues and eigenvectors. Then the response is estimated using a Monte Carlo (MC) simulation, where the modal bases are sampled from the PCE of the random eigenvectors estimated in the previous step, followed by a numerical time integration. It is observed through numerical studies that the proposed method successfully reduces the computational burden compared with either a pure SSFEM or a pure MC simulation, and is more accurate than a perturbation method. The computational gain improves as the problem size, in terms of degrees of freedom, grows. It also improves as the time span of interest reduces.
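The "rapid growth of the number of spectral bases" can be made concrete: a total-order PCE in n random variables truncated at order p has C(n + p, p) terms, which grows combinatorially in both n and p. This is the standard counting formula, shown here only to illustrate the cost the hybrid method is designed to avoid.

```python
from math import comb

def pce_terms(n_vars, order):
    """Number of basis terms in a total-order polynomial chaos expansion
    with n_vars random variables truncated at the given order."""
    return comb(n_vars + order, order)

# Growth with the number of random variables and the expansion order:
print(pce_terms(4, 3), pce_terms(10, 3), pce_terms(20, 4))
```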