100 results for Intervals modals
Abstract:
We address the problem of local-polynomial modeling of smooth time-varying signals with unknown functional form, in the presence of additive noise. The problem formulation is in the time domain and the polynomial coefficients are estimated in the pointwise minimum mean square error (PMMSE) sense. The choice of the window length for local modeling introduces a bias-variance tradeoff, which we solve optimally by using the intersection-of-confidence-intervals (ICI) technique. The combination of the local polynomial model and the ICI technique gives rise to an adaptive signal model equipped with a time-varying PMMSE-optimal window length whose performance is superior to that obtained by using a fixed window length. We also evaluate the sensitivity of the ICI technique with respect to the confidence interval width. Simulation results on electrocardiogram (ECG) signals show that at 0 dB signal-to-noise ratio (SNR), one can achieve about 12 dB improvement in SNR. Monte-Carlo performance analysis shows that the performance is comparable to the basic wavelet techniques. For 0 dB SNR, the adaptive window technique yields about 2-3 dB higher SNR than wavelet regression techniques and for SNRs greater than 12 dB, the wavelet techniques yield about 2 dB higher SNR.
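The ICI rule described above can be sketched compactly: as the window grows, the confidence interval of the local estimate narrows but drifts due to bias, and the rule keeps the largest window for which all intervals so far still share a common point. A minimal illustrative sketch (toy estimate/deviation values, not the authors' implementation; `gamma` is the confidence-interval width parameter whose sensitivity the paper evaluates):

```python
import numpy as np

def ici_window(estimates, sigmas, gamma=2.0):
    """ICI rule: given estimates and their standard deviations for
    increasing window lengths, return the index of the largest window
    whose confidence interval still intersects all previous ones."""
    lower, upper = -np.inf, np.inf
    best = 0
    for i, (x, s) in enumerate(zip(estimates, sigmas)):
        lower = max(lower, x - gamma * s)
        upper = min(upper, x + gamma * s)
        if lower > upper:            # intervals no longer share a point
            break
        best = i
    return best

# Toy values: the estimate drifts (bias) as sigma shrinks with window length
est = [1.00, 1.02, 1.05, 1.30, 1.80]
sig = [0.20, 0.10, 0.05, 0.03, 0.02]
print(ici_window(est, sig))   # → 2
```

A smaller `gamma` makes the rule stop earlier (shorter windows, less bias, more variance), which is exactly the tradeoff the abstract refers to.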
Abstract:
This paper presents an overview of seismic microzonation and the grade/level based study, along with methods used for estimating hazard. The principles of seismic microzonation along with some current practices are discussed. A summary of seismic microzonation experiments carried out in India is presented. A detailed work of seismic microzonation of Bangalore is presented as a case study. In this case study, a seismotectonic map for the microzonation area has been developed covering a 350 km radius around Bangalore, India, using seismicity and seismotectonic parameters of the region. For seismic microzonation, the Bangalore Mahanagar Palike (BMP) area of 220 km2 has been selected as the study area. Seismic hazard analysis has been carried out using deterministic as well as probabilistic approaches. Synthetic ground motion at 653 locations, a recurrence relation and peak ground acceleration maps at rock level have been generated. A detailed site characterization has been carried out using boreholes with standard penetration test (SPT) "N" values and geophysical data. The base map and a 3-dimensional subsurface borehole model have been generated for the study area using a geographical information system (GIS). The multichannel analysis of surface waves (MASW) method has been used to generate one-dimensional shear wave velocity profiles at 58 locations and two-dimensional profiles at 20 locations. These shear wave velocities are used to estimate the equivalent shear wave velocity in the study area at 5 m intervals up to a depth of 30 m. Because of the wide variation in rock depth, the equivalent shear wave velocity for the soil overburden thickness alone has been estimated and mapped using ArcGIS 9.2. Based on the equivalent shear wave velocity of the soil overburden thickness, the study area is classified as "site class D".
A site response study has been carried out using geotechnical properties and synthetic ground motions with the program SHAKE2000. The soil in the study area is classified as soil with moderate amplification potential. Site response results obtained using standard penetration test (SPT) "N" values and shear wave velocity are compared; it is found that the results based on shear wave velocity are lower than the results based on SPT "N" values. Further, the predominant frequency of the soil column has been estimated based on ambient noise survey measurements using L4-3D short period sensors equipped with Reftek 24-bit digital acquisition systems. The predominant frequency obtained from the site response study is compared with the ambient noise survey. In general, predominant frequencies in the study area vary from 3 Hz to 12 Hz. Due to the flat terrain in the study area, the induced effect of landslide possibility is considered to be remote. However, the induced effect of liquefaction hazard has been estimated and mapped. Finally, by integrating the above hazard parameters, two hazard index maps have been developed using the Analytic Hierarchy Process (AHP) on a GIS platform: one map is based on deterministic hazard analysis and the other on probabilistic hazard analysis. A general guideline is then proposed by bringing out the advantages and disadvantages of the different approaches.
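As a side note on the hazard-index step, the AHP weights used to combine hazard parameters are conventionally obtained as the normalized principal eigenvector of a pairwise-comparison matrix. A minimal sketch with a hypothetical three-factor comparison (the study's actual factors and judgments are not reproduced here):

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP priority weights: normalized principal eigenvector of a
    pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, float))
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# Hypothetical comparison of three hazard factors on Saaty's 1-9 scale
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
w = ahp_weights(A)
print(np.round(w, 3))   # weights sum to 1, ordered by importance
```

In a GIS workflow, each hazard layer would then be multiplied by its weight and the layers summed to yield the hazard index map.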
Abstract:
Growing concern over the status of global and regional bioenergy resources has necessitated the analysis and monitoring of land cover and land use parameters on spatial and temporal scales. The knowledge of land cover and land use is very important in understanding natural resources utilization, conversion and management. Land cover, land use intensity and land use diversity are land quality indicators for sustainable land management. Optimal management of resources aids in maintaining the ecosystem balance and thereby ensures the sustainable development of a region. Thus sustainable development of a region requires a synoptic ecosystem approach in the management of natural resources that relates to the dynamics of natural variability and the effects of human intervention on key indicators of biodiversity and productivity. Spatial and temporal tools such as remote sensing (RS), geographic information systems (GIS) and the global positioning system (GPS) provide spatial and attribute data at regular intervals, with decision-support functionalities such as visualisation, querying and analysis, which aid in the sustainable management of natural resources. Remote sensing data and GIS technologies play an important role in spatially evaluating bioresource availability and demand. This paper explores various land cover and land use techniques that could be used for bioresource monitoring, considering the spatial data of Kolar district, Karnataka state, India. Slope and distance based vegetation indices are computed for qualitative and quantitative assessment of land cover using remote spectral measurements. Different-scale mapping of the land use pattern in Kolar district is done using supervised classification approaches. Slope based vegetation indices show the area under vegetation ranging from 47.65% to 49.05%, while distance based vegetation indices show a range from 40.40% to 47.41%.
Land use analyses using the maximum likelihood classifier indicate that 46.69% is agricultural land, 42.33% is wasteland (barren land), 4.62% is built up, 3.07% is plantation, 2.77% is natural forest and 0.53% is water bodies. A comparative analysis of various classifiers indicates that the Gaussian maximum likelihood classifier has the least error. The computation of taluk-wise bioresource status shows that Chikballapur Taluk has better availability of resources compared to other taluks in the district.
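For illustration, slope-based vegetation indices of the kind used above are simple band ratios; NDVI is the canonical example. The sketch below uses hypothetical red and near-infrared reflectances, not the Kolar imagery:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).
    Values near 1 indicate dense vegetation; near 0, barren land."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red)

# Hypothetical reflectances: a vegetated pixel and a barren pixel
v = ndvi([0.50, 0.30], [0.10, 0.25])
print(np.round(v, 3))
```

Thresholding such an index over a whole image gives the "area under vegetation" percentages quoted in the abstract.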
Abstract:
The Indian Ocean earthquake of 26 December 2004 led to significant ground deformation in the Andaman and Nicobar region, accounting for ~800 km of the rupture. Part of this article deals with coseismic changes along these islands, observable from coastal morphology, biological indicators, and Global Positioning System (GPS) data. Our studies indicate that the islands south of 10° N latitude coseismically subsided by 1–1.5 m, both on their eastern and western margins, whereas those to the north showed a mixed response. The western margin of the Middle Andaman emerged by >1 m, and the eastern margin submerged by the same amount. In the North Andaman, both western and eastern margins emerged by >1 m. We also assess the pattern of long-term deformation (uplift/subsidence) and attempt to reconstruct the earthquake/tsunami history with the available data. Geological evidence for past submergence includes dead mangrove vegetation dating to 740 ± 100 yr B.P. near Port Blair, and peat layers at 2–4 m and 10–15 m depths observed in core samples from nearby locations. Preliminary paleoseismological/tsunami evidence from the Andaman and Nicobar region and from the east coast of India suggests at least one predecessor for the 2004 earthquake, 900–1000 years ago. The history of earthquakes, although incomplete at this stage, seems to imply that 2004-type earthquakes are infrequent and follow variable intervals.
Abstract:
This paper deals with reducing the waiting times of vehicles at traffic junctions by synchronizing the traffic signals. Strategies are suggested for betterment of the situation at different time intervals of the day, thus ensuring smooth flow of traffic. The concept of one-way systems is also analyzed. The situation is simulated in the Witness 2003 simulation package using various conventions. The average waiting times are reduced by providing an optimal combination for the traffic signal timer. Different signal times are provided for different times of the day, thereby further reducing the average waiting times at specific junctions/roads according to the experienced demands.
Abstract:
We address the problem of estimating the instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators at different signal-to-noise ratios (SNRs).
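The core zero-crossing relation can be sketched for a constant-frequency tone: successive zero crossings are half a period apart, so the mean crossing interval yields the frequency. This is only the constant-IF special case; the paper's estimator fits a local polynomial IF over an adaptively chosen window:

```python
import numpy as np

def zc_frequency(x, fs):
    """Frequency of a real tone from its zero crossings: successive
    crossings are half a period apart, so f ~= fs / (2 * mean interval)."""
    s = np.sign(x)
    idx = np.flatnonzero(np.diff(s) != 0)   # samples where the sign flips
    return fs / (2.0 * np.diff(idx).mean())

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t + 0.3)        # 50 Hz tone, arbitrary phase
print(zc_frequency(x, fs))
```

For a time-varying IF, the same interval statistics would be computed over a short sliding window, whose length the ICI rule selects.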
Abstract:
Thermodynamic properties of Mn3O4, Mn2O3 and MnO2 are reassessed based on new measurements and selected data from the literature. Data for these oxides are available in most thermodynamic compilations, based on older calorimetric measurements of heat capacity and enthalpy of formation, and high-temperature decomposition studies. The older heat capacity measurements did not extend below 50 K; recent measurements have extended the low temperature limit to 5 K. A reassessment of thermodynamic data was therefore undertaken, supplemented by new measurements of the high temperature heat capacity of Mn3O4 and of the oxygen chemical potential for the oxidation of MnO1-x, Mn3O4, and Mn2O3 to their respective higher oxides, using an advanced version of a solid-state electrochemical cell incorporating a buffer electrode. Because of the high accuracy now achievable with solid-state electrochemical cells, phase-equilibrium calorimetry involving the "third-law" analysis has emerged as a competing tool to solution and combustion calorimetry for determining the standard enthalpy of formation at 298.15 K. The refined thermodynamic data for the oxides are presented in tabular form at regular intervals of temperature.
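For context on the electrochemical step, the oxygen chemical potential at the working electrode follows from the measured cell EMF via Delta_mu(O2) = -4*F*E, four electrons being transferred per O2 molecule through an oxygen-conducting electrolyte. A minimal sketch with a hypothetical EMF value:

```python
# Faraday constant in C/mol
F = 96485.332

def oxygen_chemical_potential(emf_volts):
    """Oxygen chemical potential (J per mol O2) relative to the reference
    electrode, from the measured cell EMF: Delta_mu(O2) = -4*F*E."""
    return -4.0 * F * emf_volts

print(oxygen_chemical_potential(0.5))   # hypothetical 0.5 V EMF
```

Repeating this at a series of temperatures gives the Gibbs energy of the oxidation reaction as a function of T, the raw input for a third-law analysis.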
Abstract:
We present numerical studies of a model for CO oxidation on the surface of Pt(110) proposed in Ref. 1. The model shows several interesting regimes, some of which exhibit spatiotemporal chaos. The time series of the CO concentration at a given point consists of a sequence of pulses. We concentrate on interpulse intervals theta and show that their distribution P(theta) approaches a delta function continuously as the system goes from a state M, with meandering spirals, to a state S, with spatially frozen spiral cores. This should be verifiable experimentally.
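The quantity studied, the interpulse interval theta, is simply the difference between successive pulse times; its empirical spread narrows toward a delta function as the system approaches the frozen-core state S. A sketch with hypothetical, nearly periodic pulse times:

```python
import numpy as np

def interpulse_intervals(pulse_times):
    """Interpulse intervals theta between successive pulse times."""
    return np.diff(np.sort(np.asarray(pulse_times, float)))

# Hypothetical, nearly periodic pulse times (frozen-spiral-like regime)
times = [0.0, 1.01, 2.0, 3.02, 3.99, 5.0]
theta = interpulse_intervals(times)
print(theta.mean(), theta.std())   # narrow spread around the mean interval
```

A histogram of `theta` over a long run would approximate P(theta); the meandering state M would show a visibly broader distribution.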
Abstract:
Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals [a(1), b(1)] × [a(2), b(2)] × ... × [a(k), b(k)]. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes; i.e., each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let P = (S, P) be a poset, where S is the ground set and P is a reflexive, antisymmetric and transitive binary relation on S. The dimension of P, dim(P), is the minimum integer t such that P can be expressed as the intersection of t total orders. Let G(P) be the underlying comparability graph of P; i.e., S is the vertex set and two vertices are adjacent if and only if they are comparable in P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G(P))/(chi(G(P)) - 1) <= dim(P) <= 2box(G(P)), where chi(G(P)) is the chromatic number of G(P) and chi(G(P)) ≠ 1. It immediately follows that if P is a height-2 poset, then box(G(P)) <= dim(P) <= 2box(G(P)), since the underlying comparability graph of a height-2 poset is a bipartite graph. The second result of the paper relates the boxicity of a graph G with a natural partial order associated with the extended double cover of G, denoted as G(c): note that G(c) is a bipartite graph with partite sets A and B which are copies of V(G) such that, corresponding to every u ∈ V(G), there are two vertices u(A) ∈ A and u(B) ∈ B, and {u(A), v(B)} is an edge in G(c) if and only if either u = v or u is adjacent to v in G.
Let P(c) be the natural height-2 poset associated with G(c) by making A the set of minimal elements and B the set of maximal elements. We show that box(G)/2 <= dim(P(c)) <= 2box(G) + 4. These results have some immediate and significant consequences. The upper bound dim(P) <= 2box(G(P)) allows us to derive hitherto unknown upper bounds for poset dimension such as dim(P) <= 2 treewidth(G(P)) + 4, since the boxicity of any graph is known to be at most its treewidth + 2. In the other direction, using the already known bounds for partial order dimension we get the following: (1) The boxicity of any graph with maximum degree Delta is O(Delta log^2 Delta), which is an improvement over the best-known upper bound of Delta^2 + 2. (2) There exist graphs with boxicity Omega(Delta log Delta). This disproves a conjecture that the boxicity of a graph is O(Delta). (3) There exists no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of O(n^(0.5 - epsilon)) for any epsilon > 0, unless NP = ZPP.
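The box-intersection definition underlying boxicity is concrete enough to sketch: axis-parallel boxes intersect iff their closed intervals overlap in every dimension. Below, an illustrative 2-dimensional box representation of the path a-b-c (a path, being an interval graph, in fact has boxicity 1):

```python
def boxes_intersect(b1, b2):
    """Axis-parallel boxes, each a list of closed intervals (lo, hi),
    intersect iff the intervals overlap in every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def intersection_graph(boxes):
    """Edges of the intersection graph of a list of boxes."""
    n = len(boxes)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if boxes_intersect(boxes[i], boxes[j])}

boxes = [[(0.0, 2.0), (0.0, 2.0)],      # vertex a
         [(1.0, 3.0), (1.0, 3.0)],      # vertex b: overlaps a
         [(2.5, 4.0), (2.5, 4.0)]]      # vertex c: overlaps b only
print(sorted(intersection_graph(boxes)))   # → [(0, 1), (1, 2)]
```

box(G) is the minimum number of dimensions for which such a representation of G exists; computing it is hard, which is what consequence (3) above makes precise.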
Abstract:
This article aims to obtain damage-tolerant designs with minimum weight for a laminated composite structure using a genetic algorithm. Damage tolerance due to impacts in a laminated composite structure is enhanced by dispersing the plies such that too many adjacent plies do not have the same angle. The weight of the structure is minimized and the Tsai-Wu failure criterion is considered for the safe design. The design variables considered are the number of plies and the ply orientation. The influence of dispersed ply angles on the weight of the structure for given loading conditions is studied by varying the angles in the ranges of 0 degrees-45 degrees, 0 degrees-60 degrees and 0 degrees-90 degrees at intervals of 5 degrees, and by using specific ply angles tailored to the loading conditions. A comparison study is carried out between the conventional stacking sequence and the stacking sequence with dispersed ply angles for damage-tolerant weight minimization, and some useful designs are obtained. That the unconventional stacking sequence is more damage tolerant than the conventional stacking sequence is demonstrated by performing a finite element analysis under both tensile and compressive loading conditions. Moreover, a new mathematical function called the dispersion function is proposed to measure the dispersion of ply angles in a laminate. The approach of dispersing ply angles to achieve damage tolerance is especially suited for a composite material design space which has multiple local minima.
Abstract:
The flow over a truncated cone is a classical and fundamental problem for aerodynamic research due to its three-dimensional and complicated characteristics. The flow is made more complex when examining high angles of incidence. Recently these types of flows have drawn more attention for the purposes of drag reduction in supersonic/hypersonic flows. In the present study the flow over a truncated cone at various incidences was experimentally investigated in a Mach 5 flow with a unit Reynolds number of 13.5 × 10^6 m^-1. The cone semi-apex angle is 15° and the truncation ratio (truncated length/cone length) is 0.5. The incidence of the model varied from -12° to 12° at 3° intervals relative to the freestream direction. The external flow around the truncated cone was visualised by colour Schlieren photography, while the surface flow pattern was revealed using the oil flow method. The surface pressure distribution was measured using the anodized aluminium pressure-sensitive paint (AA-PSP) technique. Both top and side views of the pressure distribution on the model surface were acquired at various incidences. AA-PSP showed high pressure sensitivity and captured the complicated flow structures, which correlated well with the colour Schlieren and oil flow visualisation results. © 2012 Elsevier Inc.
Abstract:
We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employed uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank for separating the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction. (c) 2012 Optical Society of America
Abstract:
In this paper, we investigate the achievable rate region of Gaussian multiple access channels (MAC) with finite input alphabet and quantized output. With a finite input alphabet and an unquantized receiver, the two-user Gaussian MAC rate region has been studied previously. In most high throughput communication systems based on digital signal processing, the analog received signal is quantized using a low precision quantizer. In this paper, we first derive expressions for the achievable rate region of a two-user Gaussian MAC with finite input alphabet and quantized output. We show that, with a finite input alphabet, the achievable rate region with the commonly used uniform receiver quantizer incurs a significant loss compared to that with an unquantized receiver. It is observed that this degradation is due to the fact that the received analog signal is densely distributed around the origin, and is therefore not efficiently quantized with a uniform quantizer, which has equally spaced quantization intervals. It is also observed that the density of the received analog signal around the origin increases with increasing number of users. Hence, the loss in the achievable rate region due to uniform receiver quantization is expected to increase with increasing number of users. We, therefore, propose a novel non-uniform quantizer with finely spaced quantization intervals near the origin. For a two-user Gaussian MAC with a given finite input alphabet and low precision receiver quantization, we show that the proposed non-uniform quantizer achieves a significantly larger rate region than the uniform quantizer.
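A standard way to realize finely spaced quantization intervals near the origin is companding: compress the signal with a mu-law map, quantize uniformly in the compressed domain, then expand back. This is an illustrative scheme, not the specific non-uniform quantizer proposed in the paper:

```python
import numpy as np

def mu_law_quantize(x, n_levels=8, mu=255.0, x_max=1.0):
    """Non-uniform quantizer via mu-law companding: cells are finest
    near the origin, where the received signal is densely distributed."""
    x = np.clip(np.asarray(x, float) / x_max, -1.0, 1.0)
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)  # compress
    step = 2.0 / n_levels
    yq = (np.floor(y / step) + 0.5) * step                    # uniform in y
    return x_max * np.sign(yq) * np.expm1(np.abs(yq) * np.log1p(mu)) / mu

q = mu_law_quantize(np.array([0.01, 0.05, 0.5]))
print(q)   # small inputs land in distinct cells near the origin
```

With a uniform 8-level quantizer over [-1, 1] (step 0.25), the inputs 0.01 and 0.05 would fall in the same cell; the companded quantizer keeps them apart, which is the intuition behind the rate-region gain.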
Abstract:
The last decade has witnessed two unusually large tsunamigenic earthquakes. The devastation from the 2004 Sumatra-Andaman and the 2011 Tohoku-Oki earthquakes (both of moment magnitude >= 9.0) and their ensuing tsunamis comes as a harsh reminder of the need to assess and mitigate coastal hazards due to earthquakes and tsunamis worldwide. Along any given subduction zone, megathrust tsunamigenic earthquakes occur over intervals considerably longer than their documented histories and thus, 2004-type events may appear totally 'out of the blue'. In order to understand and assess the risk from tsunamis, we need to know their long-term frequency and magnitude, going beyond documented history to recent geological records. The ability to do this depends on our knowledge of the processes that govern subduction zones, their responses to interseismic and coseismic deformation, and on our expertise in identifying and relating tsunami deposits to earthquake sources. In this article, we review the current state of understanding on the recurrence of great thrust earthquakes along global subduction zones.