956 results for Low Frequency Piezofilm Hydrophones


Relevance:

100.00%

Publisher:

Abstract:

This paper studies the fracturing process in low-porosity rocks during uniaxial compressive tests, considering both the original defects and the new mechanical cracks in the material. For this purpose, five different kinds of rock with carbonate mineralogy and low porosity (below 2%) were chosen. The characterization of the fracture damage is carried out using three different techniques: ultrasound, mercury porosimetry and X-ray computed tomography. The proposed methodology makes it possible to quantify the evolution of the porous system and to locate new cracks in the rock samples. Intercrystalline porosity (the smallest pores, with pore radius < 1 μm) shows limited development during loading, disappearing rapidly from the porosimetry curves, and is directly related to the initial plastic behaviour in the stress–strain patterns. The biggest pores (corresponding to the cracks), however, undergo continuous enlargement until the unstable propagation of fractures. The measured crack initiation stress varies between 0.25 σp and 0.50 σp for marbles and between 0.50 σp and 0.85 σp for micrite limestones. The unstable propagation of cracks is assumed to occur very close to the peak strength. Crack propagation through the sample is completely independent of pre-existing defects (porous bands, stylolites, fractures and veins). The ultrasonic response in the time domain is less sensitive to fracture damage than that in the frequency domain. P-wave velocity increases during the loading test until the beginning of unstable crack propagation. This increase is larger for marbles (between 15% and 30% of the initial vp values) and smaller for micrite limestones (between 5% and 10%). When the mechanical cracks propagate unstably, the velocity stops increasing, and it only decreases once rock damage is very high. Frequency analysis of the ultrasonic signals shows clear changes during the loading process.
The spectrum of the treated waveforms shows two main frequency peaks, centred at low (~ 20 kHz) and high (~ 35 kHz) values. When new fractures appear and grow, the amplitude of the high-frequency peak decreases while that of the low-frequency peak increases. In addition, a slight frequency shift towards higher frequencies is observed.

Relevance:

100.00%

Publisher:

Abstract:

Senior thesis written for Oceanography 445

Relevance:

100.00%

Publisher:

Abstract:

The choice of operational frequency is one of the most critical parts of any radar design process. The parameters of radars for buried object detection (BOD) are very sensitive to both the carrier frequency and the ranging-signal bandwidth. Such radars operate in a specific propagation environment with strong frequency-dependent attenuation and, as a result, a short operational range. This fact dictates some features of the radar's parameters: a wideband signal to provide high range resolution (fractions of a meter), and a low carrier frequency (tens or hundreds of megahertz) for deeper penetration. The requirements for a wideband ranging signal and a low carrier frequency are partly contradictory. As a result, low-frequency (LF) ultrawide-band (UWB) signals are used. The major goal of this paper is to examine the influence of the frequency-band choice on radar performance and to develop relevant methodologies for BOD radar design and optimization. In this article, highly efficient continuous-wave (CW) signals with advanced stepped-frequency (SF) modulation are considered; however, the main conclusions can be applied to any kind of ranging signal.
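The bandwidth/resolution trade-off discussed above can be sketched with the standard stepped-frequency relations. The parameter values and the constant soil permittivity below are illustrative assumptions, not the paper's design figures.

```python
# Minimal sketch of stepped-frequency (SF) waveform parameters in a lossy
# medium. Assumes a uniform soil of given relative permittivity; the paper's
# attenuation modelling is more detailed.
C = 3.0e8  # speed of light in vacuum, m/s

def sf_radar_parameters(f_step_hz, n_steps, rel_permittivity=9.0):
    """Range resolution and unambiguous range of an SF waveform in a medium."""
    bandwidth = f_step_hz * (n_steps - 1)
    v = C / rel_permittivity ** 0.5           # propagation velocity in the soil
    resolution = v / (2 * bandwidth)          # finer with wider bandwidth
    unambiguous_range = v / (2 * f_step_hz)   # coarser steps alias sooner
    return resolution, unambiguous_range

# Example: 2 MHz steps, 101 steps -> 200 MHz total bandwidth in wet soil.
res, r_max = sf_radar_parameters(f_step_hz=2e6, n_steps=101)
```

The example shows the contradiction noted in the abstract: widening the total bandwidth sharpens the resolution, but the frequency step alone fixes the unambiguous range.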

Relevance:

100.00%

Publisher:

Abstract:

In this paper, a new method for characterizing newborn heart rate variability (HRV) is proposed. Central to the method is a newly proposed technique for instantaneous frequency (IF) estimation, specifically designed for nonstationary multicomponent signals such as the HRV. The new method characterizes the newborn HRV using features extracted from the time–frequency (TF) domain of the signal. These features comprise the IF, the instantaneous bandwidth (IB) and the instantaneous energy (IE) of the different TF components of the HRV. Applied to the HRV of both normal newborns and newborns suffering from seizures, the method clearly reveals the locations of the spectral peaks and their time-varying nature. The total energy of the HRV components, ET, and the ratio of the energy concentrated in the low-frequency (LF) components to that in the high-frequency (HF) components have been shown to be significant features for identifying the HRV of newborns with seizures.
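For a single component, a common baseline for IF estimation is the phase derivative of the analytic signal. The paper's estimator is designed for multicomponent signals and is not reproduced here; this is only a hedged single-tone sketch with assumed sampling parameters.

```python
import numpy as np

# Hedged sketch: IF of a single tone via the analytic-signal phase.
# This is a textbook baseline, NOT the paper's multicomponent estimator.
def analytic_signal(x):
    """Analytic signal via the FFT (equivalent to a Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1                       # keep the Nyquist bin as-is
    return np.fft.ifft(X * h)

fs = 100.0                                  # Hz, toy sampling rate (assumed)
t = np.arange(0, 10, 1 / fs)
x = np.cos(2 * np.pi * 0.3 * t)             # a 0.3 Hz tone, in the HRV range

phase = np.unwrap(np.angle(analytic_signal(x)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz
mean_if = float(np.mean(inst_freq))
```

For real HRV, the components would first have to be separated in the TF plane before a per-component IF like this can be read off.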

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose features extracted from the heart rate variability (HRV), based on the first and second conditional moments of a time-frequency distribution (TFD), as an additional guide for seizure detection in newborns. The features of the HRV in the low-frequency band (LF: 0–0.07 Hz), mid-frequency band (MF: 0.07–0.15 Hz) and high-frequency band (HF: 0.15–0.6 Hz) have been obtained by means of time-frequency analysis using the modified-B distribution (MBD). Results of ongoing time-frequency research are presented. Based on our preliminary results, the first conditional moment of the HRV (also known as the mean/central frequency) in the LF band and the second conditional moment of the HRV (also known as the variance/instantaneous bandwidth, IB) in the HF band can be used as good features to discriminate newborn seizure from non-seizure states.
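The conditional moments can be sketched generically as follows. Note the simplification: a plain spectrogram stands in for the modified-B distribution used in the paper, and the test signal and sampling rate are assumed.

```python
import numpy as np

# Hedged sketch: per-time first and second conditional moments of a TFD.
# A plain spectrogram (|STFT|^2) replaces the paper's modified-B distribution.
def conditional_moments(tfd, freqs):
    """Per-time mean frequency (1st moment) and bandwidth (sqrt of 2nd)."""
    p = tfd / tfd.sum(axis=0, keepdims=True)        # normalise each time slice
    mean_f = (freqs[:, None] * p).sum(axis=0)
    var_f = (((freqs[:, None] - mean_f) ** 2) * p).sum(axis=0)
    return mean_f, np.sqrt(var_f)

fs = 4.0                                            # Hz, toy HRV sampling rate
t = np.arange(0, 64, 1 / fs)
x = np.cos(2 * np.pi * 0.1 * t)                     # 0.1 Hz tone (MF band)

# Minimal spectrogram: Hann-windowed FFT frames with 50% overlap.
win, hop = 64, 32
frames = [x[i:i + win] * np.hanning(win) for i in range(0, len(x) - win, hop)]
tfd = np.abs(np.fft.rfft(np.array(frames), axis=1)).T ** 2
freqs = np.fft.rfftfreq(win, 1 / fs)

mean_f, inst_bw = conditional_moments(tfd, freqs)
```

Restricting `freqs` and the TFD rows to one of the LF/MF/HF bands before taking the moments gives the band-limited features described above.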

Relevance:

100.00%

Publisher:

Abstract:

In Parkinson's disease, subthalamic nucleus (STN) neurons burst fire with increased periodicity and synchrony. This may entail abnormal release of glutamate, the major source of which in the STN is cortical afferents. Indeed, the cortico-subthalamic pathway is implicated in the emergence of excessive oscillations, which are reduced, as are symptoms, by dopamine-replacement therapy or deep brain stimulation (DBS) targeted to the STN. Here we hypothesize that glutamatergic synapses in the STN may be differentially modulated by low-frequency stimulation (LFS) and high-frequency stimulation (HFS), the latter mimicking deep brain stimulation. Recordings of evoked and spontaneous excitatory postsynaptic currents (EPSCs) were made from STN neurons in brain slices obtained from dopamine-intact and chronically dopamine-depleted adult rats. HFS had no significant effect on evoked (e) EPSC amplitude in dopamine-intact slices (104.4±8.0%) but depressed eEPSCs in dopamine-depleted slices (67.8±6.2%). Conversely, LFS potentiated eEPSCs in dopamine-intact slices (126.4±8.1%) but not in dopamine-depleted slices (106.7±10.0%). Analyses of paired-pulse ratio, coefficient of variation, and spontaneous EPSCs suggest that the depression and potentiation have a presynaptic locus of expression. These results indicate that synaptic efficacy in dopamine-intact tissue is enhanced by LFS, whereas synaptic efficacy in dopamine-depleted tissue is depressed by HFS. Therefore, the therapeutic effects of DBS in Parkinson's disease appear to be mediated, in part, by glutamatergic cortico-subthalamic synaptic depression, and implicate dopamine-dependent increases in the weight of glutamate synapses, which would facilitate the transfer of pathological oscillations from the cortex.

Relevance:

100.00%

Publisher:

Abstract:

Objective: To investigate the dynamics of communication within the primary somatosensory neuronal network. Methods: Multichannel EEG responses evoked by median nerve stimulation were recorded from six healthy participants. We investigated the directional connectivity of the evoked responses by assessing the Partial Directed Coherence (PDC) among five neuronal nodes (brainstem, thalamus and three in the primary sensorimotor cortex), which had been identified by using the Functional Source Separation (FSS) algorithm. We analyzed directional connectivity separately in the low (1–200 Hz, LF) and high (450–750 Hz, HF) frequency ranges. Results: LF forward connectivity showed peaks at 16, 20, 30 and 50 ms post-stimulus. The estimated strength of connectivity was modulated by feedback involving cortical and subcortical nodes. In HF, forward connectivity showed peaks at 20, 30 and 50 ms, with no apparent feedback-related strength changes. Conclusions: In this first non-invasive study in humans, we documented directional connectivity across the subcortical and cortical somatosensory pathway, discriminating transmission properties within the LF and HF ranges. Significance: The combined use of FSS and PDC in a simple protocol such as median nerve stimulation sheds light on how high- and low-frequency components of the somatosensory evoked response are functionally interrelated in sustaining somatosensory perception in healthy individuals. Thus, these components may potentially be explored as biomarkers of pathological conditions. © 2012 International Federation of Clinical Neurophysiology.
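PDC is computed from the coefficients of a fitted vector-autoregressive (VAR) model. The following is a hedged toy sketch with two nodes and a hand-written VAR(1), not the five-node EEG model of the study; it only illustrates that PDC is directional.

```python
import numpy as np

# Hedged sketch of Partial Directed Coherence from VAR coefficients.
# Toy VAR(1) with two nodes, where node 0 drives node 1 and not vice versa.
def pdc(coeffs, freqs):
    """PDC matrices at each normalised frequency; coeffs has shape (p, n, n)."""
    p, n, _ = coeffs.shape
    out = np.empty((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        A = np.eye(n, dtype=complex)
        for r in range(p):
            A -= coeffs[r] * np.exp(-2j * np.pi * f * (r + 1))
        mag = np.abs(A)
        # Column-wise normalisation (Baccala-Sameshima convention).
        out[fi] = mag / np.sqrt((mag ** 2).sum(axis=0, keepdims=True))
    return out

# x1[t] depends on x0[t-1] (coupling 0 -> 1); no reverse coupling.
A1 = np.array([[[0.5, 0.0],
                [0.4, 0.5]]])
P = pdc(A1, freqs=np.linspace(0, 0.5, 8))
drive = P[:, 1, 0]    # influence of node 0 on node 1: nonzero
reverse = P[:, 0, 1]  # influence of node 1 on node 0: zero
```

In practice the VAR coefficients would be estimated from the FSS node time courses rather than written by hand.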

Relevance:

100.00%

Publisher:

Abstract:

A sequence of constant-frequency tones can promote streaming in a subsequent sequence of alternating-frequency tones, but why this effect occurs is not fully understood and its time course has not been investigated. Experiment 1 used a 2.0-s-long constant-frequency inducer (10 repetitions of a low-frequency pure tone) to promote segregation in a subsequent, 1.2-s test sequence of alternating low- and high-frequency tones. Replacing the final inducer tone with silence substantially reduced reported test-sequence segregation. This reduction did not occur when either the 4th or 7th inducer was replaced with silence. This suggests that a change at the induction/test-sequence boundary actively resets build-up, rather than less segregation occurring simply because fewer inducer tones were presented. Furthermore, Experiment 2 found that a constant-frequency inducer produced its maximum segregation-promoting effect after only three tones—this contrasts with the more gradual build-up typically observed for alternating-frequency sequences. Experiment 3 required listeners to judge continuously the grouping of 20-s test sequences. Constant-frequency inducers were considerably more effective at promoting segregation than alternating ones; this difference persisted for ~10 s. In addition, resetting arising from a single deviant (longer tone) was associated only with constant-frequency inducers. Overall, the results suggest that constant-frequency inducers promote segregation by capturing one subset of test-sequence tones into an ongoing, preestablished stream, and that a deviant tone may reduce segregation by disrupting this capture. These findings offer new insight into the dynamics of stream segregation, and have implications for the neural basis of streaming and the role of attention in stream formation. (PsycINFO Database Record (c) 2013 APA, all rights reserved)
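The stimulus structure described in Experiment 1 (a constant-frequency induction sequence followed by an alternating low-high test sequence) can be sketched as below. The tone frequencies, tone durations and ramp length are illustrative assumptions, not the study's values.

```python
import numpy as np

# Hedged sketch of the induction/test stimulus layout. All parameter values
# (frequencies, durations, ramps) are assumptions for illustration only.
fs = 44_100

def tone(freq_hz, dur_s, ramp_s=0.01):
    """Pure tone with raised-cosine on/off ramps to avoid clicks."""
    t = np.arange(int(dur_s * fs)) / fs
    y = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(ramp_s * fs)
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return y * env

low, high = 500.0, 800.0                # Hz, assumed tone frequencies
inducer = np.concatenate([tone(low, 0.2) for _ in range(10)])      # 2.0 s
test_seq = np.concatenate([tone(low if i % 2 == 0 else high, 0.1)
                           for i in range(12)])                     # 1.2 s
stimulus = np.concatenate([inducer, test_seq])
```

Replacing one inducer tone with `np.zeros(int(0.2 * fs))` reproduces the silence manipulation used to probe resetting at the induction/test-sequence boundary.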

Relevance:

100.00%

Publisher:

Abstract:

Three experiments investigated the dynamics of auditory stream segregation. Experiment 1 used a 2.0-s constant-frequency inducer (10 repetitions of a low-frequency pure tone) to promote segregation in a subsequent, 1.2-s test sequence of alternating low- and high-frequency tones. Replacing the final inducer tone with silence reduced reported test-sequence segregation substantially. This reduction did not occur when either the 4th or 7th inducer was replaced with silence. This suggests that a change at the induction/test-sequence boundary actively resets buildup, rather than less segregation occurring simply because fewer inducer tones were presented. Furthermore, Experiment 2 found that a constant-frequency inducer produced its maximum segregation-promoting effect after only 3 tone cycles - this contrasts with the more gradual build-up typically observed for alternating sequences. Experiment 3 required listeners to judge continuously the grouping of 20-s test sequences. Constant-frequency inducers were considerably more effective at promoting segregation than alternating ones; this difference persisted for ∼10 s. In addition, resetting arising from a single deviant (longer tone) was associated only with constant-frequency inducers. Overall, the results suggest that constant-frequency inducers promote segregation by capturing one subset of test-sequence tones into an on-going, pre-established stream and that a deviant tone may reduce segregation by disrupting this capture. © 2013 Acoustical Society of America.

Relevance:

100.00%

Publisher:

Abstract:

Low-rise buildings are often subjected to high wind loads during hurricanes that lead to severe damage and cause water intrusion. It is therefore important to estimate accurate wind pressures for design purposes to reduce losses. Wind loads on low-rise buildings can differ significantly depending upon the laboratory in which they were measured. The differences are due in large part to inadequate simulations of the low-frequency content of atmospheric velocity fluctuations in the laboratory and to the small scale of the models used for the measurements. A new partial turbulence simulation methodology was developed for simulating the effect of low-frequency flow fluctuations on low-rise buildings more effectively, in terms of testing accuracy and repeatability, than is currently the case. The methodology was validated by comparing aerodynamic pressure data for building models obtained in the open-jet 12-Fan Wall of Wind (WOW) facility against their counterparts in a boundary-layer wind tunnel. Field measurements of pressures on the Texas Tech University building and the Silsoe building were also used for validation purposes. Tests in partial simulation are freed of integral-length-scale constraints, meaning that model length scales in such testing are limited only by blockage considerations. Thus the partial simulation methodology can be used to produce aerodynamic data for low-rise buildings by using large-scale models in wind tunnels and WOW-like facilities. This is a major advantage, because large-scale models allow for accurate modeling of architectural details, testing at higher Reynolds number, using greater spatial resolution of the pressure taps in high-pressure zones, and assessing the performance of aerodynamic devices to reduce wind effects.
The technique eliminates a major cause of discrepancies among measurements conducted in different laboratories and can help to standardize flow simulations for testing residential homes, as well as to significantly improve testing accuracy and repeatability. Partial turbulence simulation was used in the WOW to determine the performance of discontinuous perforated parapets in mitigating roof pressures. The comparisons of pressures with and without parapets showed significant reductions in pressure coefficients in the zones with high suctions. This demonstrated the potential of such aerodynamic add-on devices to reduce uplift forces.
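The basic quantity compared across the WOW, wind-tunnel and field datasets is the pressure coefficient. A minimal sketch of the conversion, with illustrative numbers that are not from the study:

```python
# Hedged sketch: converting a measured tap pressure to a pressure coefficient
# referenced to the mean dynamic pressure at roof height. Values are
# illustrative only.
RHO_AIR = 1.225  # kg/m^3, standard sea-level air density

def pressure_coefficient(p_pa, p_static_pa, u_ref_ms):
    """Cp = (p - p_static) / (0.5 * rho * U_ref^2)."""
    q = 0.5 * RHO_AIR * u_ref_ms ** 2
    return (p_pa - p_static_pa) / q

# A roof-corner tap under strong suction at a 30 m/s reference wind speed:
cp = pressure_coefficient(p_pa=-1378.0, p_static_pa=0.0, u_ref_ms=30.0)
```

Comparing such coefficients with and without a parapet, zone by zone, is how the suction reductions reported above would be quantified.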

Relevance:

100.00%

Publisher:

Abstract:

This dissertation contains four essays that all share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications, while the last two chapters develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is a joint work with David Veredas.
We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, in the presence or absence of microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum-variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
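The realized measure at the heart of these models is the realized variance, the sum of squared intraday returns. A hedged sketch on simulated returns, with a simplified GARCH(1,1)-style update standing in for the full Realized GARCH recursion:

```python
import numpy as np

# Hedged sketch: realized variance from simulated one-minute returns, plus a
# simplified one-step variance update. Parameter values are illustrative.
rng = np.random.default_rng(7)
n_intraday = 390                            # one-minute bars in a trading day
true_daily_vol = 0.01                       # 1% daily volatility (assumed)
r = rng.normal(0.0, true_daily_vol / np.sqrt(n_intraday), size=n_intraday)

realized_variance = float(np.sum(r ** 2))   # converges to integrated variance

# GARCH(1,1)-style update with the realized measure in place of the squared
# daily return -- a simplification of the Realized GARCH recursion.
omega, alpha, beta = 1e-6, 0.3, 0.65
h_today = 1e-4
h_tomorrow = omega + alpha * realized_variance + beta * h_today
```

The realized measure is far less noisy than a single squared daily return, which is the source of the forecasting gains these chapters document.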

Relevance:

100.00%

Publisher:

Abstract:

The service of a critical infrastructure, such as a municipal wastewater treatment plant (MWWTP), is taken for granted until a flood or another low-frequency, high-consequence crisis brings its fragility to attention. The unique aspects of the MWWTP call for a method to quantify the flood stage-duration-frequency relationship. By developing a bivariate joint distribution model of flood stage and duration, this study adds a second dimension, time, into flood risk studies. A new parameter, inter-event time, is developed to further illustrate the effect of event separation on the frequency assessment. The method is tested on riverine, estuary and tidal sites in the Mid-Atlantic region. Equipment damage functions are characterized by linear and step damage models. The Expected Annual Damage (EAD) of the underground equipment is further estimated by the parametric joint distribution model, which is a function of both flood stage and duration, demonstrating the application of the bivariate model in risk assessment. Flood likelihood may alter due to climate change. A sensitivity analysis method is developed to assess future flood risk by estimating flood frequency under conditions of higher sea level and stream-flow response to increased precipitation intensity. Scenarios based on steady and unsteady flow analysis are generated for the current climate, future climate within this century, and future climate beyond this century, consistent with the MWWTP planning horizons. The spatial extent of flood risk is visualized by inundation mapping and a GIS-Assisted Risk Register (GARR). This research will help stakeholders of critical infrastructure become aware of the flood risk, the vulnerability, and the inherent uncertainty.
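Expected Annual Damage is conventionally obtained by integrating damage over the annual exceedance-probability curve. A hedged univariate sketch (the study's model is bivariate in stage and duration, which this omits), with illustrative stage-damage numbers:

```python
# Hedged sketch of Expected Annual Damage (EAD): trapezoidal integration of
# damage against annual exceedance probability. All numbers are illustrative.
def expected_annual_damage(exceed_prob, damage):
    """EAD from (exceedance probability, damage) pairs, highest prob first."""
    ead = 0.0
    for i in range(len(exceed_prob) - 1):
        dp = exceed_prob[i] - exceed_prob[i + 1]
        ead += dp * 0.5 * (damage[i] + damage[i + 1])
    return ead

# Probabilities 0.5 .. 0.002 (2-year .. 500-year events), damage in dollars.
probs = [0.5, 0.1, 0.02, 0.01, 0.002]
dmg = [0.0, 50_000.0, 400_000.0, 900_000.0, 2_000_000.0]
ead = expected_annual_damage(probs, dmg)
```

In the bivariate setting, the damage function and the density would depend on both stage and duration, and the single sum would become a double integral over the joint distribution.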

Relevance:

100.00%

Publisher:

Abstract:

Doctorate in Economics

Relevance:

100.00%

Publisher:

Abstract:

An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users' needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient. These approaches also have to deal with low-frequency pattern issues. The measures used by the data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering; they can lead to a mismatch problem. This thesis uses rough set-based reasoning (term-based) and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The proposed system consists of two stages: topic filtering and pattern mining. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of the threshold setting have been developed by using rough set decision theory.
The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy; the most likely relevant documents are assigned higher scores by the ranking function. Because relatively few documents remain after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on the well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely, the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both the term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF models, including the traditional Rocchio IF model, state-of-the-art term-based models such as BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
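The two-stage idea can be sketched as follows. This is a hedged toy: the profile, the threshold rule and the pattern weights are all illustrative stand-ins for the thesis's rough-set threshold model and pattern-taxonomy ranking function.

```python
# Hedged sketch of the two-stage pipeline: stage 1 discards documents whose
# topic score falls below a threshold; stage 2 re-ranks the survivors with a
# pattern-based score. All profiles, patterns and weights are illustrative.
def stage1_filter(docs, profile_terms, threshold):
    """Keep documents whose term-overlap score meets the threshold."""
    kept = []
    for doc in docs:
        words = set(doc.lower().split())
        score = len(words & profile_terms) / max(len(profile_terms), 1)
        if score >= threshold:
            kept.append(doc)
    return kept

def stage2_rank(docs, patterns):
    """Rank survivors by weighted pattern matches (higher = more relevant)."""
    def score(doc):
        text = doc.lower()
        return sum(w for pat, w in patterns.items() if pat in text)
    return sorted(docs, key=score, reverse=True)

docs = ["heart rate variability in newborns",
        "football transfer news",
        "variability of newborn heart rate during seizures"]
profile = {"heart", "rate", "variability", "newborn"}
survivors = stage1_filter(docs, profile, threshold=0.5)
ranked = stage2_rank(survivors, {"seizure": 2.0, "heart rate": 1.0})
```

Because stage 2 only sees the survivors of stage 1, the comparatively expensive pattern matching runs over a much smaller set, which is the efficiency argument made above.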