980 results for Source wavelet estimation
Abstract:
In the deep sea, the Paleocene-Eocene Thermal Maximum (PETM) is often marked by clay-rich condensed intervals caused by dissolution of carbonate sediments, capped by a carbonate-rich interval. Constraining the duration of both the dissolution and subsequent cap-carbonate intervals is essential to computing marine carbon fluxes and thus testing hypotheses for the origin of this event. To this end, we provide new high-resolution helium isotope records spanning the Paleocene-Eocene boundary at ODP Site 1266 in the South Atlantic. The extraterrestrial 3He (3HeET) concentrations replicate trends observed at ODP Site 690 by Farley and Eltgroth (2003, doi:10.1016/S0012-821X(03)00017-7). By assuming a constant flux of 3HeET we constrain relative changes in sediment accumulation rates across the PETM and construct a new age model for the event. In this new chronology the zero-carbonate layer represents 35 kyr, some of which reflects clay produced by dissolution of Paleocene (pre-PETM) sediments. Above this layer, carbonate concentrations increase for ~165 kyr and remain higher than in the latest Paleocene until 234 +48/-34 kyr above the base of the clay. The new chronology indicates that minimum δ13C values persisted for a maximum of 134 +27/-19 kyr and that the inflection point previously chosen to designate the end of the CIE recovery occurs at 217 +44/-31 kyr. This allocation of time differs from that of the cycle-based age model of Röhl et al. (2007, doi:10.1029/2007GC001784) in that it assigns more time to the clay layer, followed by a more gradual recovery of carbonate-rich sedimentation. The new model also suggests a longer sustained δ13C excursion followed by a more rapid recovery to pre-PETM δ13C values. These differences have important implications for constraining the source(s) of carbon and the mechanisms for its subsequent sequestration, favoring models that include a sustained release of carbon.
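As an illustration of the constant-flux approach described above, the sketch below (Python, with entirely assumed flux, density, and concentration values rather than the Site 1266 data) converts 3HeET concentrations into mass accumulation rates and integrates them into relative ages: where the 3HeET concentration is high, accumulation was slow and more time is assigned.

```python
import numpy as np

# Entirely assumed values for illustration (not the Site 1266 data):
F = 1.0e-12    # constant extraterrestrial 3He flux, cc STP / cm^2 / kyr
rho = 1.0      # dry bulk density, g / cm^3

depth = np.array([0.0, 5.0, 10.0, 20.0, 30.0])        # cm above the clay base
conc = np.array([2e-12, 8e-12, 6e-12, 3e-12, 2e-12])  # 3HeET, cc STP / g

# constant flux: mass accumulation rate MAR(z) = F / C(z)
mar = F / conc  # g / cm^2 / kyr

# elapsed time = integral of rho / MAR over depth (trapezoidal rule)
age = np.concatenate(([0.0], np.cumsum(
    0.5 * (rho / mar[:-1] + rho / mar[1:]) * np.diff(depth))))

for z, t in zip(depth, age):
    print(f"{z:5.1f} cm: {t:6.1f} kyr above base")
```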
Abstract:
The study of continuously varying, quantitative traits is important in evolutionary biology, agriculture, and medicine. Variation in such traits is attributable to many, possibly interacting, genes whose expression may be sensitive to the environment, which makes their dissection into underlying causative factors difficult. An important population parameter for quantitative traits is heritability, the proportion of total variance that is due to genetic factors. Response to artificial and natural selection and the degree of resemblance between relatives are all a function of this parameter. Following the classic paper by R. A. Fisher in 1918, the estimation of additive and dominance genetic variance and heritability in populations is based upon the expected proportion of genes shared between different types of relatives, and explicit, often controversial and untestable models of genetic and non-genetic causes of family resemblance. With genome-wide coverage of genetic markers it is now possible to estimate such parameters solely within families using the actual degree of identity-by-descent sharing between relatives. Using genome scans on 4,401 quasi-independent sib pairs, of which 3,375 pairs had phenotypes, we estimated the heritability of height from empirical genome-wide identity-by-descent sharing, which varied from 0.374 to 0.617 (mean 0.498, standard deviation 0.036). The variance in identity-by-descent sharing per chromosome and per genome was consistent with theory. The maximum likelihood estimate of the heritability for height was 0.80 with no evidence for non-genetic causes of sib resemblance, consistent with results from independent twin and family studies but using an entirely separate source of information. Our application shows that it is feasible to estimate genetic variance solely from within-family segregation and provides an independent validation of previously untestable assumptions. Given sufficient data, our new paradigm will allow the estimation of genetic variation for disease susceptibility and quantitative traits that is free from confounding with non-genetic factors and will allow partitioning of genetic variation into additive and non-additive components.
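The within-family idea can be made concrete with a toy calculation. The sketch below uses Haseman-Elston regression, a simpler stand-in for the authors' maximum likelihood estimator, on simulated sib pairs: for a standardized trait with additive heritability h2 and realized IBD proportion pi, E[(y1 - y2)^2] = 2(1 - pi*h2), so h2 is minus half the regression slope. The sample size is inflated well beyond the paper's 3,375 phenotyped pairs so the toy estimate is stable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, h2_true = 200_000, 0.8   # inflated n; HE regression is noisy at real sizes

# realized genome-wide IBD sharing of sib pairs: mean 0.5, SD ~0.036 (as in the paper)
pi = np.clip(rng.normal(0.5, 0.036, n_pairs), 0.35, 0.65)

# simulate standardized sib phenotypes whose correlation is pi * h2 per pair
cor = pi * h2_true
z1, z2 = rng.standard_normal(n_pairs), rng.standard_normal(n_pairs)
y1 = z1
y2 = cor * z1 + np.sqrt(1.0 - cor**2) * z2

# Haseman-Elston: regress squared pair differences on IBD sharing
d2 = (y1 - y2) ** 2
slope = np.polyfit(pi, d2, 1)[0]
print(f"Haseman-Elston h2 estimate: {-slope / 2:.2f} (true {h2_true})")
```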
Abstract:
Irrigation practices that are profligate in their use of water have come under closer scrutiny from water managers and the public. Trickle irrigation can increase water use efficiency, but only if the system is designed to match the soil and plant conditions. Recently we provided a software tool, WetUp (http://www.clw.csiro.au/products/wetup/), to calculate the wetting patterns produced by trickle irrigation emitters. WetUp uses an analytical solution to calculate the wetted perimeter for both buried and surface emitters. This analytical solution makes a number of assumptions, two of which are that the wetting front is defined by the water content at which the hydraulic conductivity (K) is 1 mm day^-1 and that the flow occurs from a point source. Here we compare the wetting patterns calculated with the analytical solution against those from HYDRUS2D, a two-dimensional numerical model of water flow, for typical soils. The results show that the wetting patterns are similar, except when the soil properties mean that a point source is no longer a good description of the flow regime. Difficulties were also experienced in obtaining stable HYDRUS2D solutions for soils with low hydraulic conductivities.
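A minimal sketch of the wetting-front criterion mentioned above, assuming an illustrative power-law conductivity function (Brooks-Corey type; the parameter values are assumptions, not WetUp's):

```python
# Find the water content at which K falls to 1 mm/day, the criterion
# WetUp uses to define the wetting front. K(Se) = Ks * Se**eta with
# effective saturation Se = (theta - theta_r) / (theta_s - theta_r).
Ks = 500.0                     # assumed saturated conductivity, mm/day
eta = 9.0                      # assumed pore-interaction exponent
theta_r, theta_s = 0.05, 0.40  # assumed residual / saturated water contents

K_front = 1.0                               # wetting-front criterion, mm/day
Se = (K_front / Ks) ** (1.0 / eta)          # effective saturation at the front
theta_front = theta_r + Se * (theta_s - theta_r)
print(f"wetting-front water content: {theta_front:.3f} cm3/cm3")
```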
Abstract:
This paper examines the source-country determinants of FDI into Japan. The paper highlights certain methodological and theoretical weaknesses in the previous literature and offers some explanations for hitherto ambiguous results. Specifically, it highlights the importance of panel data analysis and of identifying fixed effects rather than simply pooling the data; indeed, we argue that many of the results reported elsewhere are an artefact of this mis-specification. To this end, pooled, fixed effects and random effects estimates are compared. The results suggest that FDI into Japan is inversely related to trade flows, such that trade and FDI are substitutes, and that FDI increases with home-country political and economic stability. The paper also shows that previously reported results regarding the importance of exchange rates, relative borrowing costs and labour costs in explaining FDI flows are sensitive to the econometric specification and estimation approach. The paper also discusses the importance of these results within a policy context. In recent years Japan has sought to attract FDI, though many firms still complain of barriers to inward investment penetration in Japan. The results show that cultural and geographic distance are only of marginal importance in explaining FDI, and that the results are consistent with the market-seeking explanation of FDI. As such, the attitude to risk in the source country is strongly related to the size of FDI flows to Japan.
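The pooling-versus-fixed-effects point can be shown with a toy panel. In the hedged sketch below (simulated data and hypothetical variable names, not the paper's data set), a within (entity-demeaned) estimator removes time-invariant country effects that bias the pooled OLS slope, here even flipping its sign:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_countries, n_years = 30, 20

# unobserved source-country effects, correlated with the regressor
alpha = rng.normal(0.0, 2.0, n_countries)
trade = rng.normal(0.0, 1.0, (n_countries, n_years)) + alpha[:, None]
fdi = -0.5 * trade + alpha[:, None] + rng.normal(0.0, 1.0, (n_countries, n_years))

df = pd.DataFrame({
    "country": np.repeat(np.arange(n_countries), n_years),
    "trade": trade.ravel(),
    "fdi": fdi.ravel(),
})

# pooled OLS: biased, the country effect is absorbed into the slope
b_pooled = np.polyfit(df["trade"], df["fdi"], 1)[0]

# fixed effects (within estimator): demean within each country, then OLS
demeaned = df.groupby("country")[["trade", "fdi"]].transform(lambda s: s - s.mean())
b_fe = np.polyfit(demeaned["trade"], demeaned["fdi"], 1)[0]

print(f"pooled: {b_pooled:+.2f}   fixed effects: {b_fe:+.2f}   (true -0.5)")
```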
Abstract:
The objective of this work was to explore the performance of a recently introduced source extraction method, FSS (Functional Source Separation), in recovering induced oscillatory change responses from extra-cephalic magnetoencephalographic (MEG) signals. Unlike algorithms used to solve the inverse problem, FSS makes no assumption about the underlying biophysical source model; instead, it uses task-related features (functional constraints) to estimate the source(s) of interest. FSS was compared with blind source separation (BSS) approaches such as Principal and Independent Component Analysis (PCA and ICA), which are not subject to any explicit forward solution or functional constraint but require source uncorrelatedness (PCA) or independence (ICA). A visual MEG experiment with signals recorded from six subjects viewing a set of static horizontal black/white square-wave grating patterns at different spatial frequencies was analyzed. The beamforming technique Synthetic Aperture Magnetometry (SAM) was applied to localize task-related sources; the resulting spatial filters were used to automatically select BSS and FSS components in the spatial area of interest. Source spectral properties were investigated using Morlet-wavelet time-frequency representations, and significant task-induced changes were evaluated by means of a resampling technique; the resulting spectral behaviours in the gamma frequency band of interest (20-70 Hz), as well as the spatial-frequency-dependent gamma reactivity, were quantified and compared among methods. Among the tested approaches, only FSS was able to estimate the expected sustained gamma activity enhancement in primary visual cortex throughout the whole duration of the stimulus presentation for all subjects, and to obtain sources comparable to invasively recorded data.
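For readers unfamiliar with the BSS baseline, the following hedged sketch (simulated toy signals, not the MEG pipeline) shows ICA unmixing followed by automatic selection of the most task-reactive component, loosely analogous to selecting components in an area of interest:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
fs = 600
t = np.arange(0, 4.0, 1.0 / fs)   # 4 s at 600 Hz
task = t >= 2.0                   # "stimulus" in the second half

# two toy sources: a task-reactive 40 Hz gamma burst and an ongoing 10 Hz rhythm
gamma = np.sin(2 * np.pi * 40 * t) * (0.2 + 1.0 * task)
alpha = np.sin(2 * np.pi * 10 * t)
sources = np.c_[gamma, alpha] + 0.05 * rng.standard_normal((t.size, 2))

mixing = rng.standard_normal((6, 2))   # six toy "sensors"
X = sources @ mixing.T

ica = FastICA(n_components=2, random_state=0)
comps = ica.fit_transform(X)           # (samples, components)

# select the component whose variance increases most during the task
reactivity = comps[task].var(axis=0) / comps[~task].var(axis=0)
best = int(np.argmax(reactivity))
print(f"selected component {best}, task/rest variance ratio {reactivity[best]:.1f}")
```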
Abstract:
Distributed source coding (DSC) has recently been considered as an efficient approach to data compression in wireless sensor networks (WSN). Using this coding method, multiple sensor nodes compress their correlated observations without inter-node communication, so energy and bandwidth can be saved. In this paper, we investigate a random-binning-based DSC scheme for remote source estimation in WSN and its performance in terms of estimated signal-to-distortion ratio (SDR). With the introduction of a detailed power consumption model for wireless sensor communications, we quantitatively analyze the overall network energy consumption of the DSC scheme. We further propose a novel energy-aware transmission protocol for the DSC scheme, which flexibly optimizes DSC performance in terms of either SDR or energy consumption by adapting the source coding and transmission parameters to the network conditions. Simulations validate the energy efficiency of the proposed adaptive transmission protocol.
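The random-binning idea behind DSC can be shown with a toy example. In the hedged sketch below (illustrative parameters, not the paper's scheme), a sensor transmits only a bin index of its quantized reading, and the decoder recovers the reading from correlated side information by picking the bin member closest to it:

```python
import numpy as np

rng = np.random.default_rng(3)
n, levels, n_bins = 10_000, 64, 8   # 6-bit readings sent as 3-bit bin indices

x = rng.integers(0, levels, n)      # quantized sensor observation
side = x + rng.integers(-3, 4, n)   # correlated side information at the sink
tx = x % n_bins                     # only the bin index is transmitted

# decoder: among all levels in the received bin, choose the one nearest
# to the side information
cands = np.arange(levels)
decoded = np.empty(n, dtype=int)
for i in range(n):
    members = cands[cands % n_bins == tx[i]]
    decoded[i] = members[np.argmin(np.abs(members - side[i]))]

saved = np.log2(levels) - np.log2(n_bins)
print(f"{saved:.0f} bits/sample saved, error rate {(decoded != x).mean():.4f}")
# bin spacing (8) exceeds twice the max side-info error (3), so decoding is exact
```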
Abstract:
In this talk we investigate the use of spectrally shaped amplified spontaneous emission (ASE) to emulate highly dispersed wavelength division multiplexed (WDM) signals in an optical transmission system. Such a technique offers various simplifications to large-scale WDM experiments. Not only does it reduce transmitter complexity, removing the need for multiple source lasers, it potentially reduces test and measurement complexity by requiring only the centre channel of a WDM system to be measured in order to estimate WDM worst-case performance. The use of ASE as a test and measurement tool is well established in optical communication systems and several measurement techniques will be discussed [1, 2]. One of the most prevalent uses of ASE is in the measurement of receiver sensitivity, where ASE is introduced to degrade the optical signal-to-noise ratio (OSNR) and the resulting bit error rate (BER) is measured at the receiver. From an analytical point of view, noise has been used to emulate system performance: the Gaussian Noise model is used as an estimate of highly dispersed signals and has attracted considerable interest [3]. The work presented here extends the use of ASE by using it to emulate highly dispersed WDM signals, and in the process reduces WDM transmitter complexity and receiver measurement time in a lab environment. Results thus far have indicated [2] that such a transmitter configuration is consistent with an AWGN model for transmission, with modulation format complexity and nonlinearities playing a key role in estimating the performance of systems utilising the ASE channel emulation technique. We conclude this work by investigating techniques capable of characterising the nonlinear and damage limits of optical fibres and the resultant information capacity limits.
REFERENCES
1. McCarthy, M. E., N. Mac Suibhne, S. T. Le, P. Harper, and A. D. Ellis, "High spectral efficiency transmission emulation for non-linear transmission performance estimation for high order modulation formats," 2014 European Conference on Optical Communication (ECOC), IEEE, 2014.
2. Ellis, A., N. Mac Suibhne, F. Gunning, and S. Sygletos, "Expressions for the nonlinear transmission performance of multi-mode optical fiber," Opt. Express, Vol. 21, 22834-22846, 2013.
3. Vacondio, F., O. Rival, C. Simonneau, E. Grellier, A. Bononi, L. Lorcy, J. Antona, and S. Bigo, "On nonlinear distortions of highly dispersive optical coherent systems," Opt. Express, Vol. 20, 1022-1032, 2012.
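As a pointer to the Gaussian Noise model view mentioned above, the following hedged sketch (assumed ASE and nonlinear-interference coefficients, not measured values) treats nonlinear interference as additive Gaussian noise growing with the cube of launch power and locates the resulting optimum launch power:

```python
import numpy as np

p_dbm = np.linspace(-6, 6, 121)
p = 1e-3 * 10 ** (p_dbm / 10)   # per-channel launch power, W
p_ase = 1e-6                    # assumed ASE noise power, W
eta = 1.2e3                     # assumed NLI coefficient, 1/W^2

# GN-model effective SNR: signal over ASE plus cubic nonlinear interference
snr = p / (p_ase + eta * p**3)
best = np.argmax(snr)
print(f"optimum launch power {p_dbm[best]:+.1f} dBm, "
      f"peak SNR {10 * np.log10(snr[best]):.1f} dB")
# analytically the optimum sits where eta * P**3 equals half the ASE power
```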
Abstract:
This dissertation presents a unique research opportunity by using recordings which provide an electrocardiogram (ECG) plus a reference breathing signal (RBS). ECG-derived breathing (EDR) is measured and correlated against the RBS. Standard deviations of multiresolution wavelet analysis coefficients (SDMW) are obtained from heart rate and classified using the RBS. Prior work by others used selected patients for sleep apnea scoring with EDR but no RBS; another prior study classified selected heart disease patients with SDMW but no RBS. This study used randomly chosen sleep disorder patient recordings: central and obstructive apneas, with and without heart disease. Implementation required creating an application because existing systems were limited in power and scope. A review survey was created to choose a development environment; the survey is presented as a learning tool and teaching resource. The development objectives were rapid development using limited resources (manpower and money), and Open Source resources were used exclusively for the implementation. Results show: (1) Three groups of patients exist in the study. Grouping RBS correlations shows a response with either ECG interval or amplitude variation; in a third group, neither ECG intervals nor amplitude variation correlates with breathing. (2) Previous work by other groups analyzed SDMW. Similar results were found in this study, but some subjects had higher SDMW, attributed to a large number of apneas, arousals and/or disconnects. SDMW does not need the RBS to show that apneic conditions exist within ECG recordings. (3) Results in this study support the assertion that autonomic nervous system variation was measured with SDMW. Measurements using the RBS are not corrupted by breathing even though respiration overlaps the same frequency band. Overall, this work is an Open Source resource which can be reused, modified and/or expanded, and it might fast-track additional research. In the future the system could also be used with public domain data: prerecorded data exist in similar formats in public databases, which could provide additional research opportunities.
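A hedged sketch of the SDMW measure (a toy RR-interval series and an assumed wavelet choice, not the dissertation's data or settings): decompose the heart rate series with a multiresolution wavelet transform and take the standard deviation of the detail coefficients at each level.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
n = 1024
t = np.arange(n)
# toy RR-interval series (s): baseline + respiratory-band oscillation + noise
rr = 0.8 + 0.02 * np.sin(2 * np.pi * t / 4.0) + 0.01 * rng.standard_normal(n)

# multiresolution decomposition: returns [approx, d5, d4, d3, d2, d1]
coeffs = pywt.wavedec(rr, "db4", level=5)
for lvl, d in zip(range(5, 0, -1), coeffs[1:]):
    print(f"SDMW level {lvl}: {np.std(d):.4f}")
```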
Abstract:
This study focuses on explicitly quantifying the sediment budget of deeply incised ravines in the lower Le Sueur River watershed in southern Minnesota. High-rate gully-erosion equations, along with the Universal Soil Loss Equation (USLE), were implemented in a numerical modeling approach based on time-integration of the sediment balance equations. The model estimates the rates of ravine width and depth change and the amount of sediment periodically flushed from the ravines. Components of the sediment budget of the ravines were simulated with the model, and the results suggest that the ravine walls are the major sediment source in the ravines. A sensitivity analysis revealed that the erodibility coefficients of the gully bed and wall, the local slope angle and the Manning's coefficient are the key parameters controlling the rate of sediment production. Recommendations to guide further monitoring efforts in the watershed and more detailed modeling approaches are highlighted as a result of this modeling effort.
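The USLE component lends itself to a one-line formula, A = R*K*LS*C*P. The sketch below uses illustrative factor values, not values calibrated to the Le Sueur watershed:

```python
def usle(R, K, LS, C, P):
    """Universal Soil Loss Equation: mean annual soil loss A in t/ha/yr."""
    return R * K * LS * C * P

# example with assumed factors: moderate erosivity, erodible silty soil,
# steep short slope typical of a ravine wall
A = usle(R=150.0,   # rainfall-runoff erosivity, MJ mm / (ha h yr)
         K=0.045,   # soil erodibility, t ha h / (ha MJ mm)
         LS=3.5,    # slope length-steepness factor (dimensionless)
         C=0.25,    # cover-management factor
         P=1.0)     # support-practice factor
print(f"estimated soil loss: {A:.1f} t/ha/yr")
```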
Abstract:
The goal of power monitoring in electrical power systems is to promote reliability as well as the quality of electrical power. To that end, this dissertation proposes a new wavelet-transform-based theory of power for real-time estimation of RMS voltages and currents and of several power quantities, such as active power, reactive power, apparent power, and power factor. Accurate estimation of RMS and power values is important for many applications, such as the design and analysis of power systems, compensation devices for improving power quality, and energy metering instruments. Simulation and experimental results obtained with the proposed Maximal Overlap Discrete Wavelet Transform (MODWT)-based method were compared with IEEE Standard 1459-2010 and with a commercial oscilloscope, respectively, yielding equivalent results. The proposed method performed well with compact mother wavelets, which suits real-time applications.
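A hedged sketch of wavelet-based RMS estimation in this spirit, using PyWavelets' energy-preserving stationary wavelet transform as a stand-in for the MODWT: with an energy-preserving transform, Parseval's relation lets the RMS be assembled from per-band coefficient energies.

```python
import numpy as np
import pywt  # requires PyWavelets >= 1.1 for norm= and trim_approx=

fs = 1024
t = np.arange(1024) / fs
v = 311.0 * np.sin(2 * np.pi * 60 * t)   # 60 Hz voltage, ~220 Vrms

# normalized (energy-preserving) undecimated transform, MODWT-like
coeffs = pywt.swt(v, "db4", level=4, norm=True, trim_approx=True)

# Parseval: total coefficient energy equals signal energy
energy = sum(float(np.sum(c**2)) for c in coeffs)
rms_wavelet = np.sqrt(energy / v.size)
rms_direct = np.sqrt(np.mean(v**2))
print(f"wavelet RMS: {rms_wavelet:.2f} V   direct RMS: {rms_direct:.2f} V")
```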
Abstract:
The quantitative diatom analysis of 218 surface sediment samples recovered in the Atlantic and western Indian sectors of the Southern Ocean is used to define a reference data base for paleotemperature estimation from diatom assemblages using the Imbrie and Kipp transfer function method. The criteria that justify the exclusion of samples and species from the raw data set in order to define the reference database are outlined and discussed. Sensitivity tests with eight data sets were carried out, evaluating the effects of the overall dominance of single species, different methods of species abundance ranking, and no-analog conditions (e.g., Eucampia antarctica) on the estimated paleotemperatures. The defined transfer functions were applied to a sediment core from the northern Antarctic zone. Overall dominance of Fragilariopsis kerguelensis in the diatom assemblages resulted in a close affinity between the paleotemperature curve and the relative abundance pattern of this species downcore. Logarithmic conversion of the counting data, applied together with other ranking methods to compensate for the dominance of F. kerguelensis, yielded the best statistical results. A reliable diatom transfer function for future paleotemperature estimation is presented.
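A toy, hedged sketch of the Imbrie and Kipp workflow (PCA standing in for their assemblage factor analysis; synthetic data throughout): reduce core-top species percentages to a few factors, calibrate a regression of modern sea-surface temperature on those factors, and the fitted regression can then be applied downcore.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n_samples, n_species = 200, 25
sst = rng.uniform(-1, 18, n_samples)   # modern SSTs at core-top sites, deg C

# synthetic species percentages with temperature-dependent structure
pref = rng.normal(0, 1, n_species)     # each species' temperature response
counts = np.exp(pref[None, :] * (sst[:, None] - 8.0) / 10.0
                + 0.3 * rng.standard_normal((n_samples, n_species)))
perc = 100 * counts / counts.sum(axis=1, keepdims=True)

# factors from the calibration set, then SST regressed on factor scores
factors = PCA(n_components=4).fit(perc)
reg = LinearRegression().fit(factors.transform(perc), sst)

pred = reg.predict(factors.transform(perc))
rmse = np.sqrt(np.mean((pred - sst) ** 2))
print(f"calibration RMSE: {rmse:.2f} deg C")
```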
Abstract:
ODP Site 1089 is optimally located to monitor maxima in Agulhas heat and salt spillage from the Indian to the Atlantic Ocean. Radiolarian-based paleotemperature transfer functions allowed us to reconstruct the climatic history of the last 450 kyr at this location. A warm sea surface temperature anomaly during Marine Isotope Stage (MIS) 10 was recognized and traced to other oceanic records along the surface branch of the global thermohaline circulation (THC) system; it is particularly marked at locations where a strong interaction between oceanic and atmospheric overturning cells and fronts occurs. This anomaly is absent in the Vostok ice core deuterium record and in oceanic records from the Antarctic Zone. However, it is present in the deuterium excess record from the Vostok ice core, interpreted as reflecting the temperature at the moisture source site for the snow precipitated at Vostok Station. As atmospheric models predict a subtropical Indian Ocean source for such moisture, this provides the necessary teleconnection between East Antarctica and ODP Site 1089, as the subtropical Indian Ocean is also the source area of the Agulhas Current, the main climate agent at our study location. The presence of the MIS 10 anomaly in the δ13C foraminiferal records from the same core supports its connection to oceanic mechanisms, linking stronger Agulhas spillover intensity to increased productivity in the study area. In analogy to modern oceanographic observations, we suggest this to be a consequence of a shallow nutricline, induced by eddy mixing and baroclinic tide generation, which are in turn connected to the flow geometry and intensity of the Agulhas Current as it flows past the Agulhas Bank. We interpret the intensified inflow of the Agulhas Current into the South Atlantic as a response to the switch between lower and higher amplitude in the insolation forcing in the Agulhas Current source area. This would result in higher SSTs in the Cape Basin during glacial MIS 10, due to the release into the South Atlantic of heat previously accumulated in the subtropical and equatorial Indian and Pacific Oceans. If our explanation of the MIS 10 anomaly in terms of an insolation variability switch is correct, we might expect a future Agulhas SST anomaly event to further delay the onset of the next glacial age. In fact, the insolation forcing conditions for the Holocene (the current interglacial) are very similar to those present during MIS 11 (the interglacial preceding MIS 10), as both periods are characterized by low insolation variability in the Agulhas Current source area. Natural climatic variability will force the Earth system in the same direction as the anthropogenic global warming trend, and will thus lead to even warmer than expected global temperatures in the near future.
Abstract:
Periods of drought and low streamflow can have profound impacts on both human and natural systems. People depend on a reliable source of water for numerous reasons, including potable water supply and the production of economic value through agriculture or energy production. Aquatic ecosystems depend on water in addition to the economic benefits they provide to society through ecosystem services. Given that periods of low streamflow may become more extreme and frequent in the future, it is important to study the factors that control water availability during these times. In the absence of precipitation, the slower hydrological response of groundwater systems plays an amplified role in water supply, and understanding the variability of the fraction of streamflow contributed by baseflow, or groundwater, during periods of drought provides insight into future water availability and how it can best be managed. The Mills River Basin in North Carolina is chosen as a case study to test this understanding. First, a physically meaningful estimate of baseflow is obtained from USGS streamflow data via computerized hydrograph analysis techniques. Time series methods, including wavelet analysis, then highlight signals of non-stationarity and evaluate the changes in variance needed to better understand the natural variability of baseflow and low flows. In addition to natural variability, human influence must be taken into account in order to accurately assess how the combined system reacts to periods of low flow: defining a combined demand that consists of both natural and human demand allows a more rigorous assessment of the level of sustainable use of a shared resource, in this case water. The analysis of baseflow variability can differ with regional location and local hydrogeology, but it was found that baseflow varies on scales from multiyear, such as those associated with ENSO (3.5 and 7 years), up to multidecadal, with most of the contributing variance coming from decadal or multiyear scales. It was also found that the behavior of baseflow, and consequently water availability, depends a great deal on overall precipitation, the tracks of hurricanes or tropical storms and associated climate indices, as well as physiography and hydrogeology. Using the Duke Combined Hydrology Model (DCHM), reasonably accurate estimates of streamflow during periods of low flow were obtained, in part due to the model's ability to capture subsurface processes. Accurately simulating streamflow levels and subsurface interactions during periods of drought can be very valuable to water suppliers and decision makers, and ultimately to citizens. Knowledge of future droughts and periods of low flow, together with tracking of customer demand, will allow better management practices on the part of water suppliers, such as knowing when to withdraw more water during a surplus so that stress on the system is minimized when the water supply is not ample.
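One common computerized hydrograph analysis technique of the kind mentioned above is the Lyne-Hollick recursive digital filter, which splits streamflow into quickflow and baseflow. The sketch below (assumed filter parameter and a synthetic daily flow series, not the Mills River data) applies one forward pass and reports the baseflow index:

```python
import numpy as np

def lyne_hollick(q, alpha=0.925):
    """One forward pass of the Lyne-Hollick filter; returns baseflow."""
    quick = np.zeros_like(q)
    for i in range(1, q.size):
        quick[i] = alpha * quick[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
        quick[i] = max(quick[i], 0.0)        # quickflow cannot be negative
    return np.clip(q - quick, 0.0, q)        # baseflow bounded by streamflow

rng = np.random.default_rng(6)
days = np.arange(365)
recession = 5.0 * np.exp(-days / 200.0)      # slow groundwater recession
storms = np.convolve(rng.poisson(0.05, 365) * rng.uniform(5, 20, 365),
                     np.exp(-np.arange(10) / 2.0))[:365]
q = recession + storms                       # synthetic daily streamflow

base = lyne_hollick(q)
print(f"baseflow index: {base.sum() / q.sum():.2f}")
```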
Abstract:
This paper introduces the LiDAR compass, a bounded and extremely lightweight heading estimation technique that combines a two-dimensional laser scanner and axis maps, which represent the orientations of flat surfaces in the environment. Although suitable for a variety of indoor and outdoor environments, the LiDAR compass is especially useful for embedded and real-time applications requiring low computational overhead. For example, when combined with a sensor that can measure translation (e.g., wheel encoders) the LiDAR compass can be used to yield accurate, lightweight, and very easily implementable localization that requires no prior mapping phase. The utility of using the LiDAR compass as part of a localization algorithm was tested on a widely-available open-source data set, an indoor environment, and a larger-scale outdoor environment. In all cases, it was shown that the growth in heading error was bounded, which significantly reduced the position error to less than 1% of the distance travelled.
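A simplified, hedged sketch of the axis-map idea (an illustration, not the paper's implementation): take the orientations of segments between consecutive 2D scan points, fold them modulo 90 degrees (flat surfaces in man-made environments are mostly orthogonal), and take the length-weighted histogram peak as the heading relative to the dominant building axis.

```python
import numpy as np

def heading_from_scan(points, bins=90):
    """points: (N, 2) array of 2D scan points in the sensor frame."""
    d = np.diff(points, axis=0)
    seg_angles = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 90.0
    # weight each segment by its length so long flat surfaces dominate
    hist, edges = np.histogram(seg_angles, bins=bins, range=(0.0, 90.0),
                               weights=np.hypot(d[:, 0], d[:, 1]))
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])   # degrees, modulo 90

# toy scan of two orthogonal walls seen from a sensor rotated by 20 degrees
theta = np.radians(20.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
wall1 = np.c_[np.linspace(0, 5, 50), np.full(50, 2.0)]   # along x
wall2 = np.c_[np.full(50, 5.0), np.linspace(2, 7, 50)]   # along y
scan = np.vstack([wall1, wall2]) @ rot.T

print(f"estimated heading (mod 90): {heading_from_scan(scan):.1f} deg")
```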