950 results for frequency-doubling efficiency
Abstract:
Thesis (Ph.D.)--University of Washington, 2015
Abstract:
Abnormalities in the topology of brain networks may be an important feature and etiological factor of psychogenic non-epileptic seizures (PNES). To explore this possibility, we applied a graph-theoretical approach to functional networks based on resting-state EEGs from 13 PNES patients and 13 age- and gender-matched controls. The networks were extracted from Laplacian-transformed time series by a cross-correlation method. PNES patients showed close-to-normal local connectivity, global connectivity, and small-world structure, estimated with the clustering coefficient, modularity, global efficiency, and small-worldness (SW) metrics, respectively. Yet the number of PNES attacks per month correlated with weaker local connectedness and with a skewed balance between local and global connectedness quantified by SW, all in the EEG alpha band. In the beta band, patients demonstrated above-normal resiliency, measured with the assortativity coefficient, which also correlated with the frequency of PNES attacks. This interictal EEG phenotype may help improve differentiation between PNES and epilepsy. The results also suggest that local connectivity could be a target for therapeutic interventions in PNES: selective modulation (strengthening) of local connectivity might improve the skewed balance between local and global connectivity and so prevent PNES events.
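For readers who want to reproduce this kind of analysis, here is a minimal Python sketch of the graph metrics named above (clustering coefficient, modularity, global efficiency, assortativity, small-worldness), assuming a channel-by-channel cross-correlation matrix has already been computed; the placeholder data, the proportional threshold, and the random-graph normalisation are illustrative choices, not the study's exact pipeline.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)

# Placeholder cross-correlation matrix between EEG channels; in practice this
# would come from the Laplacian-transformed resting-state recordings.
n_ch = 19
data = rng.standard_normal((n_ch, 2048)) + 0.6 * rng.standard_normal((1, 2048))
corr = np.abs(np.corrcoef(data))
np.fill_diagonal(corr, 0.0)

# Keep the strongest 20% of possible edges (illustrative proportional threshold).
thr = np.quantile(corr[np.triu_indices(n_ch, 1)], 0.80)
G = nx.from_numpy_array((corr > thr).astype(int))

C = nx.average_clustering(G)                                  # local connectedness
E_glob = nx.global_efficiency(G)                              # global integration
Q = community.modularity(G, community.greedy_modularity_communities(G))
r = nx.degree_assortativity_coefficient(G)                    # resiliency proxy

# Small-worldness: clustering and path length normalised by an equivalent random
# graph; path lengths are taken on the largest connected component to avoid infinities.
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
giant_G = G.subgraph(max(nx.connected_components(G), key=len))
giant_R = R.subgraph(max(nx.connected_components(R), key=len))
SW = (C / nx.average_clustering(R)) / (
    nx.average_shortest_path_length(giant_G) / nx.average_shortest_path_length(giant_R))

print(f"C={C:.2f}  Eglob={E_glob:.2f}  Q={Q:.2f}  assortativity={r:.2f}  SW={SW:.2f}")
```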
Abstract:
The attached file was created with Scientific WorkPlace (LaTeX).
Abstract:
We consider two new approaches to nonparametric estimation of the leverage effect. The first approach uses stock prices alone. The second uses data on stock prices together with a volatility instrument, such as the CBOE volatility index (VIX) or the Black-Scholes implied volatility. The theoretical justification for the instrument-based estimator relies on a certain invariance property, which can be exploited when high-frequency data are available. The price-only estimator is more robust since it is valid under weaker assumptions. However, in the presence of a valid volatility instrument, the price-only estimator is inefficient, as the instrument-based estimator has a faster rate of convergence. We consider two empirical applications, in which we study the relationship between the leverage effect and the debt-to-equity ratio, credit risk, and illiquidity.
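As a rough illustration of the price-only approach, the numpy sketch below proxies spot variance by block-wise realized variance of high-frequency log returns and measures the leverage effect as the covariation between block returns and subsequent changes in that proxy; the blocking scheme, the simulated path, and the absence of microstructure-noise corrections are simplifying assumptions, not the paper's estimator.

```python
import numpy as np

def leverage_effect_price_only(logp, block=78):
    """Crude price-only proxy for the leverage effect: covariation between block
    returns and subsequent changes in a block-wise realized-variance proxy for
    spot variance. Ignores noise and boundary corrections used by formal estimators."""
    r = np.diff(logp)
    n_blocks = len(r) // block
    r = r[:n_blocks * block].reshape(n_blocks, block)
    rv = (r ** 2).sum(axis=1)          # realized variance per block
    ret = r.sum(axis=1)                # return per block
    return float(np.sum(ret[:-1] * np.diff(rv)))

# Placeholder high-frequency path with a built-in negative price-volatility correlation.
rng = np.random.default_rng(3)
n, dt, rho = 78 * 250, 1.0 / (78 * 252), -0.7
v, logp = np.empty(n), np.zeros(n)
v[0] = 0.04
for t in range(1, n):
    z1, z2 = rng.standard_normal(2)
    zv = rho * z1 + np.sqrt(1.0 - rho ** 2) * z2
    v[t] = abs(v[t - 1] + 5.0 * (0.04 - v[t - 1]) * dt
               + 0.5 * np.sqrt(v[t - 1] * dt) * zv)
    logp[t] = logp[t - 1] + np.sqrt(v[t - 1] * dt) * z1

print(leverage_effect_price_only(logp))   # a negative value indicates a leverage effect
```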
Abstract:
The increasing interest in the interaction of light with electricity and with electronically active materials has made the materials and techniques for producing semitransparent, electrically conducting films particularly attractive. Transparent conductors have found major applications in a number of electronic and optoelectronic devices, including resistors, transparent heating elements, antistatic and electromagnetic-shielding coatings, transparent electrodes for solar cells, antireflection coatings, heat-reflecting mirrors in glass windows, and many others. Tin-doped indium oxide (indium tin oxide, or ITO) is one of the most commonly used transparent conducting oxides. At present, and likely well into the future, this material offers the best available performance in terms of conductivity and transmittance, combined with excellent environmental stability, reproducibility, and good surface morphology. Although partial transparency, with a reduction in conductivity, can be obtained for very thin metallic films, high transparency and simultaneously high conductivity cannot be attained in intrinsic stoichiometric materials. The only way this can be achieved is by creating electron degeneracy in a wide-bandgap material (Eg > 3 eV for visible radiation) by controllably introducing non-stoichiometry and/or appropriate dopants. These conditions can be conveniently met for ITO as well as for a number of other materials such as zinc oxide and cadmium oxide. ITO shows an interesting and technologically important combination of properties, namely high luminous transmittance, high IR reflectance, good electrical conductivity, excellent substrate adherence, and chemical inertness. ITO is a key part of solar cells, window coatings, energy-efficient buildings, and flat-panel displays. In solar cells, ITO can serve as the transparent, conducting top layer that lets light into the cell to illuminate the junction and lets electricity flow out; improving the ITO layer can help improve solar-cell efficiency. A transparent conducting oxide is a material with high transparency in the desired part of the spectrum together with high electrical conductivity. Beyond these key properties of transparent conducting oxides (TCOs), ITO has a number of other important characteristics. The structure of ITO can be amorphous, crystalline, or mixed, depending on the deposition temperature and atmosphere, and its electro-optical properties are a function of the crystallinity of the material. In general, ITO deposited at room temperature is amorphous, while ITO deposited at higher temperatures is crystalline. Depositing at high temperatures is more expensive than at room temperature, and may not be compatible with the underlying devices.
The main objective of this thesis work is to optimise the growth conditions of indium tin oxide thin films at low processing temperatures. The films are prepared by radio-frequency magnetron sputtering under various deposition conditions. The films are also deposited onto flexible substrates by employing a bias sputtering technique. The films thus grown were characterised using different tools. A powder X-ray diffractometer was used to analyse the crystalline nature of the films. Energy-dispersive X-ray analysis (EDX) and scanning electron microscopy (SEM) were used to evaluate the composition and morphology of the films. Optical properties were investigated with a UV-VIS-NIR spectrophotometer by recording the transmission/absorption spectra. The electrical properties were studied using the van der Pauw four-probe technique. The plasma generated during the sputtering of the ITO target was analysed using Langmuir probe and optical emission spectroscopy studies.
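As a small worked example of the van der Pauw analysis mentioned above, the sketch below solves the van der Pauw relation for sheet resistance from two four-probe resistances and converts it to resistivity; the resistance values, film thickness, and the scipy-based root finder are illustrative assumptions, not measurements or code from the thesis.

```python
import numpy as np
from scipy.optimize import brentq

def sheet_resistance(R_A, R_B):
    """Solve the van der Pauw relation
    exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1 for the sheet resistance R_s (ohm/sq),
    given the two four-probe resistances R_A and R_B (ohm)."""
    f = lambda Rs: np.exp(-np.pi * R_A / Rs) + np.exp(-np.pi * R_B / Rs) - 1.0
    return brentq(f, 1e-6 * (R_A + R_B), 1e6 * (R_A + R_B))

# Example numbers (made up, not measurements from the thesis):
R_A, R_B = 12.3, 13.1          # ohm, two van der Pauw contact configurations
t = 150e-9                     # film thickness in metres
Rs = sheet_resistance(R_A, R_B)
print(f"R_s = {Rs:.1f} ohm/sq, resistivity = {Rs * t * 1e2:.2e} ohm*cm")
```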
Abstract:
The effect of coupling on two high-frequency-modulated semiconductor lasers is studied numerically. Phase diagrams and bifurcation diagrams are drawn. As the coupling constant is increased, the system goes from chaotic to periodic behavior through a reverse period-doubling sequence. The Lyapunov exponent is calculated to characterize the chaotic and periodic regions.
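A minimal sketch of the kind of calculation described, assuming dimensionless single-mode rate equations with a sinusoidally modulated pump and a linear coupling term as a stand-in model; the parameter values and the Benettin-style estimate of the largest Lyapunov exponent are illustrative choices, not the authors' equations or method.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless, mutually coupled, pump-modulated single-mode rate equations used
# here purely as a stand-in model: s_i is photon density, n_i is carrier density.
GAMMA, J0, M, OMEGA, ETA = 0.01, 1.5, 0.6, 0.07, 0.02   # assumed parameter values

def rhs(t, y):
    s1, n1, s2, n2 = y
    j = J0 * (1.0 + M * np.sin(OMEGA * t))     # sinusoidally modulated pump
    return [(n1 - 1.0) * s1 + ETA * s2,        # photon equation, with linear coupling
            GAMMA * (j - n1 - n1 * s1),        # carrier equation
            (n2 - 1.0) * s2 + ETA * s1,
            GAMMA * (j - n2 - n2 * s2)]

def largest_lyapunov(y0, t_step=200.0, n_steps=200, d0=1e-8):
    """Benettin-style estimate: track a fiducial and a perturbed trajectory,
    renormalising their separation after each interval of length t_step."""
    rng = np.random.default_rng(0)
    ya = np.asarray(y0, float)
    dy = rng.standard_normal(ya.size)
    yb = ya + d0 * dy / np.linalg.norm(dy)
    log_sum, t = 0.0, 0.0
    for _ in range(n_steps):
        sa = solve_ivp(rhs, (t, t + t_step), ya, rtol=1e-8, atol=1e-10)
        sb = solve_ivp(rhs, (t, t + t_step), yb, rtol=1e-8, atol=1e-10)
        ya, yb = sa.y[:, -1], sb.y[:, -1]
        d = np.linalg.norm(yb - ya)
        log_sum += np.log(d / d0)
        yb = ya + (yb - ya) * (d0 / d)         # renormalise the separation
        t += t_step
    return log_sum / t                          # > 0 suggests chaos, <= 0 periodic motion

lam = largest_lyapunov([1.0, 1.0, 1.1, 1.0])
print(f"estimated largest Lyapunov exponent: {lam:.4f}")
```

Sweeping ETA over a range of coupling strengths and recording the sign of the estimate is one simple way to locate the chaotic-to-periodic transition described in the abstract.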
Abstract:
Preferred structures in surface-pressure variability are investigated in, and compared between, two 100-year simulations of the Hadley Centre climate model HadCM3. In the first (control) simulation, the model is forced with the pre-industrial carbon dioxide concentration (1×CO2), and in the second simulation the model is forced with a doubled CO2 concentration (2×CO2). Daily winter (December-January-February) surface pressures over the Northern Hemisphere are analysed. The identification of preferred patterns is addressed using multivariate mixture models. For the control simulation, two significant flow regimes are obtained at the 5% and 2.5% significance levels within the state space spanned by the leading two principal components. They show a high-pressure centre over the North Pacific/Aleutian Islands associated with a low-pressure centre over the North Atlantic, and its reverse. For the 2×CO2 simulation, no such behaviour is obtained. In a higher-dimensional state space, flow patterns are obtained from both simulations. They are found to be significant at the 1% level for the control simulation and at the 2.5% level for the 2×CO2 simulation. Hence, under CO2 doubling, regime behaviour in the large-scale wave dynamics weakens. Doubling the greenhouse gas concentration affects both the frequency of occurrence of the regimes and their pattern structures. The less frequent regime becomes amplified and the more frequent regime weakens. The largest change is observed over the Pacific, where a significant deepening of the Aleutian low is obtained under CO2 doubling.
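A minimal sketch of the mixture-model approach described above, assuming the daily DJF surface-pressure anomaly maps are already available as a (days × gridpoints) array; the placeholder array, the scikit-learn routines, and the BIC comparison (standing in for the significance testing used in the study) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# slp: (n_days, n_gridpoints) array of DJF surface-pressure anomalies.
rng = np.random.default_rng(0)
slp = rng.standard_normal((9000, 500))     # placeholder; replace with model output

# Project the daily fields onto the leading principal components.
pcs = PCA(n_components=2).fit_transform(slp)

# Fit mixtures with increasing numbers of components; the regime question is
# whether k >= 2 is supported over a single Gaussian (k = 1).
for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, covariance_type="full",
                         n_init=10, random_state=0).fit(pcs)
    print(k, gm.bic(pcs))

# Regime centroids in PC space for a chosen k; multiply by the PCA loadings
# to map them back to surface-pressure patterns.
best = GaussianMixture(n_components=2, n_init=10, random_state=0).fit(pcs)
print(best.means_)
```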
Abstract:
There exist two central measures of turbulent mixing in turbulent stratified fluids that are both caused by molecular diffusion: 1) the dissipation rate D(APE) of available potential energy APE; 2) the turbulent rate of change W_{r,turbulent} of the background gravitational potential energy GPE_r. So far, these two quantities have often been regarded as the same energy conversion, namely the irreversible conversion of APE into GPE_r, owing to the well-known exact equality D(APE) = W_{r,turbulent} for a Boussinesq fluid with a linear equation of state. Recently, however, Tailleux (2009) pointed out that the above equality no longer holds for a thermally stratified compressible fluid, with the ratio ξ = W_{r,turbulent}/D(APE) being generally lower than unity and sometimes even negative for water or seawater, and argued that D(APE) and W_{r,turbulent} actually represent two distinct types of energy conversion: respectively, the dissipation of APE into one particular subcomponent of internal energy called the "dead" internal energy IE_0, and the conversion between GPE_r and a different subcomponent of internal energy called "exergy" IE_exergy. In this paper, the behaviour of the ratio ξ is examined for different stratifications all having the same buoyancy frequency profile N(z), but different vertical profiles of the parameter Υ = αP/(ρC_p), where α is the thermal expansion coefficient, P the hydrostatic pressure, ρ the density, and C_p the specific heat capacity at constant pressure, the equation of state being that of seawater for different particular constant values of salinity. It is found that ξ and W_{r,turbulent} depend critically on the sign and magnitude of dΥ/dz, in contrast with D(APE), which appears largely unaffected by the latter. These results have important consequences for how mixing efficiency should be defined and measured in practice, which are discussed.
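To make the parameter Υ concrete, the sketch below evaluates Υ = αP/(ρC_p) and the sign of dΥ/dz for one illustrative temperature and salinity profile, assuming the GSW-Python (TEOS-10) seawater toolbox is available; the profile itself is made up and is not one of the stratifications examined in the paper.

```python
import numpy as np
import gsw   # TEOS-10 seawater toolbox (assumed available)

# Illustrative vertical profile: fixed salinity, a smooth temperature decrease with
# depth, pressures from 0 to 5000 dbar.
p  = np.linspace(0.0, 5000.0, 200)           # sea pressure [dbar]
z  = -p                                       # rough depth [m] (1 dbar ~ 1 m)
SA = np.full_like(p, 35.0)                    # absolute salinity [g/kg]
t  = 2.0 + 20.0 * np.exp(z / 800.0)           # in-situ temperature [deg C]

CT    = gsw.CT_from_t(SA, t, p)
alpha = gsw.alpha(SA, CT, p)                  # thermal expansion coefficient [1/K]
rho   = gsw.rho(SA, CT, p)                    # in-situ density [kg/m^3]
cp    = gsw.cp_t_exact(SA, t, p)              # specific heat capacity [J/(kg K)]

P   = p * 1.0e4                               # hydrostatic pressure [Pa] (1 dbar = 10^4 Pa)
ups = alpha * P / (rho * cp)                  # the parameter Upsilon = alpha*P/(rho*Cp)
dups_dz = np.gradient(ups, z)

print("Upsilon range:", ups.min(), ups.max())
print("sign of dUpsilon/dz at selected depths:", np.sign(dups_dz[::50]))
```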
Abstract:
We evaluate the profitability and technical efficiency of aquaculture in the Philippines. Farm-level data are used to compare two production systems corresponding to the intensive monoculture of tilapia in freshwater ponds and the extensive polyculture of shrimps and fish in brackish water ponds. Both activities are very lucrative, with brackish water aquaculture achieving the higher level of profit per farm. Stochastic frontier production functions reveal that technical efficiency is low in brackish water aquaculture, with a mean of 53%, explained primarily by the operator's experience and by the frequency of his visits to the farm. In freshwater aquaculture, the farms achieve a mean efficiency level of 83%. The results suggest that the provision of extension services to brackish water fish farms might be a cost-effective way of increasing production and productivity in that sector. By contrast, technological change will have to be the driving force of future productivity growth in freshwater aquaculture.
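For reference, a generic stochastic production frontier of the kind invoked above (the normal/half-normal specification is an assumption here, not necessarily the exact model estimated in the study):

```latex
\ln y_i = \mathbf{x}_i^{\prime}\boldsymbol{\beta} + v_i - u_i,
\qquad v_i \sim N(0,\sigma_v^2), \qquad u_i \sim N^{+}(0,\sigma_u^2),
\qquad \mathrm{TE}_i = \exp(-u_i) \in (0,1].
```

On this reading, the reported mean technical efficiencies say that output averages 53% of the estimated frontier level in brackish water ponds and 83% in freshwater ponds, with the inefficiency term u_i modelled as a function of operator experience and visit frequency.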
Abstract:
1. Suspension feeding by caseless caddisfly larvae (Trichoptera) constitutes a major pathway for energy flow, and strongly influences productivity, in streams and rivers.
2. Consideration of the impact of these animals on lotic ecosystems has been strongly influenced by a single study investigating the efficiency of particle capture of nets built by one species of hydropsychid caddisfly.
3. Using water sampling techniques at appropriate spatial scales, and taking greater consideration of local hydrodynamics than previously, we examined the size-frequency distribution of particles captured by the nets of Hydropsyche siltalai. Our results confirm that capture nets are selective in terms of particle size and, in addition, suggest that this selectivity is for particles likely to provide the most energy.
4. By incorporating estimates of flow diversion around the nets of caseless caddisfly larvae, we show that capture efficiency (CE) is considerably higher than previously estimated, and conclude that more consideration of local hydrodynamics is needed to evaluate the efficiency of particle capture.
5. We use our results to postulate a mechanistic explanation for a recent example of interspecific facilitation, whereby a reduction of near-bed velocities seen in single-species monocultures leads to increased capture rates and local depletion of seston within the region of reduced velocity.
Abstract:
Measuring pollinator performance has become increasingly important with emerging needs for risk assessment in conservation and sustainable agriculture that require multi-year and multi-site comparisons across studies. However, comparing pollinator performance across studies is difficult because of the diversity of concepts and disparate methods in use. Our review of the literature shows many unresolved ambiguities. Two different assessment concepts predominate: the first estimates stigmatic pollen deposition and the underlying pollinator behaviour parameters, while the second estimates the pollinator’s contribution to plant reproductive success, for example in terms of seed set. Both concepts include a number of parameters combined in diverse ways and named under a diversity of synonyms and homonyms. However, these concepts are overlapping because pollen deposition success is the most frequently used proxy for assessing the pollinator’s contribution to plant reproductive success. We analyse the diverse concepts and methods in the context of a new proposed conceptual framework with a modular approach based on pollen deposition, visit frequency, and contribution to seed set relative to the plant’s maximum female reproductive potential. A system of equations is proposed to optimize the balance between idealised theoretical concepts and practical operational methods. Our framework permits comparisons over a range of floral phenotypes, and spatial and temporal scales, because scaling up is based on the same fundamental unit of analysis, the single visit.
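A toy illustration of the modular logic proposed above, combining single-visit pollen deposition, visit frequency, and the plant's maximum female reproductive potential into one relative measure; the function, its arguments, and the simple multiplicative, saturating form are illustrative assumptions, not the framework's exact system of equations.

```python
def relative_contribution(svd_pollen, visit_rate, receptive_hours,
                          grains_per_ovule, ovules_per_flower):
    """Toy estimate of one pollinator taxon's contribution to seed set, expressed
    relative to the flower's maximum female reproductive potential (its ovule number).
    svd_pollen        - pollen grains deposited on the stigma in a single visit
    visit_rate        - visits per flower per hour by the taxon
    receptive_hours   - duration of stigma receptivity
    grains_per_ovule  - grains needed to fertilise one ovule
    ovules_per_flower - maximum female reproductive potential
    """
    grains = svd_pollen * visit_rate * receptive_hours        # scale up the single visit
    ovules_fertilised = min(grains / grains_per_ovule, ovules_per_flower)
    return ovules_fertilised / ovules_per_flower              # value in [0, 1]

# Example with made-up numbers: 10 grains/visit, 0.5 visits per flower-hour,
# 48 h receptivity, 4 grains per ovule, 120 ovules.
print(relative_contribution(10, 0.5, 48, 4, 120))   # -> 0.5
```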
Abstract:
An efficient market incorporates news into prices immediately and fully. Tests for efficiency in financial markets have been undermined by information leakage. We test for efficiency in sports betting markets, real-world markets where news breaks remarkably cleanly. Applying a novel identification strategy to high-frequency data, we investigate the reaction of prices to goals scored on the 'cusp' of half-time. This strategy allows us to separate the market's response to major news (a goal) from its reaction to the continual flow of minor game-time news. On our evidence, prices update swiftly and fully.
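A hypothetical pandas sketch of the kind of event-study comparison implied above: average the odds-implied probability shortly before a cusp-of-half-time goal, just after the second-half restart, and ten minutes later; the column names, window lengths, and data layout are assumptions, not the paper's identification strategy in detail.

```python
import pandas as pd

def reaction_to_cusp_goal(ticks: pd.DataFrame, goal_s: float, restart_s: float):
    """Mean odds-implied home-win probability before a goal scored on the cusp of
    half-time, just after the second-half restart, and ten minutes later.
    ticks: one match's in-play quotes with columns ['seconds', 'p_home']."""
    win = lambda a, b: ticks[(ticks.seconds >= a) & (ticks.seconds < b)].p_home.mean()
    pre        = win(goal_s - 120, goal_s)
    at_restart = win(restart_s, restart_s + 60)
    later      = win(restart_s + 600, restart_s + 660)
    # If the market is efficient, the price jumps fully by the restart and shows no
    # systematic drift afterwards, so post_restart_drift should centre on zero.
    return {"pre_goal": pre, "at_restart": at_restart,
            "ten_min_later": later, "post_restart_drift": later - at_restart}
```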
Abstract:
Single-carrier (SC) block transmission with frequency-domain equalisation (FDE) offers a viable transmission technology for combating the adverse effects of long dispersive channels encountered in high-rate broadband wireless communication systems. However, for high-bandwidth-efficiency and high-power-efficiency systems, the channel can generally be modelled by a Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. For such nonlinear Hammerstein channels, the standard SC-FDE scheme no longer works. This paper advocates a complex-valued (CV) B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. Specifically, we model the nonlinear HPA, which represents the CV static nonlinearity of the Hammerstein channel, by a CV B-spline neural network, and we develop two efficient alternating least squares schemes for estimating the parameters of the Hammerstein channel, including both the channel impulse response coefficients and the parameters of the CV B-spline model. We also use another CV B-spline neural network to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard least squares algorithm based on the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel can then be accomplished by the usual one-tap linear equalisation in the frequency domain followed by the inverse B-spline neural network model applied in the time domain. Extensive simulation results are included to demonstrate the effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels.
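A minimal numpy sketch of the receiver structure described above, with a simple cubic nonlinearity and its fixed-point inverse standing in for the CV B-spline models, and an assumed channel impulse response; it illustrates one-tap FDE followed by time-domain inversion of the static nonlinearity, not the paper's estimation scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Hammerstein channel: a static odd-order nonlinearity standing in for
# the HPA (and for the paper's B-spline models), followed by a dispersive FIR channel.
def hpa(x):
    return x + 0.1 * x * np.abs(x) ** 2            # memoryless nonlinear distortion

def hpa_inverse(z, iters=30):
    x = z.copy()                                    # fixed-point inversion (stands in
    for _ in range(iters):                          # for the inverting B-spline model)
        x = z - 0.1 * x * np.abs(x) ** 2
    return x

N = 256
h = np.array([1.0, 0.45 + 0.2j, 0.2, -0.1j])        # assumed channel impulse response
x = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)  # QPSK

# With a cyclic prefix the linear part reduces to circular convolution.
s = hpa(x)
y = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h, N))
y += 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Receiver: one-tap FDE in the frequency domain, then the inverse nonlinearity in time.
H = np.fft.fft(h, N)
S_hat = np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + 1e-4)   # MMSE-style one tap
x_hat = hpa_inverse(np.fft.ifft(S_hat))

errors = np.sum(np.sign(x_hat.real) != np.sign(x.real)) + \
         np.sum(np.sign(x_hat.imag) != np.sign(x.imag))
print("bit errors out of", 2 * N, ":", errors)
```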
Abstract:
The efficiency of a Wireless Power Transfer (WPT) system is greatly dependent on both the geometry and the operating frequency of the transmitting and receiving structures. By using Coupled Mode Theory (CMT), the figure of merit is calculated for resonantly coupled loop and dipole systems. An in-depth analysis of the figure of merit is performed with respect to the key geometric parameters of the loops and dipoles, along with the resonant frequency, in order to identify the key relationships leading to high-efficiency WPT. For systems consisting of two identical single-turn loops, it is shown that the choice of both the loop radius and the resonant frequency is essential in achieving high-efficiency WPT. For the dipole geometries studied, it is shown that the choice of length is largely irrelevant and that, as a result of their capacitive nature, low-MHz-frequency dipoles are able to produce significantly higher figures of merit than those of the loops considered. The results of the figure-of-merit analysis are used to propose and subsequently compare two mid-range loop and dipole WPT systems of equal size and operating frequency, where it is shown that the dipole system is able to achieve higher efficiencies than the loop system over the distance range examined.
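A short numerical sketch of the coupled-mode-theory quantities discussed above, using the standard CMT expressions for the figure of merit U = κ/sqrt(Γ1·Γ2) and the corresponding maximum power-transfer efficiency; the coupling and loss rates are made-up values for illustration, not results from the paper.

```python
import numpy as np

def figure_of_merit(kappa, gamma1, gamma2):
    """CMT figure of merit U = kappa / sqrt(gamma1 * gamma2), with kappa the
    coupling rate and gamma_i the intrinsic loss rates of the two resonators."""
    return kappa / np.sqrt(gamma1 * gamma2)

def max_efficiency(U):
    """Maximum power-transfer efficiency under an optimally matched load:
    eta = U**2 / (1 + sqrt(1 + U**2))**2."""
    return U ** 2 / (1.0 + np.sqrt(1.0 + U ** 2)) ** 2

# Illustrative coupling and loss rates (s^-1), not values from the paper.
for kappa in (1e3, 1e4, 1e5):
    U = figure_of_merit(kappa, gamma1=2e3, gamma2=2e3)
    print(f"kappa={kappa:.0e}  U={U:.2f}  eta_max={max_efficiency(U):.3f}")
```

Because eta_max increases monotonically with U, comparing loop and dipole systems reduces to comparing their figures of merit at a given separation, which is the logic of the analysis summarised above.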
Abstract:
The El Niño/Southern Oscillation is Earth's most prominent source of interannual climate variability, alternating irregularly between El Niño and La Niña and resulting in global disruption of weather patterns, ecosystems, fisheries and agriculture [1-5]. The 1998–1999 extreme La Niña event that followed the 1997–1998 extreme El Niño event [6] switched extreme El Niño-induced severe droughts to devastating floods in western Pacific countries, and vice versa in the southwestern United States [4,7]. During extreme La Niña events, cold sea surface conditions develop in the central Pacific [8,9], creating an enhanced temperature gradient from the Maritime Continent to the central Pacific. Recent studies have revealed robust changes in El Niño characteristics in response to simulated future greenhouse warming [10-12], but how La Niña will change remains unclear. Here we present climate modelling evidence, from simulations conducted for the Coupled Model Intercomparison Project phase 5 (ref. 13), for a near doubling in the frequency of future extreme La Niña events, from one in every 23 years to one in every 13 years. This occurs because the projected faster mean warming of the Maritime Continent relative to the central Pacific, enhanced upper-ocean vertical temperature gradients, and an increased frequency of extreme El Niño events are conducive to the development of extreme La Niña events. Approximately 75% of the increase occurs in years following extreme El Niño events, thus projecting more frequent swings between opposite extremes from one year to the next.
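A minimal sketch of the kind of frequency count behind the 'one in every 23 years versus one in every 13 years' statement, assuming yearly DJF-mean central-Pacific SST anomaly series are already available; the fixed threshold, variable names, and placeholder series are assumptions, not the paper's definition of an extreme La Niña event.

```python
import numpy as np

def extreme_lanina_frequency(sst_anom, threshold=-1.75):
    """Return (count, one-in-N-years) for DJF-mean SST anomalies below `threshold` (K).
    sst_anom: 1-D array with one DJF-mean anomaly per year."""
    count = int(np.sum(sst_anom < threshold))
    years_per_event = len(sst_anom) / count if count else np.inf
    return count, years_per_event

# Placeholder series standing in for pooled multi-model control and warming periods.
rng = np.random.default_rng(2)
sst_ctrl = rng.normal(0.0, 0.9, 2000)
sst_warm = rng.normal(-0.1, 1.0, 2000)

for name, series in (("control", sst_ctrl), ("warming", sst_warm)):
    n, one_in = extreme_lanina_frequency(series)
    print(f"{name}: {n} events, about one in {one_in:.0f} years")
```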