849 results for adaptive blind source separation method


Relevance: 30.00%

Abstract:

A main method of predicting turbulent flows is to solve the LES equations, here called the traditional LES method. The traditional LES method resolves the motions of eddies larger than the filter scale Δ_n while modeling the unresolved scales smaller than Δ_n. Hughes et al. argued that many shortcomings of the traditional LES approaches are associated with their inability to differentiate successfully between large and small scales. One may expect that a priori scale separation would be better, because it can predict scale interactions well compared with a posteriori scale separation. To this end, a multi-scale method was suggested to perform scale-separated computation. The primary contents of the multiscale method are: 1) a space average is used to differentiate scales; 2) the basic equations comprise the large-scale equations and the fluctuation equations; 3) the large-scale equations and fluctuation equations are coupled through turbulent stress terms. We use the multiscale equations for n = 2, i.e., the large- and small-scale (LSS) equations, to simulate the 3-D evolution of a channel flow and a planar mixing layer flow. Some interesting results are given.
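
As a toy illustration of item 1), the NumPy sketch below applies a one-dimensional space average (a moving box filter) to split a field into a large-scale part and a fluctuation, u = u_bar + u'. It is a minimal sketch of filter-based scale separation on assumed synthetic data, not the LSS solver described above.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

# synthetic 1-D "velocity" field: a large-scale wave plus small-scale fluctuations
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
u = np.sin(x) + 0.1 * rng.standard_normal(x.size)

# a space average over a 32-point window plays the role of the filter scale
u_large = uniform_filter1d(u, size=32, mode="wrap")  # resolved, large-scale part
u_prime = u - u_large                                # fluctuation (small-scale) part

# the decomposition is exact by construction: u = u_large + u_prime
assert np.allclose(u, u_large + u_prime)
```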

Relevance: 30.00%

Abstract:

A quadtree-based adaptive Cartesian grid generator and flow solver were developed. Grid adaptation based on the pressure or density gradient was performed, and a gridless method based on a least-squares formulation was used to treat the wall-surface boundary condition, which is generally difficult to handle on a common Cartesian grid. First, to validate the grid-adaptation technique, the benchmark flows over a forward-facing step and the double Mach reflection were computed. Second, the flows over the NACA 0012 airfoil and a two-element airfoil were calculated to validate the developed gridless method. The computational results indicate that the developed method is suitable for complex flows.
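
The sketch below illustrates only the gradient-based adaptation criterion (the block size, threshold, and density field are assumptions; the quadtree data structure and the gridless wall treatment are not modeled): cells whose normalized density gradient exceeds a threshold are flagged for refinement.

```python
import numpy as np

def flag_cells_for_refinement(rho, dx, dy, threshold=0.2):
    """Flag cells of a Cartesian block where the density gradient is large.

    rho : 2-D array of cell-centered densities
    Returns a boolean mask; True means "refine this cell" (split into four
    children in a quadtree grid).
    """
    drho_dx, drho_dy = np.gradient(rho, dx, dy)
    grad_mag = np.hypot(drho_dx, drho_dy)
    # normalize so the criterion is a fractional variation, not an absolute one
    indicator = grad_mag * min(dx, dy) / (np.abs(rho) + 1e-12)
    return indicator > threshold

# illustrative field: a smeared shock-like jump in x
nx, ny, dx, dy = 64, 64, 1.0 / 64, 1.0 / 64
x = (np.arange(nx) + 0.5) * dx
rho = 1.0 + 0.8 / (1.0 + np.exp(-(x - 0.5) / 0.01))   # 1-D profile
rho = np.repeat(rho[:, None], ny, axis=1)             # extend to 2-D

mask = flag_cells_for_refinement(rho, dx, dy)
print(f"{mask.sum()} of {mask.size} cells flagged for refinement")
```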

Relevance: 30.00%

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set is the seismic spectra of periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration or source-process times, we used Furumoto's phase method and a linear inversion method, in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite fault model. Source finiteness of large shallow earthquakes with rupture on a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture of the earthquake that minimize the error in the inversion. This method is applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes. The method yielded results consistent with the rupture extent inferred from the aftershock area of these earthquakes.
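
A minimal sketch, on assumed synthetic data, of the grid-search flavor of that inversion: for each trial source-process time τ, the boxcar finiteness factor sinc(ωτ/2) is applied to the model spectra, a linear least-squares fit for the source amplitude is made, and the τ that minimizes the residual is kept. This illustrates the idea only; it is not the thesis code, and the spectra and excitation kernel are invented.

```python
import numpy as np

def finiteness_factor(omega, tau):
    """Boxcar source-time function of duration tau: sin(x)/x amplitude factor."""
    return np.sinc(omega * tau / (2.0 * np.pi))   # np.sinc(x) = sin(pi x)/(pi x)

# --- synthetic "observed" amplitude spectra (an assumption, for illustration) ---
periods = np.linspace(150.0, 300.0, 40)           # seconds
omega = 2.0 * np.pi / periods
true_moment, true_tau = 3.0, 80.0                 # arbitrary units, seconds
green = 1.0 / (1.0 + 0.002 * omega**2)            # stand-in excitation kernel
rng = np.random.default_rng(1)
obs = true_moment * green * np.abs(finiteness_factor(omega, true_tau))
obs *= 1.0 + 0.03 * rng.standard_normal(obs.size)

# --- grid search over the source-process time ---
best = (np.inf, None, None)
for tau in np.arange(10.0, 200.0, 2.0):
    design = green * np.abs(finiteness_factor(omega, tau))   # linear in the moment
    moment = design @ obs / (design @ design)                # 1-parameter least squares
    resid = np.linalg.norm(obs - moment * design)
    if resid < best[0]:
        best = (resid, tau, moment)

print(f"estimated source-process time: {best[1]:.0f} s, moment: {best[2]:.2f}")
```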

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (numbers in parentheses), determined by Student's t test, are: Colombia-Ecuador earthquake (12 December 1979), d = 11 km, (9, 24) km; Santa Cruz Is. earthquake (17 July 1980), d = 36 km, (18, 46) km; Samoa earthquake (1 September 1981), d = 15 km, (9, 26) km; Playa Azul, Mexico earthquake (25 October 1981), d = 41 km, (28, 49) km; El Salvador earthquake (19 June 1982), d = 49 km, (41, 55) km; New Ireland earthquake (18 March 1983), d = 75 km, (72, 79) km; Chagos Bank earthquake (30 November 1983), d = 31 km, (16, 41) km; Valparaiso, Chile earthquake (3 March 1985), d = 44 km, (15, 54) km; Michoacan, Mexico earthquake (19 September 1985), d = 24 km, (12, 34) km.

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki and 1977 Sumbawa, Indonesia earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from moment tensor inversion and fault inversion. The observed overtone Rayleigh waves are compared to synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. For both earthquakes, the depths obtained from overtone Rayleigh waves are consistent with those determined from fundamental Rayleigh waves. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance: 30.00%

Abstract:

This thesis discusses various methods for learning and optimization in adaptive systems. Overall, it emphasizes the relationship between optimization, learning, and adaptive systems; and it illustrates the influence of underlying hardware upon the construction of efficient algorithms for learning and optimization. Chapter 1 provides a summary and an overview.

Chapter 2 discusses a method for using feed-forward neural networks to filter the noise out of noise-corrupted signals. The networks use back-propagation learning, but they use it in a way that qualifies as unsupervised learning: the networks adapt based only on the raw input data; there are no external teachers providing information on correct operation during training. The chapter contains an analysis of the learning and develops a simple expression that, based only on the geometry of the network, predicts performance.

Chapter 3 explains a simple model of the piriform cortex, an area in the brain involved in the processing of olfactory information. The model was used to explore the possible effect of acetylcholine on learning and on odor classification. According to the model, the piriform cortex can classify odors better when acetylcholine is present during learning but not present during recall. This is interesting since it suggests that learning and recall might be separate neurochemical modes (corresponding to whether or not acetylcholine is present). When acetylcholine is turned off at all times, even during learning, the model exhibits behavior somewhat similar to Alzheimer's disease, a disease associated with the degeneration of cells that distribute acetylcholine.

Chapters 4, 5, and 6 discuss algorithms appropriate for adaptive systems implemented entirely in analog hardware. The algorithms inject noise into the systems and correlate the noise with the outputs of the systems. This allows them to estimate gradients and to implement noisy versions of gradient descent, without having to calculate gradients explicitly. The methods require only noise generators, adders, multipliers, integrators, and differentiators; and the number of devices needed scales linearly with the number of adjustable parameters in the adaptive systems. With the exception of one global signal, the algorithms require only local information exchange.
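
A minimal numerical sketch of that noise-correlation idea (a generic weight-perturbation scheme, not the specific analog circuits of the thesis): perturb the parameters with noise, correlate the noise with the resulting change in the output error, and use the correlation as a stochastic gradient estimate for noisy gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    """Toy adaptive system: quadratic error surface (stand-in for a real plant)."""
    return float(np.sum((theta - np.array([1.0, -2.0, 0.5])) ** 2))

theta = np.zeros(3)
sigma, lr = 0.1, 0.05

for step in range(2000):
    xi = rng.standard_normal(theta.size)             # injected noise
    # correlate the noise with the change in the output error:
    # E[(J(theta + sigma*xi) - J(theta)) / sigma * xi] ~ grad J(theta)
    delta = (loss(theta + sigma * xi) - loss(theta)) / sigma
    grad_estimate = delta * xi
    theta -= lr * grad_estimate                       # noisy gradient descent

print("estimated parameters:", np.round(theta, 2))    # close to [1.0, -2.0, 0.5]
```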

Relevance: 30.00%

Abstract:

A constrained high-order statistical algorithm is proposed to blindly deconvolve measured spectral data and simultaneously estimate the response function of the instrument. In this algorithm, no prior knowledge is necessary except a proper length for the unit-impulse response. This length can easily be set to the width of the narrowest spectral line by inspecting the measured data. The feasibility of the method has been demonstrated experimentally with measured Raman and absorption spectral data.
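
The sketch below is a generic higher-order-statistics (kurtosis-maximization) deconvolution, not the constrained algorithm of the abstract; the synthetic spectrum, the assumed response length, and the optimizer are all illustrative choices. It shows the core idea: choose a deconvolution filter that makes the output as spiky (high-kurtosis) as possible.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kurtosis

# --- synthetic "measured" spectrum: sharp lines blurred by an instrument response ---
rng = np.random.default_rng(2)
n, L = 400, 21                                     # signal length, assumed response length
true = np.zeros(n)
true[[80, 200, 205, 310]] = [1.0, 0.6, 0.8, 0.9]   # narrow spectral lines
t = np.arange(L) - L // 2
response = np.exp(-0.5 * (t / 3.0) ** 2)
response /= response.sum()
measured = np.convolve(true, response, mode="same") + 0.005 * rng.standard_normal(n)

# --- higher-order-statistics deconvolution: maximize kurtosis of the filtered output ---
def neg_kurtosis(w):
    w = w / np.linalg.norm(w)                      # fix the scale ambiguity
    y = np.convolve(measured, w, mode="same")
    return -kurtosis(y)                            # spikier output => larger kurtosis

w0 = np.zeros(L)
w0[L // 2] = 1.0                                   # start from the identity filter
res = minimize(neg_kurtosis, w0)
restored = np.convolve(measured, res.x / np.linalg.norm(res.x), mode="same")
print("kurtosis before:", round(kurtosis(measured), 2),
      "after:", round(kurtosis(restored), 2))
```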

Relevance: 30.00%

Abstract:

Viruses possess very specific methods of targeting and entering cells. These methods would be extremely useful if they could also be applied to drug delivery, but little is known about the molecular mechanisms of the viral entry process. In order to gain further insight into mechanisms of viral entry, chemical and spectroscopic studies in two systems were conducted, examining hydrophobic protein-lipid interactions during Sendai virus membrane fusion, and the kinetics of bacteriophage λ DNA injection.

Sendai virus glycoprotein interactions with target membranes during the early stages of fusion were examined using time-resolved hydrophobic photoaffinity labeling with the lipid-soluble carbene generator 3-(trifluoromethyl)-3-(m-[^(125)I]iodophenyl)diazirine (TID). The probe was incorporated in target membranes prior to virus addition and photolysis. During Sendai virus fusion with liposomes composed of cardiolipin (CL) or phosphatidylserine (PS), the viral fusion (F) protein is preferentially labeled at early time points, supporting the hypothesis that hydrophobic interaction of the fusion peptide at the N-terminus of the F_1 subunit with the target membrane is an initiating event in fusion. Correlation of the hydrophobic interactions with independently monitored fusion kinetics further supports this conclusion. Separation of proteins after labeling shows that the F_1 subunit, containing the putative hydrophobic fusion sequence, is exclusively labeled, and that the F_2 subunit does not participate in fusion. Labeling shows temperature and pH dependence consistent with a need for protein conformational mobility and with fusion at neutral pH. Higher amounts of labeling during fusion with CL vesicles than during virus-PS vesicle fusion reflect membrane-packing regulation of peptide insertion into target membranes. Labeling of the viral hemagglutinin/neuraminidase (HN) at low pH indicates that HN-mediated fusion is triggered by hydrophobic interactions, after titration of acidic amino acids. HN labeling under nonfusogenic conditions reveals that viral binding may involve hydrophobic as well as electrostatic interactions. Controls for diffusional labeling exclude a major contribution from this source. Labeling during reconstituted Sendai virus envelope-liposome fusion shows that functional reconstitution involves protein retention of the ability to undergo hydrophobic interactions.

Examination of Sendai virus fusion with erythrocyte membranes indicates that hydrophobic interactions also trigger fusion between biological membranes, and that HN binding may involve hydrophobic interactions as well. Labeling of the erythrocyte membranes revealed close membrane association of spectrin, which may play a role in regulating membrane fusion. The data show that hydrophobic fusion protein interaction with both artificial and biological membranes is a triggering event in fusion. Correlation of these results with earlier studies of membrane hydration and fusion kinetics provides a more detailed view of the mechanism of fusion.

The kinetics of DNA injection by bacteriophage λ into liposomes bearing reconstituted receptors were measured using fluorescence spectroscopy. LamB, the bacteriophage receptor, was extracted from bacteria and reconstituted into liposomes by detergent-removal dialysis. The DNA-binding fluorophore ethidium bromide was encapsulated in the liposomes during dialysis. Enhanced fluorescence of ethidium bromide upon binding to injected DNA was monitored and showed that injection is a rapid, one-step process. The bimolecular rate law, determined by the method of initial rates, revealed that injection occurs several times faster than indicated by earlier studies employing indirect assays.

It is hoped that these studies will increase understanding of the mechanisms of virus entry into cells and facilitate the development of virus-mimetic drug delivery strategies.

Relevance: 30.00%

Abstract:

The objective of this study is to improve the stability of the pumping source of an optical parametric amplifier. Analysis by simulation leads to the conclusion that the stability of the second harmonic can be improved by properly choosing the intensity of the fundamental light and the corresponding crystal length. Using noncollinear two-pass second-harmonic generation or tandem second-harmonic generation, the effective crystal length is extended to a proper value, the stability of the second-harmonic output is improved to more than twice that of the fundamental light, and the conversion efficiency is about 70% in the experiment. When the variation of the fundamental light is about 10%, the variation of the second-harmonic intensity is controlled to within 5%. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

A novel technique for adaptive image sequence coding is reported. The number of reference frames and the intervals between them are adjusted to improve the temporal compensability of the input video, and the bits are distributed more efficiently among the different frame types according to the temporal and spatial complexity of the image scene. Experimental results show that this dynamic group-of-pictures (GOP) structure coding scheme is not only feasible but also better than the conventional fixed-GOP method in terms of perceptual quality and SNR. (C) 1996 Society of Photo-Optical Instrumentation Engineers.
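
A minimal sketch of one way such a dynamic GOP decision could work (an illustrative scheme, not the authors' coder): a new reference frame is inserted whenever the mean absolute difference between consecutive frames, a simple temporal-complexity measure, exceeds a threshold, or when the current GOP reaches a maximum length.

```python
import numpy as np

def dynamic_gop_boundaries(frames, mad_threshold=12.0, max_gop=30):
    """Return indices of frames chosen as references (start of a new GOP).

    frames : iterable of 2-D uint8 arrays
    A new GOP starts when the temporal complexity (mean absolute frame
    difference) is high, or when the current GOP reaches max_gop frames.
    """
    refs, since_last = [0], 0
    prev = None
    for i, frame in enumerate(frames):
        if prev is not None:
            mad = np.mean(np.abs(frame.astype(np.int16) - prev.astype(np.int16)))
            since_last += 1
            if mad > mad_threshold or since_last >= max_gop:
                refs.append(i)
                since_last = 0
        prev = frame
    return refs

# illustrative synthetic sequence: a static scene with a "cut" at frame 40
rng = np.random.default_rng(3)
base = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
other = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
frames = [base] * 40 + [other] * 40
print(dynamic_gop_boundaries(frames))   # expect a new reference at the cut (frame 40)
```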

Relevance: 30.00%

Abstract:

A semi-blind equalization method is proposed based on a combination of adaptive and blind equalization techniques, which is more effective for optical signal processing in a time-varying band-limited channel. A numerical simulation of a Poisson-noise OOK optical pulse signal in a band-limited channel using digital equalization techniques is performed, and the results are compared. The semi-blind equalizer is found to match the channel faster and to sustain convergence. In addition, the wavelet de-noising technique is introduced for de-noising in optical signal processing. Criteria for choosing the wavelet basis are obtained; a smooth wavelet with the soft-threshold method performs better. The corresponding numerical simulations are also conducted.
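
A minimal NumPy sketch of the semi-blind idea on an assumed toy channel: the equalizer is first adapted with LMS on a short known training block (the adaptive part) and then switches to decision-directed updates (the blind part). It illustrates the combination only; it is not the paper's algorithm, and it omits the Poisson noise model and the wavelet de-noising.

```python
import numpy as np

rng = np.random.default_rng(4)

# OOK symbols through a simple band-limited FIR channel with additive noise (assumed setup)
symbols = rng.integers(0, 2, size=5000).astype(float)
channel = np.array([0.9, 0.4, 0.2])
received = np.convolve(symbols, channel, mode="full")[: symbols.size]
received += 0.05 * rng.standard_normal(received.size)

n_taps, mu, n_train = 7, 0.02, 1000
w = np.zeros(n_taps)
decisions = np.zeros(symbols.size)

for k in range(n_taps - 1, symbols.size):
    x = received[k - n_taps + 1 : k + 1][::-1]   # equalizer input, newest sample first
    y = w @ x
    decisions[k] = 1.0 if y > 0.5 else 0.0       # OOK slicer
    if k < n_train:
        err = symbols[k] - y                     # adaptive phase: LMS with known training symbols
    else:
        err = decisions[k] - y                   # blind phase: decision-directed LMS
    w += mu * err * x

ser = np.mean(decisions[n_train:] != symbols[n_train:])
print(f"symbol error rate in the blind (decision-directed) phase: {ser:.3%}")
```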

Relevance: 30.00%

Abstract:

A new calibration method for a photoelastic modulator is proposed. The calibration consists of a coarse calibration and a fine calibration. In the coarse calibration, the peak retardation of the photoelastic modulator is set near 1.841 rad. In the fine calibration, the value of the zeroth-order Bessel function is obtained and approximated by a linear equation to calculate the peak retardation directly. In experiments, the usefulness of the calibration method is verified, and the calibration error is less than 0.014 rad. The calibration is immune to intensity fluctuations of the light source and independent of the circuit parameters. The method is especially suitable for calibrating a photoelastic modulator with a peak retardation of less than a half-wavelength. (c) 2007 Optical Society of America.
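
A minimal numerical sketch of the linearization step (illustrative only; the optical setup and the way J_0 is measured are not modeled): around 1.841 rad the zeroth-order Bessel function J_0 is nearly linear, so a measured J_0 value can be mapped back to the peak retardation with a first-order (tangent-line) approximation.

```python
import numpy as np
from scipy.special import j0, j1

delta0 = 1.841                      # coarse-calibration set point, rad
slope = -j1(delta0)                 # d/dx J0(x) = -J1(x)
intercept = j0(delta0)

def retardation_from_j0(j0_measured):
    """Invert the tangent-line approximation J0(x) ~ intercept + slope*(x - delta0)."""
    return delta0 + (j0_measured - intercept) / slope

# check the linear approximation against the exact Bessel function near 1.841 rad
for true_delta in (1.70, 1.80, 1.90, 2.00):
    est = retardation_from_j0(j0(true_delta))
    print(f"true {true_delta:.3f} rad -> estimated {est:.3f} rad "
          f"(error {abs(est - true_delta) * 1e3:.1f} mrad)")
```

Note that 1.841 rad is where J_1 reaches its maximum, so J_0'' = -J_1' vanishes there and the linear approximation of J_0 is especially accurate around that set point.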

Relevance: 30.00%

Abstract:

Procedures for sampling genomic DNA from live billfishes involve manual restraint and tissue excision that can be difficult to carry out and may produce stresses that affect fish survival. We examined the collection of surface mucus as a less invasive alternative method for sourcing genomic DNA by comparing it to autologous muscle tissue samples from Atlantic blue marlin (Makaira nigricans), white marlin (Tetrapturus albidus), sailfish (Istiophorus platypterus), and swordfish (Xiphias gladius). Purified DNA from mucus was comparable to that from muscle and was suitable for conventional polymerase chain reaction, random amplified polymorphic DNA analysis, and mitochondrial and nuclear locus sequencing. The nondestructive and less invasive characteristics of surface mucus collection may promote increased survival of released specimens and may be advantageous for other marine fish genetic studies, particularly those involving large live specimens destined for release.

Relevance: 30.00%

Abstract:

This paper proposes a high-current impedance matching method for narrowband power-line communication (NPLC) systems. The impedance of the power-line channel is time and location variant; therefore, the coupling circuitry and the channel are not usually matched. This not only results in poor signal integrity at the receiving end but also leads to a higher transmission power requirement to secure the communication process. To offset this negative effect, a high-current adaptive impedance circuit that enables impedance matching in power-line networks is reported. The approach taken to match the channel impedance of NPLC systems is based on the General Impedance Converter (GIC). In order to achieve high current, a special coupler in which the inductive impedance can be altered by adjusting a microcontroller-controlled digital resistor is demonstrated. It is shown that the coupler works well with heavy load currents in power-line networks. It works in both low and high transmitting-current modes, and a current as high as 760 mA has been obtained. In addition, compared with other adaptive impedance couplers, the advantages include higher matching resolution and a simple control interface. Experimental results are presented to demonstrate the operation of the coupler. © 2011 IEEE.
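
A minimal control-loop sketch of how such a microcontroller-driven coupler could adapt (purely illustrative: `set_digipot_code` and `measure_coupled_amplitude` are hypothetical stand-ins for the hardware interface, simulated here so the sketch runs, and the plain sweep is not the paper's GIC design).

```python
# --- hypothetical hardware interface, simulated so the sketch is runnable ---
_matched_code = 137                  # pretend this digipot code gives the best match

def set_digipot_code(code: int) -> None:
    """Hypothetical stand-in: write an 8-bit wiper code to the digital resistor."""
    global _current_code
    _current_code = code

def measure_coupled_amplitude() -> float:
    """Hypothetical stand-in: simulated coupled amplitude, peaked at the matched code."""
    return 1.0 / (1.0 + 0.01 * (_current_code - _matched_code) ** 2)

# --- adaptation loop: sweep the codes, keep the setting that couples the most signal ---
def adapt_coupler(n_codes: int = 256) -> int:
    best_code, best_amp = 0, float("-inf")
    for code in range(n_codes):
        set_digipot_code(code)
        amp = measure_coupled_amplitude()
        if amp > best_amp:
            best_code, best_amp = code, amp
    set_digipot_code(best_code)      # leave the coupler at the best setting
    return best_code

print("selected digipot code:", adapt_coupler())   # 137 in this simulation
```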

Relevance: 30.00%

Abstract:

Adaptive cluster sampling (ACS) has been the subject of many publications about sampling aggregated populations. Choosing the criterion value that invokes ACS remains problematic. We address this problem using data from a June 1999 ACS survey for rockfish, specifically for Pacific ocean perch (Sebastes alutus), and for shortraker (S. borealis) and rougheye (S. aleutianus) rockfish combined. Our hypotheses were that ACS would outperform simple random sampling (SRS) for S. alutus and would be more applicable for S. alutus than for S. borealis and S. aleutianus combined because populations of S. alutus are thought to be more aggregated. Three alternatives for choosing a criterion value were investigated. We chose the strategy that yielded the lowest criterion value and simulated the higher criterion values with the data after the survey. Systematic random sampling was conducted across the whole area to determine the lowest criterion value, and then a new systematic random sample was taken with adaptive sampling around each tow that exceeded the fixed criterion value. ACS yielded gains in precision (SE) over SRS. Bootstrapping showed that the distribution of an ACS estimator is approximately normal, whereas the SRS sampling distribution is skewed and bimodal. Simulation showed that a higher criterion value results in substantially less adaptive sampling with little tradeoff in precision. When time-efficiency was examined, ACS quickly added more samples, but sampling edge units caused this efficiency to be lessened, and the gain in efficiency did not measurably affect our conclusions. ACS for S. alutus should be incorporated with a fixed criterion value equal to the top quartile of previously collected survey data. The second hypothesis was confirmed because ACS did not prove to be more effective for S. borealis-S. aleutianus. Overall, our ACS results were not as optimistic as those previously published in the literature, and indicate the need for further study of this sampling method.
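
A minimal sketch of the recommended criterion choice and of how it would invoke adaptive sampling (illustrative only; the catch values are invented and the neighborhood/network bookkeeping of a real ACS design is omitted): the criterion is the top quartile of previously collected survey catches, and any initial tow exceeding it triggers sampling of its neighboring units.

```python
import numpy as np

rng = np.random.default_rng(5)

# previously collected survey catch-per-tow data (invented, highly aggregated)
prior_catches = rng.lognormal(mean=2.0, sigma=1.5, size=400)

# recommended fixed criterion: top quartile of the prior survey data
criterion = np.percentile(prior_catches, 75)

# a new systematic random sample of initial tows (also invented)
initial_tows = rng.lognormal(mean=2.0, sigma=1.5, size=30)

# tows exceeding the criterion invoke adaptive sampling around them
adaptive_tows = np.flatnonzero(initial_tows > criterion)
print(f"criterion = {criterion:.1f}; "
      f"{adaptive_tows.size} of {initial_tows.size} tows trigger adaptive sampling")
```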

Relevance: 30.00%

Abstract:

This paper presents a numerical method for the simulation of flow in turbomachinery blade rows using a solution-adaptive mesh methodology. The fully three-dimensional, compressible, Reynolds-averaged Navier-Stokes equations with k-ε turbulence modeling (and low Reynolds number damping terms) are solved on an unstructured mesh formed from tetrahedral finite volumes. At stages in the solution, mesh refinement is carried out based on flagging cell faces with either a fractional variation of a chosen variable (like Mach number) greater than a given threshold or with a mean value of the chosen variable within a given range. Several solutions are presented, including that for the highly three-dimensional flow associated with the corner stall and secondary flow in a transonic compressor cascade, to demonstrate the potential of the new method.
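
A minimal sketch of the face-flagging criterion (the mesh here is a toy 1-D row of cells, not the unstructured tetrahedral mesh of the paper): a face is flagged when the fractional variation of the chosen variable across it exceeds a threshold, or when the mean of the variable on the two adjacent cells lies within a given range.

```python
import numpy as np

def flag_faces(values, faces, frac_threshold=0.1, mean_range=None):
    """Flag faces for refinement.

    values : array of a chosen variable (e.g. Mach number), one entry per cell
    faces  : list of (cell_i, cell_j) index pairs, one per interior face
    A face is flagged if the fractional variation across it exceeds frac_threshold,
    or if the mean of the two cell values lies inside mean_range = (lo, hi).
    """
    flags = []
    for i, j in faces:
        a, b = values[i], values[j]
        frac_var = abs(a - b) / max(abs(a), abs(b), 1e-12)
        in_range = mean_range is not None and mean_range[0] <= 0.5 * (a + b) <= mean_range[1]
        flags.append(frac_var > frac_threshold or in_range)
    return np.array(flags)

# toy example: Mach number in a row of cells with a sharp jump (e.g. a shock)
mach = np.array([0.60, 0.62, 0.65, 1.25, 1.30, 1.28])
faces = [(k, k + 1) for k in range(len(mach) - 1)]
print(flag_faces(mach, faces, frac_threshold=0.1, mean_range=(0.95, 1.05)))
```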

Relevance: 30.00%

Abstract:

The paper develops the basis for a self-consistent, operationally useful, reactive pollutant dispersion model, for application in urban environments. The model addresses the multi-scale nature of the physical and chemical processes and the interaction between the different scales. The methodology builds on existing techniques of source apportionment in pollutant dispersion and on reduction techniques of detailed chemical mechanisms. © 2005 Published by Elsevier Ltd.