444 results for superposition
Abstract:
A well-known property of orientation-tuned neurons in the visual cortex is that they are suppressed by the superposition of an orthogonal mask. This phenomenon has been explained in terms of physiological constraints (synaptic depression), engineering solutions for components with poor dynamic range (contrast normalization) and fundamental coding strategies for natural images (redundancy reduction). A common but often tacit assumption is that the suppressive process is equally potent at different spatial and temporal scales of analysis. To determine whether it is so, we measured psychophysical cross-orientation masking (XOM) functions for flickering horizontal Gabor stimuli over wide ranges of spatio-temporal frequency and contrast. We found that orthogonal masks raised contrast detection thresholds substantially at low spatial frequencies and high temporal frequencies (high speeds), and that small and unexpected levels of facilitation were evident elsewhere. The data were well fit by a functional model of contrast gain control, where (i) the weight of suppression increased with the ratio of temporal to spatial frequency and (ii) the weight of facilitatory modulation was the same for all conditions, but outcompeted by suppression at higher contrasts. These results (i) provide new constraints for models of primary visual cortex, (ii) associate XOM and facilitation with the transient magno- and sustained parvostreams, respectively, and (iii) reconcile earlier conflicting psychophysical reports on XOM.
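The gain-control account summarized above can be illustrated with a generic divisive-normalization sketch. The exponents, the saturation constant, the criterion and the two suppressive weights below are illustrative assumptions, not the fitted parameters of the study; only the overall form (excitation divided by a contrast pool that includes the orthogonal mask) follows the abstract.

```python
import numpy as np

def gain_control_response(c_test, c_mask, w_xo, p=2.4, q=2.0, z=0.08):
    """Generic divisive gain control: excitation divided by a suppressive pool.

    c_test, c_mask : test and orthogonal-mask contrasts (0..1)
    w_xo           : weight of cross-orientation suppression (assumed here to
                     grow with the temporal:spatial frequency ratio)
    p, q, z        : excitatory/suppressive exponents and saturation constant
                     (placeholder values, not the paper's fitted ones)
    """
    excitation = c_test ** p
    suppression = z + c_test ** q + w_xo * c_mask ** q
    return excitation / suppression

def detection_threshold(c_mask, w_xo, criterion=1e-3):
    """Test contrast at which the response first reaches a fixed criterion."""
    contrasts = np.logspace(-4, 0, 2000)
    responses = gain_control_response(contrasts, c_mask, w_xo)
    return contrasts[np.argmax(responses >= criterion)]

# Threshold elevation by a 32% orthogonal mask for a weakly vs strongly
# suppressed condition (e.g. low vs high temporal:spatial frequency ratio).
for w_xo in (0.1, 2.0):
    unmasked = detection_threshold(0.0, w_xo)
    masked = detection_threshold(0.32, w_xo)
    print(f"w_xo={w_xo}: masked/unmasked threshold ratio = {masked / unmasked:.2f}")
```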
Abstract:
Stimuli from one family of complex motions are defined by their spiral pitch, where cardinal axes represent signed expansion and rotation. Intermediate spirals are represented by intermediate pitches. It is well established that vision contains mechanisms that sum over space and direction to detect these stimuli (Morrone et al., Nature 376 (1995) 507), and one possibility is that four cardinal mechanisms encode the entire family. We extended earlier work (Meese & Harris, Vision Research 41 (2001) 1901) using subthreshold summation of random dot kinematograms and a two-interval forced-choice technique to investigate this possibility. In our main experiments, the spiral pitch of one component was fixed and that of another was varied in steps of 15° relative to the first. Regardless of whether the fixed component was aligned with a cardinal axis or an intermediate spiral, summation to coherence threshold between the two components declined as a function of their difference in spiral pitch. Similar experiments showed that none of the following were critical design features or stimulus parameters for our results: superposition of signal dots, limited-lifetime dots, the presence of speed gradients, stimulus size or the number of dots. A simplex algorithm was used to fit models containing mechanisms spaced at a pitch of either 90° (cardinal model) or 45° (cardinal+ model) and combined using a fourth-root summation rule. For both models, direction half-bandwidth was equated for all mechanisms and was the only free parameter. Only the cardinal+ model could account for the full set of results. We conclude that the detection of complex motion in human vision requires both cardinal and spiral mechanisms with a half-bandwidth of approximately 46°. © 2002 Elsevier Science Ltd. All rights reserved.
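A minimal sketch of the fourth-root (Minkowski) summation rule named above, assuming Gaussian direction tuning over spiral pitch. The bandwidth value, the two mechanism sets and the two-component stimulus are illustrative stand-ins for the paper's fitted model, not its actual parameters.

```python
import numpy as np

def mechanism_response(stimulus_pitches, preferred, half_bandwidth=46.0):
    """Summed Gaussian-tuned response of one mechanism (preferred pitch, deg)
    to unit-strength components at the given spiral pitches."""
    sigma = half_bandwidth / np.sqrt(2.0 * np.log(2.0))   # half-width -> sigma
    wrapped = np.angle(np.exp(1j * np.deg2rad(stimulus_pitches - preferred)))
    return np.sum(np.exp(-0.5 * (np.rad2deg(wrapped) / sigma) ** 2))

def pooled_response(stimulus_pitches, preferred_pitches, beta=4.0):
    """Minkowski (fourth-root) pooling across mechanisms."""
    r = np.array([mechanism_response(stimulus_pitches, p)
                  for p in preferred_pitches])
    return np.sum(r ** beta) ** (1.0 / beta)

cardinal = np.arange(0, 360, 90)        # expansion, rotation, contraction, ...
cardinal_plus = np.arange(0, 360, 45)   # cardinal plus intermediate spirals

# Summation for a component pair as a function of their pitch difference.
for dpitch in (0, 45, 90):
    pair = np.array([0.0, float(dpitch)])
    for name, mechs in (("cardinal", cardinal), ("cardinal+", cardinal_plus)):
        gain = pooled_response(pair, mechs) / pooled_response(pair[:1], mechs)
        print(f"pitch diff {dpitch:3d} deg, {name:9s} model: summation x{gain:.2f}")
```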
Abstract:
It is well known that optic flow - the smooth transformation of the retinal image experienced by a moving observer - contains valuable information about the three-dimensional layout of the environment. From psychophysical and neurophysiological experiments, specialised mechanisms responsive to components of optic flow (sometimes called complex motion) such as expansion and rotation have been inferred. However, it remains unclear (a) whether the visual system has mechanisms for processing the component of deformation and (b) whether there are multiple mechanisms that function independently from each other. Here, we investigate these issues using random-dot patterns and a forced-choice subthreshold summation technique. In experiment 1, we manipulated the size of a test region that was permitted to contain signal and found substantial spatial summation for signal components of translation, expansion, rotation, and deformation embedded in noise. In experiment 2, little or no summation was found for the superposition of orthogonal pairs of complex motion patterns (eg expansion and rotation), consistent with probability summation between pairs of independent detectors. Our results suggest that optic-flow components are detected by mechanisms that are specialised for particular patterns of complex motion.
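As a rough numerical illustration of why "little or no summation" is the signature of independent detectors: under Quick pooling with an exponent near 4 (a value typical of this literature, not quoted in the abstract), probability summation between two equally sensitive independent mechanisms predicts only about a 2^(1/4), roughly 19%, improvement in sensitivity, compared with a factor of 2 for linear summation within a single mechanism.

```python
# Predicted sensitivity improvement for superimposing two orthogonal
# complex-motion components, assuming Quick pooling with exponent beta.
for beta in (3.0, 4.0, 5.0):
    print(f"beta={beta}: probability-summation factor = {2 ** (1 / beta):.3f}")
print("linear summation within one mechanism: factor = 2.000")
```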
Abstract:
Two fundamental laser-physics phenomena, the dissipative soliton and the polarisation of light, have recently been merged into the concept of the vector dissipative soliton (VDS), viz. a train of short pulses with a specific state of polarisation (SOP) and a shape defined by an interplay between anisotropy, gain/loss, dispersion, and nonlinearity. The emergence of VDSs is of fundamental scientific interest and also offers a promising technique for the control of dynamic SOPs, which is important for numerous applications from nano-optics to high-capacity fibre-optic communications. Using a specially designed and developed fast polarimeter, we present here the first experimental results on the SOP evolution of vector soliton molecules, showing periodic polarisation switching between two and three SOPs and the superposition of polarisation switching with SOP precession. The underlying physics is an interplay between the linear and circular birefringence of the laser cavity and the light-induced anisotropy caused by polarisation hole burning.
Abstract:
Hydrogen assisted subcritical cleavage of the ferrite matrix occurs during fatigue of a duplex stainless steel in gaseous hydrogen. The ferrite fails by a cyclic cleavage mechanism and fatigue crack growth rates are independent of frequency between 0.1 and 5 Hz. Macroscopic crack growth rates are controlled by the fraction of ferrite grains cleaving along the crack front, which can be related to the maximum stress intensity, Kmax. A superposition model is developed to predict simultaneously the effects of stress intensity range (ΔK) and K ratio (Kmin/Kmax). The effect of Kmax is rationalised by a local cleavage criterion which requires a critical tensile stress, normal to the {001} cleavage plane, acting over a critical distance within an embrittled zone at the crack tip. © 1991.
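A hedged sketch of a superposition model of this general kind: the growth per cycle is taken as a cleavage contribution weighted by the fraction of ferrite grains cleaving along the crack front, which rises with Kmax, plus the ordinary fatigue contribution over the remaining front. The Paris constants, the cleavage increment and the sigmoidal dependence of the cleaving fraction on Kmax below are placeholders, not the paper's calibrated relations.

```python
import numpy as np

def paris_rate(delta_k, C=1e-11, m=3.0):
    """Baseline (inert-environment) fatigue growth, da/dN = C * dK**m [m/cycle]."""
    return C * delta_k ** m

def cleaving_fraction(k_max, k_onset=15.0, width=5.0):
    """Fraction of ferrite grains cleaving along the crack front, assumed to
    rise sigmoidally with the maximum stress intensity Kmax [MPa*sqrt(m)]."""
    return 1.0 / (1.0 + np.exp(-(k_max - k_onset) / width))

def hydrogen_assisted_rate(delta_k, R, cleavage_increment=2e-7):
    """Superposition: fatigue term plus a cleavage term weighted by the
    cleaving fraction; R = Kmin/Kmax, so Kmax = dK / (1 - R)."""
    k_max = delta_k / (1.0 - R)
    f = cleaving_fraction(k_max)
    return (1.0 - f) * paris_rate(delta_k) + f * cleavage_increment

for R in (0.1, 0.5):
    for dk in (10.0, 20.0, 30.0):
        print(f"R={R}, dK={dk:4.1f}: da/dN = {hydrogen_assisted_rate(dk, R):.2e} m/cycle")
```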
Abstract:
The task of approximating and forecasting a function represented by empirical data was investigated. A certain class of functions, the so-called RFT-transformers, was proposed as the forecasting tool. The least-squares method and superposition are the principal means of composing the generated functions. In addition, special classes of beam dynamics with delay were introduced and investigated to obtain classical results regarding gradients. These results were applied to optimize the RFT-transformers. The effectiveness of the forecast was demonstrated on empirical data from the Forex market.
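The abstract gives no internal details of the RFT-transformers, so the sketch below only illustrates the two generic ingredients it names, superposition and least squares: a forecast built as a superposition of basis functions whose coefficients are fitted to the observed data by least squares and then extrapolated. The basis choice and the synthetic series are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "empirical" series standing in for market data.
t = np.arange(200, dtype=float)
series = 0.01 * t + 0.5 * np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(t.size)

def design_matrix(t):
    """Superposition of simple basis functions: constant, trend, one harmonic."""
    return np.column_stack([
        np.ones_like(t),
        t,
        np.sin(2 * np.pi * t / 24),
        np.cos(2 * np.pi * t / 24),
    ])

# Least-squares fit on the observed window, then extrapolation ("forecast").
coeffs, *_ = np.linalg.lstsq(design_matrix(t), series, rcond=None)
t_future = np.arange(200, 224, dtype=float)
forecast = design_matrix(t_future) @ coeffs

print("fitted coefficients:", np.round(coeffs, 3))
print("first 5 forecast values:", np.round(forecast[:5], 3))
```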
Abstract:
The resonant slow light structures created along a thin-walled optical capillary by nanoscale deformation of its surface can perform comprehensive simultaneous detection and manipulation of microfluidic components. This concept is illustrated with a model of a 0.5 mm long, 5 nm high, triangular bottle resonator created at a 50 μm radius silica capillary containing floating microparticles. The developed theory shows that the microparticle positions can be determined from the bottle resonator spectrum. In addition, the microparticles can be driven and simultaneously positioned at predetermined locations by the localized electromagnetic field created by the optimized superposition of eigenstates of this resonator, thus exhibiting a multicomponent, near-field optical tweezer.
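A toy sketch of the "optimized superposition of eigenstates" idea: given a set of axial eigenmodes of an idealized bottle resonator, choose superposition coefficients by least squares so that the combined field approximates a target intensity spot at a chosen axial position. The harmonic-oscillator-like modes, the mode count and the target width are simplifying assumptions; the triangular bottle resonator of the paper has different eigenstates.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def ho_mode(n, z, z_scale=50.0):
    """Harmonic-oscillator-like axial eigenmode psi_n(z), used as a stand-in
    for the bottle resonator's axial eigenstates (z in micrometres)."""
    x = z / z_scale
    series = np.zeros(n + 1)
    series[n] = 1.0
    psi = hermval(x, series) * np.exp(-0.5 * x ** 2)
    return psi / np.linalg.norm(psi)          # discrete normalization

z = np.linspace(-250.0, 250.0, 2001)          # axial coordinate along the capillary
modes = np.column_stack([ho_mode(n, z) for n in range(12)])

# Target: a localized field spot where a microparticle should be positioned.
z_trap = 80.0
target = np.exp(-0.5 * ((z - z_trap) / 25.0) ** 2)

# Least-squares choice of the superposition coefficients.
coeffs, *_ = np.linalg.lstsq(modes, target, rcond=None)
field = modes @ coeffs

print(f"synthesized |field|^2 peaks at z = {z[np.argmax(field ** 2)]:.1f} um")
```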
Abstract:
AMS Subj. Classification: 47J10, 47H30, 47H10
Abstract:
2000 Mathematics Subject Classification: Primary 60G55; secondary 60G25.
Abstract:
Low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect of the design of practical systems, needed to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. As the first phase of the study, a proper model including all the details of the components was required, so advances in EMC modeling were reviewed and classified into analytical and numerical approaches. The selected approach was finite element (FE) modeling, coupled with the distributed network method, used to generate models of the converter's components and to obtain the frequency behavioral model of the converter. The method is able to reveal the behavior of parasitic elements and higher resonances, which have a critical impact on the study of EMI problems. For the EMC and signature studies of machine drives, equivalent source modeling was investigated. Considering the details of the multi-machine environment, including actual component models, innovations in equivalent source modeling were introduced that decrease the simulation time dramatically. Several models were designed in this study, and the voltage-current cube model and the wire model gave the best results. A GA-based PSO method was used for the optimization. Superposition and suppression of the fields when the components are coupled were also studied and verified. The simulation time of the equivalent model is 80-100 times lower than that of the detailed model, and all tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using the radiated fields. In addition to experimental tests, 3-D FE analysis was coupled with circuit-based software to implement incipient fault cases. The identification was implemented using an artificial neural network (ANN) for seventy different faulty cases, and the simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of component, as well as the faulty component, by comparing the amplitudes of their stray-field harmonics. Identification using the stray fields is nondestructive and can be used for setups that cannot go offline and be dismantled.
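The abstract names a "GA-based PSO method" without further detail; the snippet below is a generic sketch of one common hybridization (standard PSO velocity and position updates with a GA-style mutation applied to particles), run on an arbitrary test objective. It should not be read as the authors' implementation or their actual fitting objective.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    """Arbitrary test function standing in for the model-fitting error."""
    return np.sum(x ** 2, axis=-1)

def ga_based_pso(dim=4, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 mutation_rate=0.1, mutation_scale=0.2):
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), objective(x)
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # PSO update
        x = x + v
        # GA-style mutation: random perturbation of a few coordinates.
        mask = rng.random(x.shape) < mutation_rate
        x = x + mask * mutation_scale * rng.standard_normal(x.shape)

        f = objective(x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()

    return gbest, pbest_f.min()

best_x, best_f = ga_based_pso()
print("best objective value:", best_f)
```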
Abstract:
Contemporary society lives surrounded by various types of risk, which leaves individuals with a constant feeling of fear and insecurity, since negative risks always bring some harm to the population directly or indirectly involved. The city of Natal has several risk areas, especially on the outskirts of the city, owing to the occupation of spaces subject to legal and/or natural physical restrictions and to the lack of urban organization, which increases the vulnerability of the population living in these areas. The principal objective of this research was to map the areas of social vulnerability and natural hazard in Natal, taking into account the interrelationships between social vulnerability and differential exposure to natural hazards. To do so, it was necessary to establish, according to the methodology used, the degree of social vulnerability and of vulnerability to natural hazard to which individuals are subject, in order to characterize the relationship between society and risk. The methodology proposed by Crepani (2001), based on the ecodynamics of Tricart (1977), which classifies risk areas and their degree of vulnerability according to morphodynamic processes, was used to prepare the Natural Physical Vulnerability Index; for the Social Vulnerability Index, an adaptation of the Paulista Social Vulnerability Index prepared by SEADE (Fundação Sistema Estadual de Análise de Dados) of the State of São Paulo was adopted, drawing on data that denote social disadvantage at the census-tract level. The superposition of these two indices was then used to elaborate a Socio-environmental Vulnerability Index, which not only spatializes the risk areas but also indicates the degree of vulnerability of the individuals potentially exposed to natural hazards.
Abstract:
The thesis analyzes in detail a teaching proposal on quantum physics developed by the Physics Education research group of the University of Bologna, in collaboration with the Theoretical Physics research group and with researchers from the CNR of Bologna. The proposal was tested in several fifth-year classes of scientific high schools (Liceo scientifico), and the trials revealed significant cases of students who could not accept quantum theory as a convincing and reliable description of physical reality (cases of non-acceptance), even though they appeared to have understood most of the topics and to have 'appropriated' the learning path as it had been proposed to them. From this evidence, two research questions were formulated: (1) What is the nature of this non-acceptance? Does it reflect an epistemological stance, or is it the expression of a lack of deep understanding? (2) In the latter case, is it possible to identify specific cognitive mechanisms that can hinder or facilitate the acceptance of quantum physics? The analysis of individual student interviews highlighted three main cognitive needs that appear to be involved in the acceptance and learning of quantum physics: the needs for visualizability, comparability and 'reality'. These cognitive needs were then used as analytical tools for examining the different teaching proposals in the literature and the Bologna path itself, in order to highlight their critical points. Finally, some proposals for improving the Bologna path were put forward.
Abstract:
Far-field stresses are those present in a volume of rock prior to excavations being created. Estimates of the orientation and magnitude of far-field stresses, often used in mine design, are generally obtained by single-point measurements of stress, or large-scale, regional trends. Point measurements can be a poor representation of far-field stresses as a result of excavation-induced stresses and geological structures. For these reasons, far-field stress estimates can be associated with high levels of uncertainty. The purpose of this thesis is to investigate the practical feasibility, applications, and limitations of calibrating far-field stress estimates through tunnel deformation measurements captured using LiDAR imaging. A method that estimates the orientation and magnitude of excavation-induced principal stress changes through back-analysis of deformation measurements from LiDAR imaged tunnels was developed and tested using synthetic data. If excavation-induced stress change orientations and magnitudes can be accurately estimated, they can be used in the calibration of far-field stress input to numerical models. LiDAR point clouds have been proven to have a number of underground applications, thus it is desired to explore their use in numerical model calibration. The back-analysis method is founded on the superposition of stresses and requires a two-dimensional numerical model of the deforming tunnel. Principal stress changes of known orientation and magnitude are applied to the model to create calibration curves. Estimation can then be performed by minimizing squared differences between the measured tunnel and sets of calibration curve deformations. In addition to the back-analysis estimation method, a procedure consisting of previously existing techniques to measure tunnel deformation using LiDAR imaging was documented. Under ideal conditions, the back-analysis method estimated principal stress change orientations within ±5° and magnitudes within ±2 MPa. Results were comparable for four different tunnel profile shapes. Preliminary testing using plastic deformation, a rough tunnel profile, and profile occlusions suggests that the method can work under more realistic conditions. The results from this thesis set the groundwork for the continued development of a new, inexpensive, and efficient far-field stress estimate calibration method.
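A minimal sketch of the superposition-based back-analysis idea described above: if the tunnel-wall displacements respond approximately linearly to each in-plane stress-change component, then precomputed "calibration" deformation profiles for unit stress changes can be combined and fitted to the measured profile by minimizing squared differences. The calibration profiles, noise level and "measured" deformation below are synthetic; in a real application they would come from the 2-D numerical model and the LiDAR change-detection step.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)   # positions around the tunnel wall

# Synthetic calibration curves: radial displacement per unit stress change (mm/MPa)
# for three in-plane stress-change components (sigma_x, sigma_y, tau_xy).
calibration = np.column_stack([
    1.0 + 0.8 * np.cos(2 * theta),    # unit sigma_x change
    1.0 - 0.8 * np.cos(2 * theta),    # unit sigma_y change
    1.2 * np.sin(2 * theta),          # unit tau_xy change
])

# "Measured" deformation: a known stress change plus measurement noise.
true_change = np.array([3.0, 1.0, 0.8])                   # MPa
measured = calibration @ true_change + 0.2 * rng.standard_normal(theta.size)

# Least-squares back-analysis (minimize squared differences via superposition).
est, *_ = np.linalg.lstsq(calibration, measured, rcond=None)
sx, sy, txy = est

# Principal stress changes and orientation from the estimated components.
mean, radius = (sx + sy) / 2, np.hypot((sx - sy) / 2, txy)
s1, s2 = mean + radius, mean - radius
angle = 0.5 * np.degrees(np.arctan2(2 * txy, sx - sy))
print(f"estimated components: {np.round(est, 2)} MPa")
print(f"principal stress changes: {s1:.2f}, {s2:.2f} MPa at {angle:.1f} deg")
```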
Abstract:
This paper presents a 1-10 GHz low-noise downconvert mixer RFIC suitable for wideband receivers. A switched transconductor mixing core is adopted to reduce noise at high frequencies. By adding a series inductor to the RF transconductor, a flat 4-5 dB noise figure (NF) and a high gain of 26.5 dB can be achieved over a broad bandwidth out to 10 GHz. A CMOS output amplifier is also integrated on-chip, employing derivative superposition (DS) for high linearity and an OIP3 of 16.5 dBm. The circuit consumes less than 20 mW of dc power and occupies an active chip area of less than 0.2 mm2.
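Derivative superposition improves linearity by adding an auxiliary device whose positive third-order transconductance term cancels the negative third-order term of the main device. The toy polynomial I-V expansions below are only meant to show that cancellation numerically; they are not the transistor models, sizes or bias points of the reported chip.

```python
import numpy as np

# Toy small-signal expansions i(v) = g1*v + g2*v^2 + g3*v^3 around bias.
g_main = np.array([20e-3, 5e-3, -40e-3])   # main FET: negative g3 (illustrative)
g_aux  = np.array([ 2e-3, 3e-3,  10e-3])   # auxiliary FET in weak inversion: positive g3

# Scale the auxiliary device so the composite third-order term cancels.
scale = -g_main[2] / g_aux[2]
g_comp = g_main + scale * g_aux
print("composite (g1, g2, g3):", g_comp)

def relative_im3(g, amplitude=10e-3):
    """Relative third-order intermodulation for a two-tone input:
    IM3 ~ (3/4)*g3*A^3 versus the fundamental ~ g1*A."""
    return abs(0.75 * g[2] * amplitude ** 3 / (g[0] * amplitude))

for name, g in (("main device only", g_main), ("derivative superposition", g_comp)):
    print(f"{name}: relative IM3 = {relative_im3(g):.2e}")
```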
Abstract:
Nonlinear thermo-mechanical properties of advanced polymers are crucial to accurate prediction of the process induced warpage and residual stress of electronics packages. The Fiber Bragg grating (FBG) sensor based method is advanced and implemented to determine temperature and time dependent nonlinear properties. The FBG sensor is embedded in the center of the cylindrical specimen, which deforms together with the specimen. The strains of the specimen at different loading conditions are monitored by the FBG sensor. Two main sources of the warpage are considered: curing induced warpage and coefficient of thermal expansion (CTE) mismatch induced warpage. The effective chemical shrinkage and the equilibrium modulus are needed for the curing induced warpage prediction. Considering various polymeric materials used in microelectronic packages, unique curing setups and procedures are developed for elastomers (extremely low modulus, medium viscosity, room temperature curing), underfill materials (medium modulus, low viscosity, high temperature curing), and epoxy molding compound (EMC: high modulus, high viscosity, high temperature pressure curing), most notably, (1) zero-constraint mold for elastomers; (2) a two-stage curing procedure for underfill materials and (3) an air-cylinder based novel setup for EMC. For the CTE mismatch induced warpage, the temperature dependent CTE and the comprehensive viscoelastic properties are measured. The cured cylindrical specimen with a FBG sensor embedded in the center is further used for viscoelastic property measurements. A uni-axial compressive loading is applied to the specimen to measure the time dependent Young’s modulus. The test is repeated from room temperature to the reflow temperature to capture the time-temperature dependent Young’s modulus. A separate high pressure system is developed for the bulk modulus measurement. The time temperature dependent bulk modulus is measured at the same temperatures as the Young’s modulus. The master curve of the Young’s modulus and bulk modulus of the EMC is created and a single set of the shift factors is determined from the time temperature superposition. The supplementary experiments are conducted to verify the validity of the assumptions associated with the linear viscoelasticity. The measured time-temperature dependent properties are further verified by a shadow moiré and Twyman/Green test.
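A small sketch of the time-temperature superposition step described above: modulus curves measured over a limited time window at several temperatures are shifted along the log-time axis by a single set of shift factors to assemble a master curve at the reference temperature. The synthetic stretched-exponential modulus data and the WLF-style shift-factor constants are illustrative assumptions; the study determines its shift factors from the measured Young's and bulk moduli of the EMC.

```python
import numpy as np

T_REF = 25.0  # reference temperature, deg C

def wlf_shift(T, c1=17.0, c2=120.0):
    """WLF-style shift factor log10(a_T) relative to T_REF (illustrative constants)."""
    return -c1 * (T - T_REF) / (c2 + (T - T_REF))

def relaxation_modulus(t, T, E_glassy=20e9, E_rubbery=0.5e9, tau_ref=1e3):
    """Synthetic stretched-exponential relaxation whose characteristic time
    scales with a_T, standing in for the measured modulus data."""
    a_T = 10.0 ** wlf_shift(T)
    return E_rubbery + (E_glassy - E_rubbery) * np.exp(-((t / (tau_ref * a_T)) ** 0.3))

t_window = np.logspace(0, 3, 20)            # each isotherm covers 1 s to 1000 s
temperatures = [25.0, 75.0, 125.0, 175.0]   # deg C

# Build the master curve: shift each isotherm along log-time by -log10(a_T).
reduced_logt, modulus = [], []
for T in temperatures:
    reduced_logt.append(np.log10(t_window) - wlf_shift(T))   # log10(t / a_T)
    modulus.append(relaxation_modulus(t_window, T))

reduced_logt = np.concatenate(reduced_logt)
modulus = np.concatenate(modulus)
print(f"master curve spans log10(t/a_T) = {reduced_logt.min():.1f} "
      f"to {reduced_logt.max():.1f} (E from {modulus.min():.2e} "
      f"to {modulus.max():.2e} Pa)")
```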