909 results for Interval signals and systems
Abstract:
Power laws, also known as Pareto-like laws or Zipf-like laws, are commonly used to explain a variety of distinct real-world phenomena, often described merely by the signals they produce. In this paper, we study twelve cases, namely worldwide technological accidents, the annual revenue of America's largest private companies, the number of inhabitants in America's largest cities, the magnitude of earthquakes with minimum moment magnitude equal to 4, the total burned area in forest fires occurring in Portugal, the net worth of the richest people in America, the frequency of occurrence of words in the novel Ulysses, by James Joyce, the total number of deaths in worldwide terrorist attacks, the number of linking root domains of the top internet domains, the number of linking root domains of the top internet pages, the total number of human victims of tornadoes occurring in the U.S., and the number of inhabitants in the 60 most populated countries. The results demonstrate the emergence of statistical characteristics very close to power-law behavior. Furthermore, the parametric characterization reveals complex relationships present at a higher level of description.
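As a hedged illustration of the kind of analysis involved (not the authors' actual method, and using purely synthetic data), a power-law exponent can be estimated from a rank-size plot by least squares in log-log coordinates:

```python
import math

def powerlaw_exponent(values):
    """Estimate the exponent of a Zipf-like rank-size relation
    by least squares on the log-log rank-size plot."""
    sizes = sorted(values, reverse=True)
    xs = [math.log(r) for r in range(1, len(sizes) + 1)]
    ys = [math.log(s) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # a pure power law y ~ r**(-alpha) gives slope -alpha

# Synthetic data drawn exactly from y = 1000 * r**(-1.0)
data = [1000 / r for r in range(1, 101)]
```

Real data only approximate a straight line on the log-log plot, which is why the abstract speaks of behavior "very close to" a power law rather than an exact one.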
Abstract:
Higher education students demand fast feedback on their assignments and the opportunity to repeat them when they get them wrong. Here, a computer-based trainer for Signals and Systems students is presented. An application that automatically generates and assesses thousands of numerically different versions of several Signals and Systems problems has been developed. This applet guides the students to find the solution and automatically assesses and grades the students' proposed solutions. The students can use the application to practice solving several types of basic Signals and Systems problems. After selecting the problem type, the student introduces a seed and the application generates a numerical version of the selected problem. The application then presents a sequence of questions that the students must solve, and it automatically assesses their answers. After solving a given problem, the students can repeat the same numerical variation of the problem by introducing the same seed to the application. In this way, they can review their solution with the help of the hints given by the application for wrong solutions. This application can also be used as an automatic assessment tool by the instructor. When the assessment is made in a controlled environment (examination classroom or laboratory), the instructor can use the same seed for all students. Otherwise, different seeds can be assigned to different students so that they solve different numerical variations of the proposed problem, making cheating an arduous task. Given a problem type, the mathematical or conceptual difficulty of the problem can vary depending on the numerical values of its parameters. The application makes it easy to select groups of seeds that yield numerical variations with similar mathematical or conceptual difficulty.
This represents an advantage over randomised task assignment, where students are asked to solve tasks of differing difficulty.
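The seed-driven scheme described above can be sketched in a few lines; this is a minimal illustration with an invented example problem, not the applet's actual implementation:

```python
import random

def generate_problem(seed):
    """Reproducibly generate one numeric variant of a basic
    Signals and Systems exercise from a seed (hypothetical example:
    find the period of x(t) = A*cos(2*pi*f0*t))."""
    rng = random.Random(seed)           # same seed -> same variant
    A = rng.randint(1, 9)               # amplitude
    f0 = rng.choice([2, 4, 5, 10, 20])  # fundamental frequency in Hz
    statement = f"Find the period of x(t) = {A}*cos(2*pi*{f0}*t)."
    answer = 1 / f0                     # period T = 1/f0, in seconds
    return statement, answer

def assess(seed, student_answer, tol=1e-6):
    """Automatic assessment: regenerate the variant from its seed
    and compare against the student's numeric answer."""
    _, answer = generate_problem(seed)
    return abs(student_answer - answer) < tol
```

Because the variant is a pure function of the seed, the instructor can give every student the same seed in an examination, or distinct seeds when cheating is a concern, exactly as the abstract describes.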
Abstract:
In this work we use Interval Mathematics to establish interval counterparts for the main tools used in digital signal processing. More specifically, the approach developed here is oriented to signals, systems, sampling, quantization, coding, and Fourier transforms. A detailed study of some interval arithmetics that handle complex numbers is provided, namely: complex interval (or rectangular) arithmetic, circular complex arithmetic, and interval arithmetic for polar sectors. This leads us to investigate some properties that are relevant for the development of a theory of interval digital signal processing. It is shown that the sets IR and R(C) endowed with any correct arithmetic are not algebraic fields, meaning that those sets do not behave like the real and complex numbers. An alternative to the notion of interval complex width is also provided, and the Kulisch-Miranker order is used to write complex numbers in interval form, enabling operations on endpoints. The use of interval signals and systems is possible thanks to the representation of complex values in floating-point systems. That is, if a number x ∈ R is not representable in a floating-point system F, then it is mapped to an interval [x̲; x̄] such that x̲ is the largest number in F which is smaller than x and x̄ is the smallest number in F which is greater than x. This interval representation is the starting point for definitions of interval signals and systems taking real or complex values. It provides the extension of notions such as causality, stability, time invariance, homogeneity, additivity, and linearity to interval systems. The process of quantization is extended to its interval counterpart. Thereafter, the interval versions of quantization levels, quantization error, and encoded signal are provided. It is shown that the interval quantization levels represent complex quantization levels and that the classical quantization error ranges over the interval quantization error.
An estimation of the interval quantization error and an interval version of the Z-transform (and hence the Fourier transform) are provided. Finally, the results of a Matlab implementation are given.
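The mapping of a non-representable real number to its enclosing interval of floating-point numbers can be sketched directly in Python (a minimal illustration of the idea, not the thesis's construction; requires Python 3.9+ for math.nextafter):

```python
import math
from fractions import Fraction

def enclosing_interval(x):
    """Map a real number x to the tightest enclosing interval of
    floats [lo, hi]: lo is the largest float <= x and hi the smallest
    float >= x (they coincide when x is exactly representable)."""
    f = float(x)                 # rounds x to the nearest float
    if f == x:
        return (f, f)
    if f < x:
        return (f, math.nextafter(f, math.inf))
    return (math.nextafter(f, -math.inf), f)

def interval_add(a, b):
    """Interval addition with outward rounding: each endpoint is
    widened by one ulp so the true sum is guaranteed to be enclosed."""
    return (math.nextafter(a[0] + b[0], -math.inf),
            math.nextafter(a[1] + b[1], math.inf))

# 1/10 has no exact binary floating-point representation, so it is
# enclosed in a nondegenerate interval
x = Fraction(1, 10)
lo, hi = enclosing_interval(x)
```

This interval representation is exactly the starting point the abstract describes for defining interval signals and systems over floating-point hardware.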
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean-square algorithm. Some of these advantages are a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals. This is performed by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals. This is carried out by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm in the mean.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss that the stochastic gradient algorithm, which gives desirable results for finite variance input signals (like frequency modulated signals in noise), does not achieve fast convergence for infinite variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concepts of the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness in comparison to many other algorithms. We also discuss the effect of the impulsiveness of stable processes on generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
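The lattice recursion and its stochastic gradient update can be sketched for a single stage; this is a simplified textbook-style illustration (step size, clipping, and the AR(1) test signal are choices made here, not the thesis's exact algorithm):

```python
import random

def gal_stage(x, mu=0.005):
    """Single stage of a simplified stochastic gradient (adaptive)
    lattice filter: adapt the reflection coefficient k to minimise
    the sum of forward and backward prediction-error powers."""
    k = 0.0
    b_prev = 0.0                        # delayed stage-0 backward error
    for n in range(1, len(x)):
        f0, b0 = x[n], b_prev           # stage-0 errors equal the input
        f1 = f0 + k * b0                # forward prediction error
        b1 = b0 + k * f0                # backward prediction error
        k -= mu * (f1 * b0 + b1 * f0)   # stochastic gradient step
        k = max(-0.999, min(0.999, k))  # keep the stage stable
        b_prev = x[n]
    return k

# Synthetic AR(1) input x(n) = a*x(n-1) + w(n); for this process the
# optimal first reflection coefficient is -a
rng = random.Random(0)
a = 0.8
x = [0.0]
for _ in range(50000):
    x.append(a * x[-1] + rng.gauss(0.0, 1.0))
```

The clipping of k to (-1, 1) illustrates the "easily guaranteed stability" advantage of the lattice structure mentioned above: stability is ensured stage by stage simply by bounding each reflection coefficient.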
Abstract:
Myopia (short-sightedness) is a common ocular disorder of children and young adults. Studies primarily using animal models have shown that the retina controls eye growth and the outer retina is likely to have a key role. One theory is that the proportion of L (long-wavelength-sensitive) and M (medium-wavelength-sensitive) cones is related to myopia development; with a high L/M cone ratio predisposing individuals to myopia. However, not all dichromats (persons with red-green colour vision deficiency) with extreme L/M cone ratios have high refractive errors. We predict that the L/M cone ratio will vary in individuals with normal trichromatic colour vision but not show a systematic difference simply due to refractive error. The aim of this study was to determine if L/M cone ratios in the central 30° are different between myopic and emmetropic young, colour normal adults. Information about L/M cone ratios was determined using the multifocal visual evoked potential (mfVEP). The mfVEP can be used to measure the response of visual cortex to different visual stimuli. The visual stimuli were generated and measurements performed using the Visual Evoked Response Imaging System (VERIS 5.1). The mfVEP was measured when the L and M cone systems were separately stimulated using the method of silent substitution. The method of silent substitution alters the output of three primary lights, each with physically different spectral distributions, to control the excitation of one or more photoreceptor classes without changing the excitation of the unmodulated photoreceptor classes. The stimulus was a dartboard array subtending 30° horizontally and 30° vertically on a calibrated LCD screen. The m-sequence of the stimulus was 2^15 − 1. The N1-P1 amplitude ratio of the mfVEP was used to estimate the L/M cone ratio. Data were collected for 30 young adults (22 to 33 years of age), consisting of 10 emmetropes (+0.3±0.4 D) and 20 myopes (–3.4±1.7 D).
The stimulus and analysis techniques were confirmed using responses of two dichromats. For the entire participant group, the estimated central L/M cone ratios ranged from 0.56 to 1.80 in the central 3°-13° diameter ring and from 0.94 to 1.91 in the more peripheral 13°-30° diameter ring. Within 3°-13°, the mean L/M cone ratio of the emmetropic group was 1.20±0.33 and the mean was similar, 1.20±0.26, for the myopic group. For the 13°-30° ring, the mean L/M cone ratio of the emmetropic group was 1.48±0.27 and it was slightly lower in the myopic group, 1.30±0.27. Independent-samples t-tests indicated no significant difference between the L/M cone ratios of the emmetropic and myopic groups for either the central 3°-13° ring (p=0.986) or the more peripheral 13°-30° ring (p=0.108). The similar distributions of estimated L/M cone ratios in the sample of emmetropes and myopes indicate that there is likely to be no association between the L/M cone ratio and refractive error in humans.
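The silent-substitution computation described above amounts to solving a small linear system: find the modulation of the three primaries that changes the excitation of one cone class while leaving the others unchanged. A minimal sketch with an invented 3x3 cone-capture matrix (the real matrix would be computed from the measured primary spectra and the cone fundamentals):

```python
# Rows: L, M, S cone captures; columns: red, green, blue primaries.
# These numbers are illustrative only.
A = [[0.60, 0.35, 0.05],
     [0.30, 0.55, 0.10],
     [0.02, 0.08, 0.90]]

def det3(m):
    """Determinant of a 3x3 matrix (cofactor expansion)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(A, b):
    """Solve the 3x3 linear system A x = b by Cramer's rule."""
    d = det3(A)
    x = []
    for j in range(3):
        Aj = [[b[i] if c == j else A[i][c] for c in range(3)]
              for i in range(3)]
        x.append(det3(Aj) / d)
    return x

# Primary modulation that changes L-cone excitation by one unit while
# leaving M and S cones unmodulated ("silenced")
dp = solve3(A, [1.0, 0.0, 0.0])
```

Swapping the right-hand side for [0, 1, 0] gives the M-cone-isolating modulation, which is how the L and M systems can be stimulated separately as the abstract describes.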
Abstract:
This paper presents some new criteria for uniform and nonuniform asymptotic stability of equilibria of time-variant differential equations, within a Lyapunov approach. The stability criteria are formulated in terms of certain observability conditions, with the output derived from the Lyapunov function. For some classes of systems, this system-theoretic interpretation proves to be fruitful since, after establishing the invariance of observability under output injection, it enables us to check the stability criteria on a simpler system. This procedure is illustrated for some classical examples.
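An illustrative form of such a criterion (a sketch of the general idea, not the paper's exact statement): take a Lyapunov function V with nonpositive derivative along solutions and regard that derivative as an output,

```latex
\dot{x} = f(t, x), \qquad
y(t) = \dot{V}(t, x(t))
     = \frac{\partial V}{\partial t}(t, x)
     + \frac{\partial V}{\partial x}(t, x)\, f(t, x) \le 0 .
```

Roughly, if the system with output y is observable in the sense that y ≡ 0 on an interval forces the state to the equilibrium, then V strictly decreases along every other solution and asymptotic stability follows, in the spirit of LaSalle-type arguments; invariance of observability under output injection then allows this check to be carried out on a simpler system.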
Abstract:
Communication signal processing applications often involve complex-valued (CV) functional representations of signals and systems. CV artificial neural networks have been studied theoretically and applied widely in nonlinear signal and data processing [1–11]. Note that most artificial neural networks cannot be automatically extended from the real-valued (RV) domain to the CV domain, because the resulting model would in general violate the Cauchy-Riemann conditions, and this means that the training algorithms become unusable. A number of analytic functions were introduced for fully CV multilayer perceptrons (MLP) [4]. A fully CV radial basis function (RBF) network was introduced in [8] for regression and classification applications. Alternatively, the problem can be avoided by using two RV artificial neural networks, one processing the real part and the other processing the imaginary part of the CV signal/system. An even more challenging problem is the inverse of a CV
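The Cauchy-Riemann obstruction mentioned above can be made concrete numerically. A small sketch (central finite differences and the test functions are choices made here for illustration):

```python
def cauchy_riemann_residual(f, z, h=1e-6):
    """Numerically test the Cauchy-Riemann conditions for f = u + iv
    at the point z: holomorphy is equivalent to df/dy = i * df/dx,
    where the derivatives are taken along the real and imaginary axes."""
    dfdx = (f(z + h) - f(z - h)) / (2 * h)           # derivative along x
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h) # derivative along y
    return abs(dfdy - 1j * dfdx)  # zero iff the CR conditions hold

# z**2 is analytic, so its residual is ~0; conj(z) violates the
# Cauchy-Riemann conditions, which is why naive RV-to-CV extensions
# of gradient-based training algorithms break down
```

The split-RV workaround the abstract mentions sidesteps this entirely: each of the two real-valued networks only ever differentiates real functions of real variables, so no analyticity is required.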
Abstract:
The diversity of floral forms has long been considered a prime example of radiation through natural selection. However, little is still known about the evolution of floral traits, a critical piece of evidence for the understanding of the processes that may have driven flower evolution. We studied the pattern of evolution of quantitative floral traits in a group of Neotropical lianas (Bignonieae, Bignoniaceae) and used a time-calibrated phylogeny as a basis to: (1) test for phylogenetic signal in 16 continuous floral traits; (2) evaluate the rate of evolution in those traits; and (3) reconstruct the ancestral state of the individual traits. Variation in floral traits among extant species of Bignonieae was highly explained by their phylogenetic history. However, opposite signals were found in floral traits associated with the attraction of pollinators (calyx and corolla) and pollen transfer (androecium and gynoecium), suggesting a differential role of selection in different floral whorls. Phylogenetic independent contrasts indicate that traits evolved at different rates, whereas ancestral character state reconstructions indicate that the ancestral size of most flower traits was larger than the mean observed sizes of the same traits in extant species. The implications of these patterns for the reproductive biology of Bignonieae are discussed. (C) 2011 The Linnean Society of London, Biological Journal of the Linnean Society, 2011, 102, 378-390.
Abstract:
The dichotomy between two groups of workers on neuroelectrical activity is retarding progress. To study the interrelations between neuronal unit spike activity and compound field potentials of cell populations is both unfashionable and technically challenging. Neither of the mutual disparagements is justified: that spikes are to higher functions as the alphabet is to Shakespeare and that slow field potentials are irrelevant epiphenomena. Spikes are not the basis of the neural code but of multiple codes that coexist with nonspike codes. Field potentials are mainly information-rich signs of underlying processes, but sometimes they are also signals for neighboring cells, that is, they exert influence. This paper concerns opportunities for new research with many channels of wide-band (spike and slow wave) recording. A wealth of structure in time and three-dimensional space is different at each scale—micro-, meso-, and macroactivity. The depth of our ignorance is emphasized to underline the opportunities for uncovering new principles. We cannot currently estimate the relative importance of spikes and synaptic communication vs. extrasynaptic graded signals. In spite of a preponderance of literature on the former, we must consider the latter as probably important. We are in a primitive stage of looking at the time series of wide-band voltages in the compound, local field, potentials and of choosing descriptors that discriminate appropriately among brain loci, states (functions), stages (ontogeny, senescence), and taxa (evolution). This is not surprising, since the brains in higher species are surely the most complex systems known. They must be the greatest reservoir of new discoveries in nature. The complexity should not deter us, but a dose of humility can stimulate the flow of imaginative juices.
Abstract:
BACKGROUND: Multiple recent genome-wide association studies (GWAS) have identified a single nucleotide polymorphism (SNP), rs10771399, at 12p11 that is associated with breast cancer risk. METHOD: We performed a fine-scale mapping study of a 700 kb region including 441 genotyped and more than 1300 imputed genetic variants in 48,155 cases and 43,612 controls of European descent, 6269 cases and 6624 controls of East Asian descent and 1116 cases and 932 controls of African descent in the Breast Cancer Association Consortium (BCAC; http://bcac.ccge.medschl.cam.ac.uk/ ), and in 15,252 BRCA1 mutation carriers in the Consortium of Investigators of Modifiers of BRCA1/2 (CIMBA). Stepwise regression analyses were performed to identify independent association signals. Data from the Encyclopedia of DNA Elements project (ENCODE) and the Cancer Genome Atlas (TCGA) were used for functional annotation. RESULTS: Analysis of data from European descendants found evidence for four independent association signals at 12p11, represented by rs7297051 (odds ratio (OR) = 1.09, 95% confidence interval (CI) = 1.06-1.12; P = 3 × 10^-9), rs805510 (OR = 1.08, 95% CI = 1.04-1.12, P = 2 × 10^-5), and rs1871152 (OR = 1.04, 95% CI = 1.02-1.06; P = 2 × 10^-4) identified in the general populations, and rs113824616 (P = 7 × 10^-5) identified in the meta-analysis of BCAC ER-negative cases and BRCA1 mutation carriers. SNPs rs7297051, rs805510 and rs113824616 were also associated with breast cancer risk at P < 0.05 in East Asians, but none of the associations were statistically significant in African descendants. Multiple candidate functional variants are located in putative enhancer sequences. Chromatin interaction data suggested that PTHLH was the likely target gene of these enhancers. Of the six variants with the strongest evidence of potential functionality, rs11049453 was statistically significantly associated with the expression of PTHLH and its nearby gene CCDC91 at P < 0.05.
CONCLUSION: This study identified four independent association signals at 12p11 and revealed potentially functional variants, providing additional insights into the underlying biological mechanism(s) for the association observed between variants at 12p11 and breast cancer risk.
Abstract:
Purpose: To determine (a) the effect of different sunglass tint colorations on traffic signal detection and recognition for color normal and color deficient observers, and (b) the adequacy of coloration requirements in current sunglass standards. Methods: Twenty color-normal and 49 color-deficient males performed a tracking task while wearing sunglasses of different colorations (clear, gray, green, yellow-green, yellow-brown, red-brown). At random intervals, simulated traffic light signals were presented against a white background at 5° to the right or left and observers were instructed to identify signal color (red/yellow/green) by pressing a response button as quickly as possible; response times and response errors were recorded. Results: Signal color and sunglass tint had significant effects on response times and error rates (p < 0.05), with significant between-color group differences and interaction effects. Response times for color deficient people were considerably slower than those of color normals for both red and yellow signals for all sunglass tints, but for green signals they were only noticeably slower with the green and yellow-green lenses. For most of the color deficient groups, there were recognition errors for yellow signals combined with the yellow-green and green tints. In addition, deuteranopes had problems for red signals combined with red-brown and yellow-brown tints, and protanopes had problems for green signals combined with the green tint and for red signals combined with the red-brown tint. Conclusions: Many sunglass tints currently permitted for drivers and riders cause a measurable decrement in the ability of color deficient observers to detect and recognize traffic signals. In general, combinations of signals and sunglasses of similar colors are of particular concern.
This is prima facie evidence of a risk in the use of these tints for driving and cautions against the relaxation of coloration limits in sunglasses beyond those represented in the study.