46 results for Pair echos
Abstract:
We report experimental and numerical results showing how certain N-dimensional dynamical systems are able to exhibit complex time evolutions based on the nonlinear combination of N-1 oscillation modes. The experiments have been done with a family of thermo-optical systems of effective dynamical dimension varying from 1 to 6. The corresponding mathematical model is an N-dimensional vector field based on a scalar-valued nonlinear function of a single variable that is a linear combination of all the dynamic variables. We show how the complex evolutions appear associated with the occurrence of successive Hopf bifurcations in a saddle-node pair of fixed points, until the instability capabilities of the pair in N dimensions are exhausted. For this reason the observed phenomenon is denoted the full instability behavior of the dynamical system. The process through which the attractor responsible for the observed time evolution is formed may be rather complex and difficult to characterize. Nevertheless, the well-organized structure of the time signals suggests some generic mechanism of nonlinear mode mixing, which we associate with the cluster of invariant sets emerging from the pair of fixed points and with the influence of the neighboring saddle sets on the flow near the attractor. Invariant tori are likely generated during the development of the full instability, and the global process may be considered a generalized Landau scenario for the emergence of irregular and complex behavior through the nonlinear superposition of oscillatory motions.
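The abstract specifies only the general structure of the model: an N-dimensional vector field driven by a scalar nonlinear function of a single linear combination of the variables. As a purely illustrative sketch (the nonlinearity, couplings and parameters below are assumptions, not the authors' equations), such a system can be integrated numerically as follows:

```python
# Illustrative sketch only: the abstract does not give the exact equations, so
# the vector field below (a linear relaxation term plus feedback through a
# scalar nonlinearity F of a linear combination of all variables) is an
# assumed form with made-up parameters.
import numpy as np
from scipy.integrate import solve_ivp

N = 6                                    # effective dynamical dimension
rng = np.random.default_rng(0)
c = rng.uniform(-1.0, 1.0, N)            # weights of the linear combination
a = rng.uniform(0.5, 1.5, N)             # coupling of the feedback to each variable

def F(u):
    """Assumed scalar nonlinearity (not the paper's)."""
    return np.sin(u) ** 2

def field(t, x):
    u = c @ x                            # single scalar argument of F
    return -x + a * F(u)                 # N-dimensional vector field

sol = solve_ivp(field, (0.0, 500.0), rng.standard_normal(N), max_step=0.05)
print(sol.y[:, -5:])                     # late-time samples of the trajectory
```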
Abstract:
Background: The cooperative interaction between transcription factors has a decisive role in the control of the fate of the eukaryotic cell. Computational approaches for characterizing cooperative transcription factors in yeast, however, are based on different rationales and provide a low overlap between their results. Because the wealth of information contained in protein interaction networks and regulatory networks has proven highly effective in elucidating functional relationships between proteins, we compared different sets of cooperative transcription factor pairs (predicted by four different computational methods) within the frame of those networks. Results: Our results show that the overlap between the sets of cooperative transcription factors predicted by the different methods is low yet significant. Cooperative transcription factors predicted by all methods are closer and more clustered in the protein interaction network than expected by chance. On the other hand, members of a cooperative transcription factor pair neither appear to regulate each other nor share similar regulatory inputs, although they do regulate similar groups of target genes. Conclusion: Despite the different definitions of transcriptional cooperativity and the different computational approaches used to characterize cooperativity between transcription factors, the analysis of their roles in the framework of the protein interaction network and the regulatory network indicates a common denominator for the predictions under study. The knowledge of the shared topological properties of cooperative transcription factor pairs in both networks can be useful not only for designing better prediction methods but also for better understanding the complexities of transcriptional control in eukaryotes.
Abstract:
We investigate the problem of finding minimum-distortion policies for streaming delay-sensitive but distortion-tolerant data. We consider cross-layer approaches which exploit the coupling between the presentation and transport layers. We make the natural assumption that the distortion function is convex and decreasing. We focus on a single source-destination pair and analytically find the optimum transmission policy when transmission takes place over an error-free channel. This optimum policy turns out to be independent of the exact form of the convex and decreasing distortion function. Then, for a packet-erasure channel, we analytically find the optimum open-loop transmission policy, which is also independent of the form of the convex distortion function. We then find computationally efficient closed-loop heuristic policies and show, through numerical evaluation, that they outperform the open-loop policy and achieve near-optimal performance.
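The abstract does not give the exact model, but one plausible reading of the optimization it describes (all symbols below are assumptions introduced for illustration, not the paper's notation) is a schedule that minimizes expected distortion, with the distortion of each data unit a convex, decreasing function of the amount delivered by its deadline:

```latex
% Hypothetical formalization; \pi is a transmission policy, r_k(\pi) the data
% delivered for unit k by its deadline, and D a convex, decreasing distortion.
\min_{\pi}\ \mathbb{E}\!\left[\sum_{k} D\bigl(r_k(\pi)\bigr)\right],
\qquad D' < 0,\quad D'' \ge 0 .
```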
Abstract:
The quality of the time dedicated to child care has potential positive effects on children’s life chances. However, the determinants of parental time allocation to child care remain largely unexplored, particularly in contexts undergoing rapid family change such as Spain. We assess two alternative explanations for differences between parents in the amount of time spent with children. The first, based on the relative resources hypothesis, links variation in time spent with children to the attributes (occupation, education or income) of one partner relative to the other. The second, derived from the social status hypothesis, suggests that variation in time spent with children is attributable to the relative social position of the pair (i.e. higher-status couples spend more time with children regardless of within-couple differences). To investigate these questions, we use a sample of adults (18-50) from the Spanish Time Use Survey (STUS) 2002-2003 (n=7,438). Limiting the analysis to adults who are married or in consensual unions, the STUS allows us to assess both the quantity and quality of parental time spent with children. We find little support for the “relative resources hypothesis”. Instead, consistent with the “social status hypothesis”, we find that time spent on child care is attributable to the social position of the couple, regardless of between-parent differences in income or education.
Abstract:
Our main objective has been to study the development of discursive competences in students of foreign origin that contribute to understanding and addressing their social and educational needs in the multilingual (mathematics) classroom. With this aim, we carried out scientific actions at two levels: with teachers and with students. Regarding the characterization of the normative complexity of the multilingual mathematics classroom, as planned: 1) we have exemplified various social and linguistic norms present in the development of mathematical practices in the classroom; and 2) we have particularized the phenomenon of the diversity of social and linguistic norms in cases of secondary-school class sessions. Regarding the documentation of indicators of progress in the understanding of the social and linguistic norms of the classroom, and in the development of discursive competences of adaptation to these norms, as planned: 1) we have characterized strategies for teaching and learning social and linguistic norms in situations of social interaction in small and large groups; and 2) we have constructed criteria for monitoring the degree of development of discursive competences of adaptation to the norms, both for teachers and for students. Finally, regarding the analysis of the contribution of discursive competences to the construction of shared social, linguistic and mathematical identities: 1) we have studied the uses that students of foreign origin make of school norms linked to social, linguistic and mathematical practices; and 2) we have examined the construction of shared social, linguistic and mathematical meanings in a wide range of processes of adaptation to classroom norms orchestrated by the teachers in our sample.
Abstract:
In this paper we propose the infimum of the Arrow-Pratt index of absolute risk aversion as a measure of global risk aversion of a utility function. We then show that, for any given arbitrary pair of distributions, there exists a threshold level of global risk aversion such that all increasing concave utility functions with at least as much global risk aversion would rank the two distributions in the same way. Furthermore, this threshold level is sharp in the sense that, for any lower level of global risk aversion, we can find two utility functions in this class yielding opposite preference relations for the two distributions.
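For reference, the Arrow-Pratt index of absolute risk aversion of a twice-differentiable utility function u, and the proposed global measure (its infimum over wealth levels; the symbol rho is our notation, not the paper's), can be written as:

```latex
A_u(x) \;=\; -\,\frac{u''(x)}{u'(x)},
\qquad
\rho(u) \;=\; \inf_{x} A_u(x).
```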
Abstract:
Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds do not tell anything about the rate of decrease of the error for a {\sl fixed} distribution-concept pair. In this paper we investigate minimax lower bounds in this stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any {\sl sequence} of learning rules $\{g_n\}$, there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for {\sl infinitely many} $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
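In symbols (with $L(g_n)$ the probability of error and $c$, $c'$ unspecified constants; the notation is ours), the two kinds of bounds contrasted above read roughly as:

```latex
% Classical form: for every rule g_n the hard pair may depend on n,
\sup_{(X,\,C)} \mathbb{E}\,L(g_n) \;\ge\; c\,\frac{V}{n}\,;
% strong form: one fixed pair is hard for the whole sequence \{g_n\},
\exists\,(X,C):\quad \mathbb{E}\,L(g_n) \;\ge\; c'\,\frac{k}{n}
\quad\text{for infinitely many } n .
```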
Abstract:
Graphical displays which show inter-sample distances are important for the interpretation and presentation of multivariate data. Except when the displays are two-dimensional, however, they are often difficult to visualize as a whole. A device, based on multidimensional unfolding, is described for presenting some intrinsically high-dimensional displays in fewer, usually two, dimensions. This goal is achieved by representing each sample by a pair of points, say $R_i$ and $r_i$, so that a theoretical distance between the $i$-th and $j$-th samples is represented twice, once by the distance between $R_i$ and $r_j$ and once by the distance between $R_j$ and $r_i$. Self-distances between $R_i$ and $r_i$ need not be zero. The mathematical conditions for unfolding to exhibit symmetry are established. Algorithms for finding approximate fits, not constrained to be symmetric, are discussed and some examples are given.
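A minimal sketch of the unfolding idea (not the authors' algorithm; the least-squares loss and optimizer below are assumptions) is to place, for each sample, two points $R_i$ and $r_i$ in the plane so that every off-diagonal target distance is matched by the two cross distances:

```python
# Sketch of the unfolding idea described above, not the paper's algorithm:
# each sample i gets two points R[i] and r[i] in 2-D, and the target distance
# d[i, j] is matched twice, by |R_i - r_j| and by |R_j - r_i|.
import numpy as np
from scipy.optimize import minimize

def unfold(d, dim=2, seed=0):
    n = d.shape[0]
    rng = np.random.default_rng(seed)
    x0 = rng.standard_normal(2 * n * dim)       # stacked R and r coordinates

    def stress(x):
        R = x[: n * dim].reshape(n, dim)
        r = x[n * dim:].reshape(n, dim)
        fit = np.linalg.norm(R[:, None, :] - r[None, :, :], axis=2)
        off = ~np.eye(n, dtype=bool)            # self-distances need not be zero
        return np.sum((fit[off] - d[off]) ** 2)

    res = minimize(stress, x0, method="L-BFGS-B")
    x = res.x
    return x[: n * dim].reshape(n, dim), x[n * dim:].reshape(n, dim)

# toy usage with an arbitrary symmetric distance matrix
d = np.array([[0.0, 1.0, 2.0], [1.0, 0.0, 1.5], [2.0, 1.5, 0.0]])
R, r = unfold(d)
```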
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
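A heavily simplified sketch of the two-part scheme (finite candidate lists stand in for the empirical covers, and the generic penalty term is an assumption; the paper's covers, radii and penalties are data-dependent and more refined) might look like this:

```python
# Simplified sketch of the two-part selection scheme described above.
import numpy as np

def select(classes, X, Y, penalty_const=1.0):
    m = len(Y) // 2
    X1, Y1, X2, Y2 = X[:m], Y[:m], X[m:], Y[m:]
    best, best_score = None, np.inf
    for cover in classes:
        # candidate of this class: smallest empirical risk on the first half
        risks1 = [np.mean((f(X1) - Y1) ** 2) for f in cover]
        f_hat = cover[int(np.argmin(risks1))]
        # generic complexity penalty based on the size of the "cover" (assumed form)
        complexity = penalty_const * np.sqrt(np.log(len(cover)) / (len(Y) - m))
        # penalized empirical risk on the second half
        score = np.mean((f_hat(X2) - Y2) ** 2) + complexity
        if score < best_score:
            best, best_score = f_hat, score
    return best

# toy usage: two "classes" of constant and linear rules
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 200)
Y = 2 * X + 0.1 * rng.standard_normal(200)
constants = [(lambda c: (lambda x: np.full_like(x, c)))(c) for c in np.linspace(-3, 3, 7)]
linears = [(lambda a: (lambda x: a * x))(a) for a in np.linspace(-3, 3, 13)]
f_best = select([constants, linears], X, Y)
```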
Abstract:
It is generally accepted that the extent of phenotypic change between human and great apes is dissonant with the rate of molecular change. Between these two groups, proteins are virtually identical, cytogenetically there are few rearrangements that distinguish ape-human chromosomes, and rates of single-base-pair change and retrotransposon activity have slowed particularly within hominid lineages when compared to rodents or monkeys. Studies of gene family evolution indicate that gene loss and gain are enriched within the primate lineage. Here, we perform a systematic analysis of duplication content of four primate genomes (macaque, orang-utan, chimpanzee and human) in an effort to understand the pattern and rates of genomic duplication during hominid evolution. We find that the ancestral branch leading to human and African great apes shows the most significant increase in duplication activity both in terms of base pairs and in terms of events. This duplication acceleration within the ancestral species is significant when compared to lineage-specific rate estimates even after accounting for copy-number polymorphism and homoplasy. We discover striking examples of recurrent and independent gene-containing duplications within the gorilla and chimpanzee that are absent in the human lineage. Our results suggest that the evolutionary properties of copy-number mutation differ significantly from other forms of genetic mutation and, in contrast to the hominid slowdown of single-base-pair mutations, there has been a genomic burst of duplication activity at this period during human evolution.
Abstract:
The main information sources for studying a particular piece of music are symbolic scores and audio recordings. These are complementary representations of the piece, and it is very useful to have a proper linking of the musically meaningful events between the two. For the case of makam music of Turkey, linking the available scores with the corresponding audio recordings requires taking the specificities of this music into account, such as the particular tunings, the extensive usage of non-notated expressive elements, and the way in which the performer repeats fragments of the score. Moreover, for most of the pieces of the classical repertoire, there is no score written by the original composer. In this paper, we propose a methodology to pair sections of a score to the corresponding fragments of audio recording performances. The pitch information obtained from both sources is used as the common representation to be paired. From an audio recording, fundamental frequency estimation and tuning analysis are done to compute a pitch contour. From the corresponding score, symbolic note names and durations are converted to a synthetic pitch contour. Then, a linking operation is performed between these pitch contours in order to find the best correspondences. The method is tested on a dataset of 11 compositions spanning 44 audio recordings, which are mostly monophonic. F3-scores of 82% and 89% are obtained with automatic and semi-automatic karar detection, respectively, showing that the methodology may give us a needed tool for further computational tasks such as form analysis, audio-score alignment and makam recognition.
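The linking step itself is not detailed in the abstract; as a crude stand-in, the sketch below simply slides a synthetic score contour over an audio pitch track (both in cents relative to the same tonic, the karar) and picks the offset with the smallest deviation. All names and data are hypothetical:

```python
# Illustrative stand-in for the linking step, not the paper's own procedure.
# Both contours are assumed to be sampled on the same time grid and referenced
# to the same tonic (karar).
import numpy as np

def hz_to_cents(f0, tonic_hz):
    f0 = np.asarray(f0, dtype=float)
    return 1200.0 * np.log2(f0 / tonic_hz)

def link_section(score_cents, audio_cents):
    n, m = len(score_cents), len(audio_cents)
    costs = [np.mean(np.abs(audio_cents[k:k + n] - score_cents))
             for k in range(m - n + 1)]
    k_best = int(np.argmin(costs))
    return k_best, costs[k_best]          # start index in the audio and its cost

# toy usage with a made-up tonic and contours
tonic = 220.0
audio = hz_to_cents(220.0 * 2 ** np.r_[np.zeros(50), np.linspace(0, 0.4, 100), np.zeros(50)], tonic)
score = hz_to_cents(220.0 * 2 ** np.linspace(0, 0.4, 100), tonic)
start, cost = link_section(score, audio)
```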
Abstract:
We present a rule-based, Huet-style anti-unification algorithm for simply-typed lambda-terms in η-long β-normal form, which computes a least general higher-order pattern generalization. For a pair of arbitrary terms of the same type, such a generalization always exists and is unique modulo α-equivalence and variable renaming. The algorithm computes it in cubic time within linear space. It has been implemented and the code is freely available.
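As a toy illustration of the underlying idea only (the paper's algorithm handles simply-typed lambda-terms and higher-order patterns, which the first-order sketch below does not), anti-unification replaces disagreeing subterms by shared fresh variables:

```python
# First-order illustration only: terms are tuples of the form (symbol, args...)
# and the result is their least general generalization.
from itertools import count

def anti_unify(s, t, store=None, fresh=None):
    store = {} if store is None else store       # maps disagreement pairs to variables
    fresh = count() if fresh is None else fresh
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        return (s[0],) + tuple(anti_unify(a, b, store, fresh)
                               for a, b in zip(s[1:], t[1:]))
    if s == t:
        return s
    if (s, t) not in store:                      # same disagreement, same variable
        store[(s, t)] = f"X{next(fresh)}"
    return store[(s, t)]

# f(g(a), a)  vs  f(g(b), b)   ->   f(g(X0), X0)
print(anti_unify(("f", ("g", "a"), "a"), ("f", ("g", "b"), "b")))
```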
Abstract:
Spatial resolution is a key parameter of all remote sensing satellites and platforms. The nominal spatial resolution of satellites is a well-known characteristic because it is directly related to the area on the ground that is represented by a pixel in the detector. Nevertheless, in practice, the actual resolution of a specific image obtained from a satellite is difficult to know precisely because it depends on many other factors such as atmospheric conditions. However, if one has two or more images of the same region, it is possible to compare their relative resolutions. In this paper, a wavelet-decomposition-based method for the determination of the relative resolution between two remotely sensed images of the same area is proposed. The method can be applied to panchromatic, multispectral, and mixed (one panchromatic and one multispectral) images. As an example, the method was applied to compute the relative resolution between SPOT-3, Landsat-5, and Landsat-7 panchromatic and multispectral images taken under similar as well as under very different conditions. On the other hand, if the true absolute resolution of one of the images of the pair is known, the resolution of the other can be computed. Thus, in the last part of this paper, a spatial calibrator designed and constructed to help compute the absolute resolution of a single remotely sensed image is described, and an example of its use is presented.
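The paper's exact wavelet procedure is not spelled out in the abstract; the sketch below (using PyWavelets, with assumed wavelet and level choices) compares fine-scale detail energy between two co-registered images as a rough proxy for their relative resolution:

```python
# Rough proxy only, not the paper's method: the image with relatively more
# energy in the finest wavelet detail level is taken as the higher-resolution
# member of the pair.
import numpy as np
import pywt

def detail_energy_profile(img, wavelet="db2", levels=4):
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=levels)
    # coeffs[0] is the approximation; coeffs[1:] are (cH, cV, cD) per level,
    # ordered from coarsest to finest
    return np.array([sum(np.sum(c ** 2) for c in detail) for detail in coeffs[1:]])

def relative_fine_detail(img_a, img_b, **kw):
    ea, eb = detail_energy_profile(img_a, **kw), detail_energy_profile(img_b, **kw)
    return (ea[-1] / ea.sum()) / (eb[-1] / eb.sum())   # >1: img_a has more fine detail

# toy usage: a random scene versus a blurred copy of it
rng = np.random.default_rng(0)
scene = rng.standard_normal((256, 256))
blurred = 0.25 * (scene + np.roll(scene, 1, 0) + np.roll(scene, 1, 1)
                  + np.roll(scene, (1, 1), (0, 1)))
print(relative_fine_detail(scene, blurred))
```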
Abstract:
Purpose: To test whether the association between childhood adversity and positive and negative psychotic experiences is due to genetic confounding. Method: Childhood adversity and psychotic experiences were assessed in a sample of 226 twins from the general population. A monozygotic (MZ) twin differences approach was used to assess possible genetic confounding. Results: In the whole sample, childhood adversity was significantly associated with positive (β = .45; SE = .16; p = .008) and negative psychotic experiences (β = .77; SE = .18; p < .01). Within-pair MZ twin differences in exposure to childhood adversity were significantly associated with differences in positive (β = .71; SE = .29; p = .016) and negative psychotic experiences (β = .98; SE = .38; p = .014) in a subsample of 86 MZ twin pairs. Conclusions: Individuals exposed to childhood adversity are more likely to report psychotic experiences. Furthermore, our findings indicate that unique environmental effects of childhood adversity contribute to the development of psychotic experiences.
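Schematically, the MZ twin-differences design regresses within-pair differences in outcome on within-pair differences in exposure, which removes genetic and shared-environment confounds because MZ co-twins share both. A toy sketch with made-up data and hypothetical variable names:

```python
# Schematic of the MZ twin-differences analysis; data and column names are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_pairs = 86
adv = rng.normal(size=(n_pairs, 2))              # adversity score, twin 1 and twin 2
pe = 0.7 * adv + rng.normal(size=(n_pairs, 2))   # psychotic-experience score

d = pd.DataFrame({
    "d_adversity": adv[:, 0] - adv[:, 1],        # within-pair differences
    "d_psychotic": pe[:, 0] - pe[:, 1],
})
model = sm.OLS(d["d_psychotic"], sm.add_constant(d["d_adversity"])).fit()
print(model.params, model.bse)                   # beta and SE for the difference score
```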
Abstract:
We report the design and validation of simple magnetic tweezers for oscillating ferromagnetic beads in the piconewton and nanometer scales. The system is based on a single pair of coaxial coils operating in two sequential modes: permanent magnetization of the beads through a large and brief pulse of magnetic field, and generation of magnetic gradients to produce uniaxial oscillatory forces. By using this two-step method, the magnetic moment of the beads remains constant during measurements. Therefore, the applied force can be computed and varies linearly with the driving signal. No feedback control is required to produce well-defined force oscillations over a wide bandwidth. The design of the coils was optimized to obtain high magnetic fields (280 mT) and gradients (2 T/m) with high homogeneity (5% variation) within the sample. The magnetic tweezers were implemented in an inverted optical microscope with a videomicroscopy-based multiparticle tracking system. The apparatus was validated with 4.5 µm magnetite beads, obtaining forces up to ~2 pN and subnanometer resolution. The applicability of the device includes microrheology of biopolymers and cell cytoplasm, molecular mechanics, and mechanotransduction in living cells.
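As a back-of-envelope consistency check using only the numbers quoted above and the standard dipole-force expression F = m dB/dz for a moment aligned with a uniaxial gradient (the inferred moment is ours, not a value reported by the paper):

```python
# Back-of-envelope check of the force scale; the bead moment is inferred from
# the abstract's own numbers, not taken from the paper.
grad_B = 2.0          # T/m, maximum gradient quoted for the coil pair
F_max = 2e-12         # N, ~2 pN maximum force quoted for the 4.5 um beads
m = F_max / grad_B    # implied magnetic moment of a bead, ~1e-12 A m^2
print(f"implied bead moment: {m:.1e} A m^2")

# With the moment fixed by the magnetization pulse, force scales linearly
# with the applied gradient (i.e., with the driving signal):
for g in (0.5, 1.0, 2.0):                 # T/m
    print(f"dB/dz = {g:.1f} T/m  ->  F = {m * g * 1e12:.2f} pN")
```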