993 results for Myth. Poetry. Image. Reflection. Soul. Plotinus
Abstract:
This paper presents a semisupervised support vector machine (SVM) that efficiently integrates the information of both labeled and unlabeled pixels. The method's performance is illustrated on the relevant problem of very high resolution image classification of urban areas. The SVM is trained with a linear combination of two kernels: a base kernel working only with labeled examples is deformed by a likelihood kernel encoding similarities between labeled and unlabeled examples. Results obtained on very high resolution (VHR) multispectral and hyperspectral images show the relevance of the method in the context of urban image classification. Moreover, its simplicity and the few parameters involved make the method versatile and workable by inexperienced users.
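The kernel-combination idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy data, the RBF choice, the particular construction of the likelihood-style kernel from similarity profiles against the unlabeled pool, and the mixing weight `beta` are all assumptions made here for demonstration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
# Hypothetical toy data standing in for pixel feature vectors.
X_lab = rng.normal(size=(40, 5))          # labeled pixels
y_lab = (X_lab[:, 0] > 0).astype(int)     # toy labels
X_unl = rng.normal(size=(200, 5))         # unlabeled pixels

# Base kernel: similarities among labeled pixels only.
K_base = rbf_kernel(X_lab, X_lab, gamma=0.5)

# Likelihood-style kernel (an assumed construction): each labeled pixel
# is described by its similarities to the unlabeled pool, and the inner
# product of these similarity profiles deforms the base kernel.
S = rbf_kernel(X_lab, X_unl, gamma=0.5)   # shape (n_labeled, n_unlabeled)
K_lik = S @ S.T / X_unl.shape[0]

beta = 0.5                                 # mixing weight, a free parameter
K = (1 - beta) * K_base + beta * K_lik     # linear combination of kernels

clf = SVC(kernel="precomputed", C=1.0).fit(K, y_lab)
pred = clf.predict(K)                      # predictions on the labeled set
print(pred.shape)
```

Because the combined matrix is a nonnegative sum of two positive semidefinite kernels, it remains a valid kernel and can be passed to any standard SVM solver via a precomputed Gram matrix.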
Abstract:
Quantum indeterminism is frequently invoked as a solution to the problem of how a disembodied soul might interact with the brain (as Descartes proposed), and is sometimes invoked in theories of libertarian free will even when they do not involve dualistic assumptions. Taking as an example the Eccles-Beck model of interaction between self (or soul) and brain at the level of synaptic exocytosis, I here evaluate the plausibility of these approaches. I conclude that Heisenbergian uncertainty is too small to affect synaptic function, and that amplification by chaos or by other means does not provide a solution to this problem. Furthermore, even if Heisenbergian effects did modify brain functioning, the changes would be swamped by those due to thermal noise. Cells and neural circuits have powerful noise-resistance mechanisms that are adequate protection against thermal noise and must therefore be more than sufficient to buffer against Heisenbergian effects. Other forms of quantum indeterminism must be considered, because these can be much greater than Heisenbergian uncertainty, but they have not so far been shown to play a role in the brain.
Abstract:
We use basic probability theory and simple replicable electronic search experiments to evaluate some reported “myths” surrounding the origins and evolution of the QWERTY standard. The resulting evidence is strongly supportive of arguments put forward by Paul A. David (1985) and W. Brian Arthur (1989) that QWERTY was path dependent with its course of development strongly influenced by specific historical circumstances. The results also include the unexpected finding that QWERTY was as close to an optimal solution to a serious but transient problem as could be expected with the resources at the disposal of its designers in 1873.
Abstract:
There are some striking similarities and some differences between the seismic reflection sections recorded across the fold and thrust belts of the southeast Canadian Cordillera, Quebec-Maine Appalachians and Swiss Alps. In the fold and thrust belts of all three mountain ranges, seismic reflection surveys have yielded high-quality images of: (1) nappes (thin thrust sheets) stacked on top of ancient continental margins; (2) ramp anticlines in the hanging walls of faults that have ramp-flat or listric geometries; (3) back thrusts and back folds that developed during the terminal phases of orogeny; and (4) tectonic wedges and regional decollements. A principal result of the Cordilleran and Appalachian deep crustal studies has been the recognition of master decollements along which continental margin strata have been transported long distances, whereas a principal result of the Swiss Alpine deep crustal program has been the identification of the Adriatic indenter, a crustal-scale wedge that caused delamination of the European lithosphere. Significant crustal roots are observed beneath the fold and thrust belts of the Alps, southeast Canadian Cordillera and parts of the southern Appalachians, but such structures beneath the northern Appalachians have probably been removed by post-orogenic collapse and/or crustal attenuation associated with the Mesozoic opening of the Atlantic Ocean.
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than other low-level physical features (e.g. luminance, or spectral properties); otherwise, interpretation of results is confounded. Often, researchers circumvent this issue by including additional control conditions or tasks, both of which are flawed remedies that also prolong experiments. Here, we present some new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; these methods can, however, be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level equalizes individual stimuli in terms of their mean luminance: each data point in a stimulus is adjusted to a standardized value derived from the entire stimulus battery. The second level analyzes two populations of stimuli along their spectral properties (i.e. spatial frequency) using a dissimilarity metric that equals the root mean square of the distance between two populations of objects as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are then used to minimize, in a completely data-driven manner, the spectral differences between image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
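The two levels described above can be sketched numerically. This is a minimal illustration under stated assumptions, not the authors' code: the toy image batteries are random arrays, and the RMS spectral dissimilarity is computed here between the mean 2-D amplitude spectra of the two sets (the permutation-based minimization over set assignments is omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(1)
set_a = rng.random((10, 32, 32))   # hypothetical image battery A
set_b = rng.random((10, 32, 32))   # hypothetical image battery B

# Level 1: equalize each image's mean luminance to the battery-wide mean.
target = np.concatenate([set_a, set_b]).mean()
set_a = set_a - set_a.mean(axis=(1, 2), keepdims=True) + target
set_b = set_b - set_b.mean(axis=(1, 2), keepdims=True) + target

# Level 2: RMS distance between the mean amplitude spectra of the two sets,
# as a proxy for their dissimilarity in spatial-frequency content.
def mean_spectrum(images):
    return np.abs(np.fft.fft2(images)).mean(axis=0)

def spectral_dissimilarity(a, b):
    return np.sqrt(np.mean((mean_spectrum(a) - mean_spectrum(b)) ** 2))

d = spectral_dissimilarity(set_a, set_b)
print(float(d) >= 0.0)
```

In a full implementation, images would be randomly swapped between the two sets over many permutations and the assignment with the smallest dissimilarity retained, which is the data-driven minimization the abstract describes.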
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior-probability-based. For each of them, the most recent advances in the remote sensing community are discussed, and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
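The query loop described above, in its posterior-probability flavor, can be sketched in a few lines. This is an illustrative toy, not one of the paper's benchmarks: the data are synthetic, the classifier is a logistic regression standing in for any probabilistic model, and the smallest-margin ("breaking ties" style) uncertainty heuristic is one assumed choice among those the paper surveys.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)   # toy oracle for labels

labeled = list(range(10))                        # small initial training set
pool = [i for i in range(300) if i not in labeled]

for _ in range(5):                               # five active-learning iterations
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    # Uncertainty heuristic: smallest margin between the two class
    # posteriors marks the most uncertain pixel in the pool.
    margins = np.abs(proba[:, 1] - proba[:, 0])
    pick = pool[int(np.argmin(margins))]
    labeled.append(pick)                         # the "user" labels this pixel
    pool.remove(pick)

print(len(labeled))   # 10 initial + 5 queried = 15 labeled samples
```

Committee-based and large-margin heuristics follow the same loop, differing only in how the ranking score is computed (disagreement among models, or distance to the SVM decision boundary).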
Abstract:
Images obtained from high-throughput mass spectrometry (MS) contain information that remains hidden when looking at a single spectrum at a time. Image processing of liquid chromatography-MS datasets can be extremely useful for quality control, experimental monitoring and knowledge extraction. The importance of imaging in differential analysis of proteomic experiments has already been established through two-dimensional gels and can now be foreseen with MS images. We present MSight, a new software package designed to construct and manipulate MS images, as well as to facilitate their analysis and comparison.