33 results for perceptual scepticism
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Liberalism claims that for a subject S to be justified in believing p, a proposition about the external world, on the basis of his senses, it is not necessary for S to be antecedently justified in believing propositions such as that there is an external world. Conservatism, by contrast, claims that to be justified in believing that p on the basis of one's perception, one must have antecedent justification for such background propositions. Intuitively, we are inclined to think that liberalism about the structure of perceptual justification fits better with our epistemic practices: we acknowledge that, although we cannot produce warrant for the background belief in the external world, our empirical beliefs can be perceptually justified. However, I am interested in arguing that conservatism is theoretically better supported than liberalism. The first reason is that dogmatism, in embracing liberalism, is affected by pervasive problems. The second comes from recognizing the strength of the argument based on the thesis that experience is theory-laden. But not everything favours conservatism. Conservatism is presupposed in contemporary formulations of scepticism through the requirement of prior justification for background assumptions, and this fact compels anti-sceptical conservatives to conceive a non-evidential form of warrant, entitlement, to contest the sceptical threat. My main worry is that, although the path of entitlement has some prospect of success, this new notion of justification seems to be posited ad hoc for conservatives to solve the sceptical problem. These contents are organized across three chapters. The result of chapter 1 is a pattern of sceptical argument formed by two premises: P1*, a conservative principle, and P2*. Chapters 2 and 3 describe two anti-sceptical proposals against the argument sketched in chapter 1. Chapter 2 is devoted to explaining and assessing a first anti-sceptical proposal that denies P1*: dogmatism.
Chapter 3 describes another anti-sceptical strategy, the route of entitlement, which contests scepticism by denying the plausibility of P2*.
Abstract:
Perceptual maps have been used for decades by market researchers to shed light on the similarity between brands in terms of a set of attributes, to position consumers relative to brands in terms of their preferences, or to study how demographic and psychometric variables relate to consumer choice. Invariably these maps are two-dimensional and static. As we enter the era of electronic publishing, the possibilities for dynamic graphics are opening up. We demonstrate the usefulness of introducing motion into perceptual maps through four examples. The first example shows how a perceptual map can be viewed in three dimensions, and the second moves between two analyses of the data that were collected according to different protocols. In a third example we move from the best view of the data at the individual level to one which focuses on between-group differences in aggregated data. A final example considers the case when several demographic variables or market segments are available for each respondent, showing an animation with increasingly detailed demographic comparisons. These examples of dynamic maps use several data sets from marketing and social science research.
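The abstract does not say which mapping technique underlies these maps; classical multidimensional scaling (MDS) is one common way such brand maps are computed. A minimal NumPy sketch, using a hypothetical dissimilarity matrix for three brands (the matrix values are invented for illustration):

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed an n-by-n dissimilarity matrix into k dims."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    idx = np.argsort(vals)[::-1][:k]           # keep the k largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Hypothetical pairwise brand dissimilarities (symmetric, zero diagonal).
d = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.5],
              [3.0, 2.5, 0.0]])
coords = classical_mds(d, k=2)  # one 2-D point per brand, ready to plot
```

The dynamic maps described in the abstract would then animate transitions between such coordinate solutions.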
Abstract:
This paper presents findings from a study investigating a firm's ethical practices along the value chain. In so doing, we attempt to better understand potential relationships between a firm's ethical stance towards its customers and that towards its suppliers within a supply chain, and to identify particular sectoral and cultural influences that might impinge on this. Drawing upon a database of 667 industrial firms from 27 different countries, we found that ethical practices begin with the firm's relationship with its customers, the characteristics of which then influence the ethical stance towards the firm's suppliers within the supply chain. Importantly, market structure along with some key cultural characteristics was also found to exert significant influence on the implementation of ethical policies in these firms.
Abstract:
Study carried out during a stay at the School of Modern Languages of the University of London, Great Britain, between August and December 2006. The aim of the research is to present the empiricist movement through Hume, Locke, Berkeley and other eighteenth-century philosophers. In addition, it analyses the Scottish philosophy of common sense, since it influenced Catalan philosophy during the "Renaixença". Its founder, Thomas Reid, is known for introducing a philosophy that did not follow the scepticism of the philosophers cited above. In short, Hume claimed that sense experience consists exclusively of subjective ideas or impressions in the mind. One response to this "ideal system" was the philosophy of common sense, which developed as a reaction to the scepticism of David Hume and other Scottish philosophers. Against this "ideal system", the new school holds that the ordinary experience of human beings instinctively yields certain beliefs: in one's own existence; in the existence of the real objects directly perceived; and in "basic principles" grounding moral and religious beliefs. Between 1816 and 1870 the Scottish doctrine was adopted as the official philosophy in France. Its principles gained force through Victor Cousin and the translation of Thomas Reid's works into French by Jouffroy. It was thus through the French translations that Ramon Martí d'Eixalà introduced Scottish philosophy into Catalonia (there is no evidence that Martí d'Eixalà knew the English versions of Reid's works). In conclusion, the Scottish common-sense movement influenced the Catalan school of philosophy.
Abstract:
JPEG2000 is an image compression standard that applies a wavelet transform followed by uniform dead-zone quantization of the coefficients. Wavelet coefficients exhibit certain dependencies, both statistical and visual. The statistical dependencies are taken into account in the JPEG2000 scheme; the visual dependencies, however, are not. In this work, we aim to find a representation better adapted to the visual system than the one JPEG2000 provides directly. To find it we use divisive normalization of the coefficients, a technique that has already shown results in both statistical and perceptual decorrelation of coefficients. Ideally, we would like to remap the coefficients into a space of values in which a larger coefficient value implies a larger visual contribution, and use that space of values for coding. In practice, however, we want our coding system to be integrated into a standard. For this reason we use JPEG2000, an ITU standard that permits a choice of the distortion measure used in coding, and we use the distortion in the normalized-coefficient domain as the distortion measure for choosing which data are sent first.
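Divisive normalization divides each coefficient by an estimate of the energy of its neighbours, so that equal normalized magnitudes carry roughly equal visual weight. The abstract does not give the kernel or constants used in the thesis; the following is a minimal one-dimensional sketch in which the saturation constant `b` and neighbourhood `radius` are illustrative assumptions:

```python
import numpy as np

def divisive_normalization(coeffs, b=0.1, radius=1):
    """Divide each coefficient by the local energy of its neighbourhood
    (illustrative parameter values, not the thesis's)."""
    x = np.asarray(coeffs, dtype=float)
    out = np.empty_like(x)
    n = len(x)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        energy = np.sum(x[lo:hi] ** 2)      # includes the coefficient itself
        out[i] = x[i] / np.sqrt(b + energy)
    return out

band = np.array([4.0, -2.0, 0.5, 0.0, 3.0])  # a toy wavelet subband
r = divisive_normalization(band)
```

A coefficient surrounded by large neighbours is attenuated more than an isolated one of the same size, which is the masking behaviour the perceptual model exploits.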
Abstract:
This research project deals with the relationship between pedagogy, translation, foreign languages and multiple intelligences. Whether translation is a useful tool in the foreign-language classroom is a current debate that many researchers are still investigating. Recent studies, however, have shown that any translation task, including work on the different language skills, is profitable if we regard it as a means rather than an end in itself. The use of translation in the classroom is clearly advantageous, but we must also bear in mind certain drawbacks of this practice. One possible drawback is the belief that many people initially hold about word-for-word equivalence between one language and another. But after being presented with several translation tasks, students can come to control even unconscious translation and can reach a degree of precision and flexibility that is worth mentioning. The main advantage, however, is that they engage in an activity that is widespread in today's society and that combines two languages, for example the mother tongue and the language under study. From all this we can conclude that using the mother tongue in class should not be regarded as a crime, as it has been until now, but as a virtue, provided of course that it is used correctly. This research project offers a synthesis of the main theories of language acquisition and learning as well as of translation theories. As to whether theories, both of translation and of foreign languages, should be taught implicitly or explicitly, one can infer that, depending on the learners' level of study, it will suit them to learn the theories explicitly, or they will in any case learn them implicitly.
Since any group of students is heterogeneous, that is, each individual has a particular pace and level of learning and, above all, different perceptual styles (visual, auditory, gustatory, olfactory, kinaesthetic) and therefore different intelligences, teachers must take this into account when planning any programme of work for their students. We can therefore conclude that translation tasks or projects can help students learn better and more effectively, and achieve more meaningful learning.
Abstract:
In the PhD thesis "Sound Texture Modeling" we deal with the statistical modelling of textural sounds such as water, wind, rain, etc., for synthesis and classification. Our initial model is based on a wavelet tree signal decomposition and the modelling of the resulting sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (the hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter) and faithfully reproduces some of the sound classes. In terms of the more general taxonomy of natural events proposed by Gaver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, those segmentation models can help in identifying textural portions of an audio recording useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows a database of units to be explored sonically by means of their representation in a perceptual feature space. Concatenative synthesis with "molecules" built from sparse atomic representations also allows low-level correlations in perceptual audio features to be captured, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a more high-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds.
Our research is embedded within the Metaverse 1 European project (2008-2011), where our models contribute as low-level building blocks within a semi-automated soundscape generation system.
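The wavelet tree decomposition underlying the thesis's initial model can be illustrated with its simplest instance, a multi-level Haar analysis; the thesis's actual filter bank and the hidden Markov tree trained on the coefficients are not reproduced here:

```python
import numpy as np

def haar_decompose(signal, levels):
    """Multi-level orthonormal Haar analysis.
    Returns (final approximation, [coarsest detail, ..., finest detail])."""
    a = np.asarray(signal, dtype=float)   # length must be divisible by 2**levels
    details = []
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))   # detail (high-pass) band
        a = (even + odd) / np.sqrt(2)               # approximation (low-pass)
    return a, details[::-1]

x = np.array([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0])  # toy signal frame
approx, dets = haar_decompose(x, levels=3)
```

Each coefficient at one level has two "children" at the next finer level, which is the parent-child tree structure a hidden Markov tree model places its state dependencies on.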
Abstract:
Final-year project (TFC) on volume normalization of MP3 files, framed within a broader project that includes reading MP3 files, modifying them to apply a gain, and developing a user interface for applying normalization to MP3 files. The part developed here is the intermediate phase: computing the volume gain that would have to be applied to a sound file in order to normalize its volume.
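The abstract does not specify how the gain is computed; a common approach is to derive a decibel gain from the signal's RMS level relative to a target level. A minimal sketch on raw samples (the -14 dBFS target is an assumption for illustration, not the project's choice):

```python
import numpy as np

def normalization_gain_db(samples, target_db=-14.0):
    """Gain in dB that brings the signal's RMS level to target_db (dBFS).
    The target value is illustrative."""
    rms = np.sqrt(np.mean(np.square(samples, dtype=float)))
    current_db = 20.0 * np.log10(rms)
    return target_db - current_db

# A quiet 440 Hz tone, one second at 48 kHz, peak amplitude 0.1.
x = 0.1 * np.sin(2 * np.pi * 440.0 * np.arange(48000) / 48000)
gain = normalization_gain_db(x)          # positive: the file must be boosted
```

In the MP3 setting the same idea is applied without decoding to PCM, by adjusting the global-gain fields of the frames by the computed number of decibels.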
Abstract:
This project can be divided into three parts: a first part extracting the frequency components of the frames of MP3 files; a second part analysing the frequency-component data of several MP3 files and computing a normalization factor from them; and a final part correctly modifying the gains of the MP3 frames according to the normalization factor generated in the previous part. This final-year project implements the first and third of the parts described above.
Abstract:
The MPEG-1 Layer III standard was created little more than ten years ago, and in this short time it has revolutionized the hitherto stable world of music. The ability to compress a whole song into a few megabytes (3 or 4) without an appreciable loss of quality, together with the proliferation of computers connected to the Internet, meant that traffic in files in this format brought down more than one server. It is therefore not surprising that entire collections of music files in MP3 format appeared, drawn from the most varied sources. This diversity of sources, together with the variability among encoders, means that the sound volume of these files is far from uniform. And that is precisely what this project seeks to achieve: making an entire collection of MP3 files sound equally loud.
Abstract:
Report on the scientific sojourn carried out at the University Medical Center, Switzerland, from 2010 to 2012. Abundant evidence suggests that negative emotional stimuli are prioritized in the perceptual systems, and this facilitated detection is generally paralleled by enhanced neural responses in early sensory regions relative to the processing of neutral information. The amygdala and other limbic regions, such as the orbitofrontal cortex, may play a critical role here by sending modulatory projections onto the sensory cortices via direct or indirect feedback. The present project aimed at investigating two important issues regarding these mechanisms of emotional attention by means of functional magnetic resonance imaging. In Study I, we examined the modulatory effects of visual emotion signals on the processing of task-irrelevant visual, auditory, and somatosensory input, that is, the intramodal and crossmodal effects of emotional attention. We observed that brain responses to auditory and tactile stimulation were enhanced during the processing of visual emotional stimuli, as compared to neutral, in bilateral primary auditory and somatosensory cortices, respectively. However, brain responses to visual task-irrelevant stimulation were diminished in left primary and secondary visual cortices under the same conditions. The results also suggested the existence of a multimodal network associated with emotional attention, presumably involving mediofrontal, temporal and orbitofrontal regions. Finally, Study II examined the differential brain responses along the low-level visual pathways and limbic regions as a function of the number of retinal spikes during visual emotional processing. The experiment used stimuli resulting from an algorithm that simulates how the visual system perceives a visual input after a given number of retinal spikes.
The results validated the visual model in human subjects and suggested differential emotional responses in the amygdala and visual regions as a function of spike level. A list of publications resulting from work in the host laboratory is included in the report.
Abstract:
Recently, there has been increased interest in the neural mechanisms underlying perceptual decision making; however, the effect of neuronal adaptation in this context has not yet been studied. We begin by investigating how adaptation can bias perceptual decisions. We considered behavioral data from an experiment on high-level adaptation-related aftereffects in a perceptual decision task with ambiguous stimuli in humans. To understand the driving force behind the perceptual decision process, a biologically inspired cortical network model was used. Two theoretical scenarios arose for explaining the perceptual switch from the category of the adaptor stimulus to the opposite, non-adapted one: a noise-driven transition, due to the probabilistic spike times of neurons, and an adaptation-driven transition, due to afterhyperpolarization currents. With increasing levels of neural adaptation, the system shifts from a noise-driven to an adaptation-driven mode. The behavioral results show that the underlying model is not just a bistable model, as is usual in the decision-making modelling literature, but that neuronal adaptation is high and the working point of the model therefore lies in the oscillatory regime. Using the same model parameters, we studied the effect of neural adaptation in a perceptual decision-making task where the same ambiguous stimulus was presented with and without a preceding adaptor stimulus. We find that, for different levels of sensory evidence favoring one of the two interpretations of the ambiguous stimulus, higher levels of neural adaptation lead to quicker decisions, contributing to a speed-accuracy trade-off.
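The adaptation-driven regime can be illustrated with a toy rate model (not the thesis's spiking network): two mutually inhibitory units with a slow adaptation variable each. With strong adaptation and no noise at all, the current winner fatigues and dominance alternates periodically; all parameter values are illustrative:

```python
import numpy as np

def simulate(steps=5000, dt=1.0, tau=10.0, tau_a=500.0,
             g_a=3.5, w_inh=2.0, drive=1.0):
    """Two mutually inhibitory rate units with slow adaptation (toy model).
    Strong adaptation (g_a) makes the winner fatigue, producing deterministic
    alternation of dominance: the adaptation-driven switching regime."""
    f = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))  # sigmoid gain
    r = np.array([0.6, 0.4])        # rates; slight initial bias toward unit 0
    a = np.zeros(2)                 # adaptation variables, one per unit
    winner = np.empty(steps, dtype=int)
    for t in range(steps):
        inp = drive - w_inh * r[::-1] - g_a * a   # cross-inhibition + fatigue
        r = r + dt / tau * (-r + f(inp))
        a = a + dt / tau_a * (-a + r)
        winner[t] = int(r[1] > r[0])
    return winner

w = simulate()
n_switches = int(np.sum(w[1:] != w[:-1]))   # number of dominance reversals
```

With `g_a` small the same model has two stable winner-take-all states and would only switch under noise; raising `g_a` moves the working point into the oscillatory regime described in the abstract.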
Abstract:
In this work we propose a new automatic methodology for computing accurate digital elevation models (DEMs) in urban environments from the low-baseline stereo pairs that will be available in the future from a new kind of earth observation satellite. This setting renders both views of the scene similar, thus avoiding occlusions and illumination changes, which are the main disadvantages of the commonly accepted large-baseline configuration. Two crucial technological challenges remain: (i) precisely estimating DEMs with strong discontinuities and (ii) providing a statistically proven result, automatically. The first is solved here by a piecewise affine representation that is well adapted to man-made landscapes, whereas the application of computational Gestalt theory introduces reliability and automation. In fact, this theory allows us to reduce the number of parameters to be adjusted and to control the number of false detections. This leads to the selection of a suitable segmentation into affine regions (whenever possible) by a novel and completely automatic perceptual grouping method. It also allows us to discriminate, e.g., vegetation-dominated regions, where such an affine model does not apply and a more classical correlation technique should be preferred. In addition, we propose an extension of the classical "quantized" Gestalt theory to continuous measurements, thus combining its reliability with the precision of the variational robust estimation and fine interpolation methods that are necessary in the low-baseline case. Such an extension is very general and will be useful for many other applications as well.
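A piecewise affine DEM represents the height over each segmented region as a plane z = ax + by + c. As a minimal sketch of one region's fit, here is a least-squares plane estimate on synthetic noiseless data (the point coordinates and true plane are invented for illustration; the paper's robust variational estimator is not reproduced):

```python
import numpy as np

# Hypothetical 3-D points sampled from one planar roof region (z = 2x - y + 5).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 50)
y = rng.uniform(0.0, 10.0, 50)
z = 2.0 * x - 1.0 * y + 5.0

# Least-squares fit of the affine model z = a*x + b*y + c for that region.
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
```

In the full method this fit is only accepted for regions where the affine model is statistically meaningful; elsewhere correlation-based matching is used instead.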
Abstract:
Whereas people are typically thought to be better off with more choices, studies show that they often prefer to choose from small as opposed to large sets of alternatives. We propose that satisfaction from choice is an inverted U-shaped function of the number of alternatives. This proposition is derived theoretically by considering the benefits and costs of different numbers of alternatives and is supported by four experimental studies. We also manipulate the perceptual costs of information processing and demonstrate how this affects the resulting satisfaction function. We further indicate that satisfaction when choosing from a given set is diminished if people are made aware of the existence of other choice sets. The role of individual differences in satisfaction from choice is documented by noting effects due to gender and culture. We conclude by emphasizing the need to have an explicit rationale for knowing how much choice is enough.
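The benefit-cost derivation of the inverted U can be sketched numerically: with an assumed concave benefit from added alternatives and a linear evaluation cost (neither functional form is taken from the paper), satisfaction peaks at an interior set size:

```python
import numpy as np

# Toy model (assumed forms): diminishing-returns benefit minus a linear
# per-alternative comparison cost yields an inverted-U satisfaction curve.
n = np.arange(1, 41)                 # candidate set sizes
benefit = np.log(n)                  # concave: each extra option adds less
cost = 0.1 * n                       # linear evaluation/comparison cost
satisfaction = benefit - cost
best = int(n[np.argmax(satisfaction)])   # interior optimum, not 1 or 40
```

Raising the per-alternative cost (as the perceptual-cost manipulation in the studies does) shifts the peak toward smaller sets.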