62 results for false
at the Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
We sometimes vividly remember things that did not happen, a phenomenon with general relevance, not only in the courtroom. It is unclear to what extent individual differences in false memories are driven by anatomical differences in memory-relevant brain regions. Here we show in humans that microstructural properties of different white matter tracts, as quantified using diffusion tensor imaging, are strongly correlated with true and false memory retrieval. To investigate these hypotheses, we tested a large group of participants in a version of the Deese-Roediger-McDermott paradigm (recall and recognition) and subsequently obtained diffusion tensor images. A voxel-based, whole-brain linear regression analysis was performed to relate fractional anisotropy to indices of true and false memory recall and recognition. True memory was correlated with diffusion anisotropy in the inferior longitudinal fascicle, the major connective pathway of the medial temporal lobe, whereas a greater proneness to retrieve false items was related to the superior longitudinal fascicle connecting frontoparietal structures. Our results show that individual differences in white matter microstructure underlie true and false memory performance.
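A voxel-wise linear regression of this kind can be sketched in a few lines. The following is a minimal pure-Python illustration with toy data and invented function names, not the authors' actual DTI pipeline:

```python
# Hypothetical sketch: per-voxel ordinary least-squares regression of
# fractional anisotropy (FA) on a behavioural memory score.

def fit_line(xs, ys):
    """OLS slope and intercept for one voxel."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def voxelwise_regression(fa_maps, scores):
    """fa_maps: per-subject FA values per voxel (subjects x voxels).
    Returns the per-voxel slope of FA regressed on the memory score."""
    n_vox = len(fa_maps[0])
    slopes = []
    for v in range(n_vox):
        fa_v = [subj[v] for subj in fa_maps]
        slope, _ = fit_line(scores, fa_v)
        slopes.append(slope)
    return slopes
```

Voxels whose slope differs significantly from zero would then be the candidates reported as correlated with memory performance.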
Abstract:
One of the most important milestones in the development of theory of mind is the understanding of false beliefs. This study compares children’s understanding of representational change and of others’ false beliefs, and evaluates the effectiveness of an appearance-reality training for improving children’s false belief understanding. A total of 78 children ranging in age from 41 to 47 months were trained in three sessions and evaluated in a pretest and in a posttest. The results show that it is easier for children to understand representational change than false beliefs in others, and that the improvement after training was greater when starting from a higher score in the pretest. The implications of this for training in false belief understanding are discussed.
Abstract:
A social choice function is group strategy-proof on a domain if no group of agents can manipulate its final outcome to their own benefit by declaring false preferences on that domain. Group strategy-proofness is a very attractive requirement of incentive compatibility. But in many cases it is hard or impossible to find nontrivial social choice functions satisfying even the weaker condition of individual strategy-proofness. However, there are a number of economically significant domains where interesting rules satisfying individual strategy-proofness can be defined, and for some of them all these rules turn out to also satisfy the stronger requirement of group strategy-proofness. This is the case, for example, when preferences are single-peaked or single-dipped. In other cases, this equivalence does not hold. We provide sufficient conditions defining domains of preferences guaranteeing that individual and group strategy-proofness are equivalent for all rules defined on those domains.
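The single-peaked case the abstract mentions can be illustrated with a toy brute-force check: the median rule on symmetric single-peaked preferences over a small grid is individually strategy-proof. This check is our illustration, not the paper's construction:

```python
# Toy check of individual strategy-proofness for the median rule.
# An agent with peak p prefers outcomes closer to p (a symmetric
# single-peaked preference); the rule picks the median of reported peaks.

from itertools import product
from statistics import median

def median_rule(peaks):
    return median(peaks)

def individually_strategy_proof(alternatives, n_agents):
    """True iff no agent can get an outcome strictly closer to its
    true peak by misreporting, over all profiles on the grid."""
    for peaks in product(alternatives, repeat=n_agents):
        truthful = median_rule(list(peaks))
        for i, p in enumerate(peaks):
            for lie in alternatives:
                misreport = list(peaks)
                misreport[i] = lie
                if abs(median_rule(misreport) - p) < abs(truthful - p):
                    return False
    return True
```

Running the check on a three-alternative grid with three agents confirms the classic result that the median rule cannot be manipulated by any single agent.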
Abstract:
The Great Tohoku-Kanto earthquake and the resulting tsunami have brought considerable attention to the issue of the construction of new nuclear power plants. We argue in this paper that nuclear power is not a sustainable solution to energy problems. First, we explore the stock of uranium-235 and the different schemes developed by the nuclear power industry to exploit this resource. Second, we show that these methods, fast breeder and MOX fuel reactors, are not feasible. Third, we show that the argument that nuclear energy can be used to reduce CO2 emissions is false: the emissions from the increased water evaporation caused by nuclear power generation must be accounted for. In the case of Japan, water from nuclear power plants is drained into the surrounding sea, raising the water temperature, which has an adverse effect on the immediate ecosystem as well as increasing CO2 emissions through increased water evaporation from the sea. Next, a short exercise is used to show that nuclear power is not even needed to meet consumer demand in Japan. Such an exercise should be performed for any country considering the construction of additional nuclear power plants. Lastly, the paper concludes with a discussion of the implications of our findings.
Abstract:
The Llei Integral de Mesures contra la Violència de Gènere (Comprehensive Law on Measures against Gender Violence) establishes a series of legislative measures aimed at recognising the rights of victims of violence, strengthening public awareness of gender violence, establishing a system of social services for the care and recovery of victims, guaranteeing their economic and labour rights, establishing an institutional system of protection, defining a criminal and procedural framework, and guaranteeing the coordination of all the resources devoted to this issue. This study grew out of our awareness of the importance of this Law and of the difficulties arising from its application, as well as out of our experience in forensic practice in cases of domestic and gender violence. The research aims to establish, from the analysis of the accounts of women victims of intimate-partner gender violence, credibility criteria that make it possible to validate their statements; criteria obtained from the victims' own words. Using a mixed quantitative and qualitative methodology, we obtained results that allowed us to define a system of genuine and discriminating credibility criteria, the discrimination being obtained by comparing the statements of two samples: real victims and simulated victims.
As a result of this comparison, the following emerged as credibility criteria for women victims of gender violence that allow their accounts to be validated: narrative discontinuity, strange details, reproduction of conversations, spontaneous corrections, ambivalence and ambiguity towards the aggressor, indirect violence arousing fear and terror, fear of reprisals, the imposition and intimacy of secrecy, helplessness, the evolution of the violence and the progression of the power asymmetry, the duality between the man's domestic behaviour and his social image, broad male cognitive-behavioural control, the contextualised description of micro-violence, the woman's survival strategies, and an account inhibited by shame.
Abstract:
This research contrasts the accounts of two samples of women. The first consists of real victims of intimate-partner gender violence; the second deliberately offer false accounts of this same problem. The results show a significant qualitative difference based on the credibility criteria defined. At the same time, the study has found other genuine credibility criteria, which make up the explanatory discourse that allows their statements to be validated.
Abstract:
Contemporary international migration is embedded in a process of global interconnection defined by the revolutions in transport and in information and communication technologies. One consequence of this global interconnection is that migrants have a greater capacity to process information both before and after leaving. These changes could have unexpected implications for contemporary migration regarding migrants' ability to make better-informed decisions, the reduction of uncertainty in migration contexts, the blurring of the concept of distance, or the decision to migrate to more distant places. This research is important because a lack of knowledge on this question could widen the gap between the objectives of migration policies and their outcomes. The role of information agents in migration contexts could also change. In this scenario, for migration policies to be more effective, they will have to take into account the migrant population's greater capacity to process information and the information sources it trusts. This article shows that the equation "more information equals better informed" does not always hold. Even in the information age, unreliable sources, false expectations, information overload and rumours are still present in migration contexts. Nevertheless, we argue that these unintended effects could be reduced by meeting four requirements of reliable information: that it be exhaustive, relevant, trusted and up to date.
Abstract:
Detecting changes between images of the same scene taken at different times is of great interest for monitoring and understanding the environment. Change detection is widely used in on-land applications, but its algorithms require highly accurate geometric and photometric registration, a requirement that has precluded their use in underwater imagery in the past. In this paper, we analyze the change detection techniques currently available for on-land applications and propose a method to automatically detect changes in sequences of underwater images. Target application scenarios are habitat restoration sites, or area monitoring after sudden impacts from hurricanes or ship groundings. The method is based on the creation of a 3D terrain model from one image sequence over an area of interest. This model allows for synthesizing textured views that correspond to the same viewpoints as a second image sequence. The generated views are photometrically matched and corrected against the corresponding frames from the second sequence. Standard change detection techniques are then applied to find areas of difference. Additionally, the paper shows that it is possible to detect false positives resulting from non-rigid objects by applying the same change detection method to the first sequence exclusively. The developed method was able to correctly find the changes between two challenging sequences of images from a coral reef taken one year apart and acquired with two different cameras.
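The photometric matching and differencing step can be sketched as follows. This is a hedged, one-dimensional toy (a gain/offset least-squares fit followed by thresholding of the absolute difference), not the authors' implementation:

```python
# Illustrative sketch: photometrically match a synthesized view to the
# real frame, then flag pixels whose corrected difference exceeds a
# threshold. Pixel intensities here are 1-D lists for simplicity.

def photometric_match(synth, real):
    """Fit real ~ a*synth + b by least squares; return corrected synth."""
    n = len(synth)
    ms = sum(synth) / n
    mr = sum(real) / n
    var = sum((s - ms) ** 2 for s in synth)
    cov = sum((s - ms) * (r - mr) for s, r in zip(synth, real))
    a = cov / var if var else 1.0
    b = mr - a * ms
    return [a * s + b for s in synth]

def change_mask(synth, real, thresh=0.2):
    """True where the photometrically corrected views still disagree."""
    corrected = photometric_match(synth, real)
    return [abs(c - r) > thresh for c, r in zip(corrected, real)]
```

A genuine change shows up as a residual the gain/offset correction cannot absorb, while a global illumination shift is fitted away.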
Abstract:
The design of control, estimation or diagnosis algorithms most often assumes that all available process variables represent the system state at the same instant of time. However, this is never true in current networked systems, because of the unknown deterministic or stochastic transmission delays introduced by the communication network. During the diagnosis stage, these delays will often generate false alarms: under nominal operation, the different transmission delays associated with the variables that appear in the residual computation produce discrepancies of the residuals from zero. A technique aimed at minimising the resulting false alarm rate, based on the explicit modelling of the communication delays and on their best-case estimation, is proposed.
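The effect described above can be illustrated with a toy residual. The signal values and the simple delay-compensation scheme below are ours, chosen only to show why unmodelled delays produce nonzero residuals under nominal operation:

```python
# Toy sketch: a residual r(k) = y(k) - u(k) should be zero under nominal
# operation, but if y arrives delayed by d samples the raw residual
# deviates from zero (a false alarm), while aligning the signals with a
# delay estimate restores it.

def residual(y, u):
    return [yi - ui for yi, ui in zip(y, u)]

def residual_delay_compensated(y, u, d):
    """Align u with the delayed measurement y before differencing."""
    return [y[k] - u[k - d] for k in range(d, len(y))]

u = [0.0, 1.0, 2.0, 3.0, 4.0]  # signal as seen by the diagnoser
y = [0.0, 0.0, 1.0, 2.0, 3.0]  # same signal, delayed by one sample

raw = residual(y, u)                         # nonzero: false alarms
fixed = residual_delay_compensated(y, u, 1)  # zero after compensation
```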
Abstract:
Uncertainties not considered in the analytical model of the plant always dramatically decrease the performance of the fault detection task in practice. To cope better with this prevalent problem, in this paper we develop a methodology using Modal Interval Analysis that takes those uncertainties in the plant model into account. A fault detection method based on this model is developed that is quite robust to uncertainty and produces no false alarms. As soon as a fault is detected, an ANFIS model is trained online to capture the major behaviour of the fault, which can then be used for fault accommodation. The simulation results clearly demonstrate the capability of the proposed method to accomplish both tasks appropriately.
Abstract:
Background: Searching for associations between genetic variants and complex diseases has been a very active area of research for over two decades. More than 51,000 potential associations have been studied and published, a figure that keeps increasing, especially with the recent explosion of array-based Genome-Wide Association Studies. Even if the number of true associations described so far is high, many of the putative risk variants detected so far have failed to be consistently replicated and are widely considered false positives. Here, we focus on the worldwide patterns of replicability of published association studies. Results: We report three main findings. First, contrary to previous results, genes associated with complex diseases present lower degrees of genetic differentiation among human populations than average genome-wide levels. Second, also contrary to previous results, the differences in replicability of disease-associated loci between Europeans and East Asians are highly correlated with genetic differentiation between these populations. Finally, highly replicated genes present increased levels of high-frequency derived alleles in European and Asian populations when compared to African populations. Conclusions: Our findings highlight the heterogeneous nature of the genetic etiology of complex disease, confirm the importance of the recent evolutionary history of our species in current patterns of disease susceptibility, and could cast doubts on the status as false positives of some associations that have failed to replicate across populations.
Abstract:
Functional RNA structures play an important role both in the context of noncoding RNA transcripts and as regulatory elements in mRNAs. Here we present a computational study to detect functional RNA structures within the ENCODE regions of the human genome. Since structural RNAs in general lack characteristic signals in primary sequence, comparative approaches evaluating evolutionary conservation of structures are most promising. We have used three recently introduced programs based on either phylogenetic stochastic context-free grammars (EvoFold) or energy-directed folding (RNAz and AlifoldZ), yielding several thousand candidate structures (corresponding to ∼2.7% of the ENCODE regions). EvoFold has its highest sensitivity in highly conserved and relatively AU-rich regions, while RNAz favors slightly GC-rich regions, resulting in a relatively small overlap between methods. Comparison with the GENCODE annotation points to functional RNAs in all genomic contexts, with a slightly increased density in 3′-UTRs. While we estimate a significant false discovery rate of ∼50%–70%, many of the predictions can be further substantiated by additional criteria: 248 loci are predicted by both RNAz and EvoFold, and an additional 239 RNAz or EvoFold predictions are supported by the (more stringent) AlifoldZ algorithm. Five hundred seventy RNAz structure predictions fall into regions that show signs of selection pressure also at the sequence level (i.e., conserved elements). More than 700 predictions overlap with noncoding transcripts detected by oligonucleotide tiling arrays. One hundred seventy-five selected candidates were tested by RT-PCR in six tissues, and expression could be verified in 43 cases (24.6%).
Abstract:
The completion of the sequencing of the mouse genome promises to help predict human genes with greater accuracy. While current ab initio gene prediction programs are remarkably sensitive (i.e., they predict at least a fragment of most genes), their specificity is often low, predicting a large number of false-positive genes in the human genome. Sequence conservation at the protein level with the mouse genome can help eliminate some of those false positives. Here we describe SGP2, a gene prediction program that combines ab initio gene prediction with TBLASTX searches between two genome sequences to provide both sensitive and specific gene predictions. The accuracy of SGP2 when used to predict genes by comparing the human and mouse genomes is assessed on a number of data sets, including single-gene data sets, the highly curated human chromosome 22 predictions, and entire genome predictions from ENSEMBL. Results indicate that SGP2 outperforms purely ab initio gene prediction methods. Results also indicate that SGP2 works about as well with 3x shotgun data as it does with fully assembled genomes. SGP2 provides a high enough specificity that its predictions can be experimentally verified at a reasonable cost. SGP2 was used to generate a complete set of gene predictions for both the human and mouse genomes by comparing the two. Our results suggest that another few thousand human and mouse genes currently not in ENSEMBL are worth verifying experimentally.
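The core idea of combining ab initio scores with cross-species conservation can be sketched as follows. The scoring scheme and names here are invented for illustration and differ from SGP2's actual implementation:

```python
# Hedged sketch: rescore ab initio exon candidates by adding conservation
# support from cross-species alignment hits, then keep only candidates
# above a combined-score cutoff. Weights and scores are illustrative.

def rescore(exons, conservation_hits, weight=1.0, cutoff=2.0):
    """exons: {id: ab_initio_score};
    conservation_hits: {id: conservation_score from alignment searches}.
    Returns the ids whose combined score passes the cutoff."""
    kept = []
    for eid, score in exons.items():
        combined = score + weight * conservation_hits.get(eid, 0.0)
        if combined >= cutoff:
            kept.append(eid)
    return kept
```

Candidates with no conservation support keep only their ab initio score, so borderline predictions without cross-species evidence are the ones filtered out, which is how conservation raises specificity.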
Abstract:
The recent availability of the chicken genome sequence poses the question of whether there are human protein-coding genes conserved in chicken that are currently not included in the human gene catalog. Here, we show, using comparative gene finding followed by experimental verification of exon pairs by RT–PCR, that the addition to the multi-exonic subset of this catalog could be as little as 0.2%, suggesting that we may be closing in on the human gene set. Our protocol, however, has two shortcomings: (i) the bioinformatic screening of the predicted genes, applied to filter out false positives, cannot handle intronless genes; and (ii) the experimental verification could fail to identify expression at a specific developmental time. This highlights the importance of developing methods that could provide a reliable estimate of the number of these two types of genes.
Abstract:
In this work we propose a new automatic methodology for computing accurate digital elevation models (DEMs) in urban environments from low-baseline stereo pairs that shall be available in the future from a new kind of earth observation satellite. This setting makes both views of the scene similar, thus avoiding occlusions and illumination changes, which are the main disadvantages of the commonly accepted large-baseline configuration. Two crucial technological challenges still remain: (i) precisely estimating DEMs with strong discontinuities and (ii) providing a statistically proven result, automatically. The first is solved here by a piecewise affine representation that is well adapted to man-made landscapes, whereas the application of computational Gestalt theory introduces reliability and automation. In fact this theory allows us to reduce the number of parameters to be adjusted, and to control the number of false detections. This leads to the selection of a suitable segmentation into affine regions (whenever possible) by a novel and completely automatic perceptual grouping method. It also allows us to discriminate, e.g., vegetation-dominated regions, where such an affine model does not apply and a more classical correlation technique should be preferred. In addition, we propose an extension of the classical "quantized" Gestalt theory to continuous measurements, thus combining its reliability with the precision of the variational robust estimation and fine interpolation methods that are necessary in the low-baseline case. Such an extension is very general and will be useful for many other applications as well.
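The a-contrario control of false detections in computational Gestalt theory rests on the Number of False Alarms (NFA): a candidate is kept only if NFA = (number of tests) x (tail probability of the observed event) falls below a threshold eps, commonly eps = 1. A minimal sketch, with a generic binomial tail standing in for the paper's actual measurements:

```python
# Minimal a-contrario decision: an observation of k "aligned" samples
# out of n, each occurring by chance with probability p, is a meaningful
# detection when NFA = n_tests * P[B(n, p) >= k] <= eps.

from math import comb

def binomial_tail(n, k, p):
    """P[B(n, p) >= k]."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def is_meaningful(n_tests, n, k, p, eps=1.0):
    """Keep a detection only if its expected number of chance
    occurrences over all tests is at most eps."""
    return n_tests * binomial_tail(n, k, p) <= eps
```

With eps fixed, the only quantity to set is the number of tests, which is why the abstract says the theory reduces the number of parameters to be adjusted while bounding false detections.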