959 results for False anglicisms


Relevance: 10.00%

Abstract:

Various fall-detection solutions have previously been proposed to create reliable surveillance systems for elderly people, with high requirements on accuracy, sensitivity and specificity. In this paper, an enhanced fall detection system for elderly person monitoring is proposed, based on smart sensors worn on the body and operating through consumer home networks. With treble thresholds, accidental falls can be detected in the home healthcare environment. By utilizing information gathered from an accelerometer, a cardiotachometer and smart sensors, the impacts of falls can be logged and distinguished from normal daily activities. The proposed design has been implemented in a prototype system, as detailed in this paper. In a test group of 30 healthy participants, the proposed fall detection system achieved a detection accuracy of 97.5%, with sensitivity and specificity of 96.8% and 98.1% respectively. The system can therefore be developed into a consumer product for use as an elderly person monitoring device with high accuracy and a low false positive rate.
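
The abstract does not spell out what the three ("treble") thresholds are; one common arrangement in threshold-based detectors checks for a free-fall dip, an impact spike and post-impact inactivity in the acceleration magnitude. The sketch below is a minimal illustration under that assumption only; the threshold values, window lengths and sampling rate are hypothetical, not taken from the paper.

    import math

    # Hypothetical thresholds (g units) -- illustrative only, not from the paper.
    FREE_FALL_G = 0.4      # magnitude drops below this during free fall
    IMPACT_G = 2.5         # magnitude exceeds this on impact
    INACTIVITY_G = 0.15    # deviation from 1 g stays below this while lying still

    def magnitude(sample):
        """Acceleration magnitude in g from a 3-axis accelerometer sample."""
        x, y, z = sample
        return math.sqrt(x * x + y * y + z * z)

    def detect_fall(samples, fs=50):
        """Flag a fall if a free-fall dip, an impact spike and a period of
        inactivity occur in that order within a short window."""
        mags = [magnitude(s) for s in samples]
        for i, m in enumerate(mags):
            if m < FREE_FALL_G:                                  # 1) free fall
                window = mags[i:i + 2 * fs]                      # next ~2 s
                if any(v > IMPACT_G for v in window):            # 2) impact
                    tail = mags[i + 2 * fs:i + 2 * fs + 5 * fs]  # following ~5 s
                    if tail and all(abs(v - 1.0) < INACTIVITY_G for v in tail):
                        return True                              # 3) inactivity
        return False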

Relevance: 10.00%

Abstract:

Paraconsistent logics are non-classical logics which allow non-trivial and consistent reasoning about inconsistent axioms. They have been proposed as a formal basis for handling inconsistent data, as commonly arise in human enterprises, and as methods for fuzzy reasoning, with applications in Artificial Intelligence and the control of complex systems. Formalisations of paraconsistent logics usually require heroic mathematical efforts to provide a consistent axiomatisation of an inconsistent system. Here we use transreal arithmetic, which is known to be consistent, to arithmetise a paraconsistent logic. This is theoretically simple and should lead to efficient computer implementations. We introduce the metalogical principle of monotonicity, which is a very simple way of making logics paraconsistent. Our logic has dialetheic truth values which are both False and True. It allows contradictory propositions and variable contradictions, but blocks literal contradictions. Thus literal reasoning, in this logic, forms an on-the-fly, syntactic partition of the propositions into internally consistent sets. We show how the set of all paraconsistent possible worlds can be represented in a transreal space. During the development of our logic we discuss how other paraconsistent logics could be arithmetised in transreal arithmetic.
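
The abstract's key premise is that transreal arithmetic is total and consistent: division is defined for every pair of numbers, with k/0 = +infinity for k > 0, -infinity for k < 0, and 0/0 = Phi (nullity), which absorbs further operations. The sketch below illustrates that totality only; the mapping from transreal values to the paper's paraconsistent truth values is not given in the abstract, so none is assumed here.

    # Minimal transreal division: total (never raises), with nullity for 0/0.
    # Illustrative only; the paper's full arithmetisation of the logic is richer.
    INF = float("inf")
    PHI = "nullity"          # transreal nullity (Phi); kept distinct from IEEE NaN

    def tr_div(a, b):
        """Divide two transreal numbers; every input pair has a result."""
        if a == PHI or b == PHI:
            return PHI                      # nullity absorbs all operations
        if b == 0:
            if a == 0:
                return PHI                  # 0 / 0 = Phi
            return INF if a > 0 else -INF   # k / 0 = +/- infinity
        if b in (INF, -INF):
            return PHI if a in (INF, -INF) else 0.0
        return a / b

    print(tr_div(1, 0))    # inf
    print(tr_div(-3, 0))   # -inf
    print(tr_div(0, 0))    # nullity
    print(tr_div(6, 3))    # 2.0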

Relevance: 10.00%

Abstract:

Two experiments examined the extent to which erroneous recall blocks veridical recall, using, as a vehicle for study, the disruptive impact of distractors that are semantically similar to a list of words presented for free recall. Instructing participants to avoid erroneous recall of to-be-ignored spoken distractors attenuated recall of those distractors, but this did not influence their disruptive effect on veridical recall (Experiment 1). Using an externalised output-editing procedure, whereby participants recalled all items that came to mind and identified those that were erroneous, the usual between-sequence semantic similarity effect on erroneous and veridical recall was replicated, but the relationship between the rates of erroneous and veridical recall was weak (Experiment 2). The results suggest that forgetting is not due to veridical recall being blocked by similar events.

Relevance: 10.00%

Abstract:

The old scholastic principle of the "convertibility" of being and goodness strikes nearly all moderns as either barely comprehensible or plain false. "Convertible" is a term of art meaning "interchangeable" in respect of predication, where the predicates can be exchanged salva veritate albeit not salva sensu: their referents are, as the maxim goes, really the same albeit conceptually different. The principle seems, at first blush, absurd. Did the scholastics literally mean that every being is good? Is that supposed to include a cancer, a malaria parasite, an earthquake that kills millions? If every being is good, then no being is bad—but how can that be? To the contemporary philosophical mind, such bafflement is understandable. It derives from the systematic dismantling of the great scholastic edifice that took place over half a millennium. With the loss of the basic concepts out of which that edifice was built, the space created by those concepts faded out of existence as well. The convertibility principle, like virtually all the other scholastic principles (not all, since some do survive and thrive in analytic philosophy), could not persist in a post-scholastic space wholly alien to it.

Relevance: 10.00%

Abstract:

John Broome has argued that value incommensurability is vagueness, by appeal to a controversial ‘collapsing principle’ about comparative indeterminacy. I offer a new counterexample to the collapsing principle. That principle allows us to derive an outright contradiction from the claim that some object is a borderline case of some predicate. But if there are no borderline cases, then the principle is empty. The collapsing principle is either false or empty.

Relevance: 10.00%

Abstract:

Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presented images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, a facial expression change pattern alternating between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no-face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. Main results. The results showed that interference from adjacent stimuli, and the annoyance and fatigue experienced by the subjects, could be reduced significantly (p < 0.05) by using the facial expression change pattern in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
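
The abstract reports information transfer rate but does not define it; a common definition in the BCI literature is the Wolpaw bit rate, sketched below on the assumption that each selection chooses one of N equiprobable targets (the paper may use a different variant, such as a practical bit rate).

    import math

    def wolpaw_itr(n_targets, accuracy, seconds_per_selection):
        """Information transfer rate in bits per minute (Wolpaw definition)."""
        n, p = n_targets, accuracy
        if p <= 0 or p >= 1:
            bits = math.log2(n) if p == 1 else 0.0
        else:
            bits = (math.log2(n) + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits * 60.0 / seconds_per_selection

    # Example: 36-target speller, 90% accuracy, 10 s per selection.
    print(round(wolpaw_itr(36, 0.90, 10.0), 2))  # bits/min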

Relevance: 10.00%

Abstract:

Interferences from spatially adjacent non-target stimuli evoke ERPs during non-target sub-trials and lead to false positives. This phenomenon is commonly seen in visual attention-based BCIs and affects the performance of the BCI system. Although users tried to focus on the target stimulus, they still could not help being affected by conspicuous changes of the stimuli (flashes or presented images) adjacent to the target stimulus. In view of this, the aim of this study is to reduce adjacent interference using a new stimulus presentation pattern based on facial expression changes. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, two conditions (Pattern_1, Pattern_2) were compared on objective measures such as classification accuracy and information transfer rate as well as subjective measures. Pattern_1 was a “flash-only” pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change pattern, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that interference from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change pattern. The online performance of the BCI system using the facial expression change pattern was significantly better than that using the “flash-only” pattern in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue were significantly decreased (p < 0.05) using the new stimulus presentation pattern presented in this paper.
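
A minimal sketch of how the two conditions could be scheduled in software, purely for illustration: the image names, timings and the show() helper are hypothetical placeholders, not details from the paper.

    import random
    import time

    # Hypothetical assets and timings -- illustrative only.
    POSITIVE_FACE = "face_positive.png"   # background image in Pattern_2
    NEGATIVE_FACE = "face_negative.png"   # stimulus image in Pattern_2
    STIM_DURATION = 0.2                   # seconds a stimulus stays on
    ISI = 0.1                             # inter-stimulus interval

    def show(location, image):
        """Placeholder for the real rendering call of a presentation toolkit."""
        print(f"t={time.monotonic():.3f}  location {location}: {image}")

    def run_sub_trial(locations, target, pattern):
        """Intensify one randomly chosen location per sub-trial."""
        flashed = random.choice(locations)
        if pattern == "flash_only":            # Pattern_1: plain flash
            show(flashed, "flash")
        else:                                  # Pattern_2: expression change
            show(flashed, NEGATIVE_FACE)       # positive -> negative face
        time.sleep(STIM_DURATION)
        show(flashed, POSITIVE_FACE if pattern != "flash_only" else "idle")
        time.sleep(ISI)
        return flashed == target               # ERP expected only for the target

    hit = run_sub_trial(list(range(6)), target=2, pattern="expression_change")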

Relevance: 10.00%

Abstract:

The incorporation of numerical weather predictions (NWP) into a flood warning system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient as it involves considerable non-predictable uncertainties and can lead to a high number of false or missed warnings. Weather forecasts from multiple NWP systems at various weather centres, coupled with catchment hydrology, can provide significantly improved early flood warning. The availability of global ensemble weather prediction systems through the ‘THORPEX Interactive Grand Global Ensemble’ (TIGGE) offers a new opportunity for the development of state-of-the-art early flood forecasting systems. This paper presents a case study using the TIGGE database for flood warning on a meso-scale catchment (4062 km2) located in the Midlands region of England. For the first time, a research attempt is made to set up a coupled atmospheric-hydrologic-hydraulic cascade system driven by the TIGGE ensemble forecasts. A probabilistic discharge and flood inundation forecast is provided as the end product to study the potential benefits of using the TIGGE database. The study shows that precipitation input uncertainties dominate and propagate through the cascade chain. The current NWPs fall short of representing the spatial precipitation variability on such a comparatively small catchment, which indicates a need to improve NWP resolution and/or disaggregation techniques to narrow the spatial gap between meteorology and hydrology. The spread of discharge forecasts varies from centre to centre, but it is generally large and implies a significant level of uncertainty. Nevertheless, the results show the TIGGE database is a promising tool for forecasting flood inundation, comparable with forecasts driven by rain gauge observations.
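
The paper's cascade couples full atmospheric, hydrologic and hydraulic models; as a toy illustration of how ensemble precipitation uncertainty propagates into a probabilistic discharge forecast, the sketch below runs each ensemble member through a single linear-reservoir rainfall-runoff model (all coefficients and the threshold are invented for illustration).

    import random

    def linear_reservoir(rain_mm, k=0.3, runoff_coeff=0.6, q0=5.0):
        """Toy rainfall-runoff model: storage drains at rate k per time step."""
        storage, discharge = q0 / k, []
        for r in rain_mm:
            storage += runoff_coeff * r      # rainfall adds to storage
            q = k * storage                  # outflow proportional to storage
            storage -= q
            discharge.append(q)
        return discharge

    # Fake 51-member precipitation ensemble for a 10-step event (mm per step).
    ensemble = [[max(0.0, random.gauss(8.0, 4.0)) for _ in range(10)]
                for _ in range(51)]

    # Propagate every member through the hydrological model.
    peaks = [max(linear_reservoir(member)) for member in ensemble]

    # Probability of exceeding a (hypothetical) flood-warning threshold.
    THRESHOLD = 12.0
    prob = sum(p > THRESHOLD for p in peaks) / len(peaks)
    print(f"P(peak discharge > {THRESHOLD}) = {prob:.2f}")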

Relevance: 10.00%

Abstract:

Contamination of the electroencephalogram (EEG) by artifacts greatly reduces the quality of the recorded signals, so there is a need for automated artifact removal methods. However, such methods are rarely evaluated against one another via rigorous criteria, with results often presented based upon visual inspection alone. This work presents a comparative study of automatic methods for removing blink, electrocardiographic, and electromyographic artifacts from the EEG. Three methods are considered: wavelet-, blind source separation (BSS)-, and multivariate singular spectrum analysis (MSSA)-based correction. These are applied to data sets containing mixtures of artifacts, and metrics are devised to measure the performance of each method. The BSS method is seen to be the best approach for artifacts of high signal-to-noise ratio (SNR). By contrast, MSSA performs well at low SNRs, but at the expense of a large number of false positive corrections.
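
As an illustration of one common BSS-style correction (independent component analysis with zeroing of an artifact component), here is a minimal sketch using scikit-learn's FastICA; the specific BSS algorithm, the component-selection rule and the EOG reference channel used below are assumptions, not details from the paper.

    import numpy as np
    from sklearn.decomposition import FastICA

    def remove_blink_component(eeg, eog, n_components=None):
        """eeg: (n_samples, n_channels) array; eog: (n_samples,) blink reference.

        Decompose the EEG into independent components, zero the component most
        correlated with the EOG reference, and reconstruct the cleaned EEG."""
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(eeg)                    # (n_samples, n_comp)
        corr = [abs(np.corrcoef(sources[:, i], eog)[0, 1])
                for i in range(sources.shape[1])]
        sources[:, int(np.argmax(corr))] = 0.0              # drop blink component
        return ica.inverse_transform(sources)

    # Example with synthetic data: 4 channels, one contaminated by a "blink".
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)
    blink = (np.sin(2 * np.pi * 0.3 * t) > 0.99).astype(float) * 50.0
    eeg = rng.normal(size=(2000, 4))
    eeg[:, 0] += blink                                      # contaminate channel 0
    cleaned = remove_blink_component(eeg, eog=blink)
    print(cleaned.shape)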

Relevance: 10.00%

Abstract:

Background: MS-based proteomics was applied to the analysis of the medicinal plant Artemisia annua, exploiting a recently published contig sequence database (Graham et al. (2010) Science 327, 328–331) and other genomic and proteomic sequence databases for comparison. A. annua is the predominant natural source of artemisinin, the precursor for artemisinin-based combination therapies (ACTs), which are the WHO-recommended treatment for P. falciparum malaria. Results: The comparison of various databases containing A. annua sequences (NCBInr/viridiplantae, UniProt/viridiplantae, UniProt/A. annua, an A. annua trichome Trinity contig database, the above contig database and another A. annua EST database) revealed significant differences with respect to their suitability for proteomic analysis, showing that an organism-specific database that has undergone extensive curation, leading to longer contig sequences, can greatly increase the number of true positive protein identifications while reducing the number of false positives. Compared to previously published data, an order of magnitude more proteins were identified from trichome-enriched A. annua samples, including proteins known to be involved in the biosynthesis of artemisinin, as well as other highly abundant proteins, which suggest additional enzymatic processes occurring within the trichomes that are important for the biosynthesis of artemisinin. Conclusions: The newly gained information allows for the possibility of an enzymatic pathway, utilizing peroxidases, for the less well understood final stages of artemisinin’s biosynthesis, as an alternative to the known non-enzymatic in vitro conversion of dihydroartemisinic acid to artemisinin. Data are available via ProteomeXchange with identifier PXD000703.
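
The abstract does not say how true and false positive identifications were counted; in MS-based proteomics the usual device is target-decoy searching, where the false discovery rate at a score threshold is estimated from the number of decoy hits. The sketch below illustrates that standard estimate only, under the assumption of a concatenated target-decoy search; it is not presented as the paper's method.

    def fdr_at_threshold(psms, threshold):
        """Estimate FDR for peptide-spectrum matches scoring above a threshold.

        psms: iterable of (score, is_decoy) pairs from a concatenated
        target-decoy database search. FDR ~ #decoys / #targets above threshold."""
        targets = sum(1 for s, is_decoy in psms if s >= threshold and not is_decoy)
        decoys = sum(1 for s, is_decoy in psms if s >= threshold and is_decoy)
        return decoys / targets if targets else 0.0

    # Toy example: higher scores are better; decoys cluster at low scores.
    example = [(52.1, False), (48.7, False), (45.0, True), (44.2, False),
               (30.5, True), (28.9, False), (25.0, True)]
    print(fdr_at_threshold(example, 40.0))  # 1 decoy / 3 targets ~ 0.33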

Relevance: 10.00%

Abstract:

People are often exposed to more information than they can actually remember. Despite this frequent form of information overload, little is known about how much information people choose to remember. Using a novel “stop” paradigm, the current research examined whether and how people choose to stop receiving new—possibly overwhelming—information with the intent to maximize memory performance. Participants were presented with a long list of items and were rewarded for the number of correctly remembered words in a following free recall test. Critically, participants in a stop condition were provided with the option to stop the presentation of the remaining words at any time during the list, whereas participants in a control condition were presented with all items. Across five experiments, we found that participants tended to stop the presentation of the items to maximize the number of recalled items, but this decision ironically led to decreased memory performance relative to the control group. This pattern was consistent even after controlling for possible confounding factors (e.g., task demands). The results indicated a general, false belief that we can remember a larger number of items if we restrict the quantity of learning materials. These findings suggest people have an incomplete understanding of how we remember excessive amounts of information.

Relevance: 10.00%

Abstract:

We provide a new legal perspective on the antitrust analysis of margin squeeze conduct. Building on recent economic analysis, we explain why margin squeeze conduct should be evaluated solely under adjusted predatory pricing standards. The adjustment corresponds to an increase in the cost benchmark used in the predatory pricing test to include the opportunity cost of missed upstream sales. This can reduce both the risk of false positives and that of false negatives in margin squeeze cases. We justify this approach by explaining why the classic arguments against above-cost predatory pricing typically do not hold in vertical structures where margin squeezes take place, and by presenting case law evidence supporting this adjustment. Our approach can help to reconcile the divergent US and EU antitrust stances on margin squeeze.
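
The abstract does not give a formal test; on one reading of the adjustment, a hypothetical worked example (all figures invented for illustration) runs as follows.

    # Hypothetical figures, per downstream unit -- purely illustrative.
    upstream_cost = 10.0      # cost of producing the input
    wholesale_price = 20.0    # price rivals pay for that input
    downstream_cost = 5.0     # integrated firm's own downstream cost
    retail_price = 22.0       # integrated firm's downstream price

    # Classic predatory pricing benchmark: incremental cost of the retail product.
    classic_benchmark = upstream_cost + downstream_cost            # 15.0
    # Adjusted benchmark: add the upstream margin foregone on a missed input sale.
    opportunity_cost = wholesale_price - upstream_cost             # 10.0
    adjusted_benchmark = classic_benchmark + opportunity_cost      # 25.0

    print(retail_price >= classic_benchmark)    # True  -> no predation found
    print(retail_price >= adjusted_benchmark)   # False -> squeeze flagged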

Relevance: 10.00%

Abstract:

This paper reports findings on the pragmatic abilities of Greek-speaking children with autism spectrum disorders (ASD). Twenty high-functioning children with ASD and their age- and vocabulary-matched typically developing controls were administered a pragmatics task. The task was based on the Diagnostic Evaluation of Language Variation (DELV) in the context of a larger study targeting the grammar of Greek-speaking children with autism, and assessed the children’s abilities in communicative role taking, narrative, and question asking. The children with ASD showed an uneven profile in their pragmatic abilities. The two groups did not differ in communicative role taking and question asking. However, the children with ASD had difficulties on the narrative task, and more specifically, on the items assessing reference contrast and temporal links. Yet, they performed similarly on the mental state representation and false belief items. Despite their good performance on mental states and false beliefs, the ASD children’s lower performance on reference contrast can be interpreted via Theory of Mind deficits if we assume that the former involve an additional level of complexity; namely, quantifying the amount of information available to the listener. Lower performance on temporal links is in line with the ASD children’s attested difficulties in organizing events into a coherent gist. Their overall profile, and, in particular, the dissociation between the different sections of the task, does not support single-deficit accounts. Rather, it indicates that the deficits of individuals with ASD stem from distinct deficits in core cognitive processes (Happé & Frith, 2006).

Relevance: 10.00%

Abstract:

Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognises no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in solver (B&B), and it is what can be recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
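
A minimal sketch of the query logic described above: an element is reported as present only if the yes-filter accepts it and the no-filter (populated with known false positives of the yes-filter) does not. The hashing scheme, filter sizes and example items are illustrative choices, not those of the paper.

    import hashlib

    class BloomFilter:
        """Plain Bloom filter with k double-hashed positions over m bits."""
        def __init__(self, m_bits, k_hashes):
            self.m, self.k = m_bits, k_hashes
            self.bits = bytearray(m_bits)

        def _positions(self, item):
            digest = hashlib.sha256(item.encode()).digest()
            h1 = int.from_bytes(digest[:8], "big")
            h2 = int.from_bytes(digest[8:16], "big") | 1
            return [(h1 + i * h2) % self.m for i in range(self.k)]

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = 1

        def __contains__(self, item):
            return all(self.bits[pos] for pos in self._positions(item))

    class YesNoBloomFilter:
        """Yes-filter stores the set; no-filter stores chosen false positives."""
        def __init__(self, yes, no):
            self.yes, self.no = yes, no

        def __contains__(self, item):
            # Accept only if the yes-filter says yes and the no-filter does not veto.
            return item in self.yes and item not in self.no

    # Usage: build the yes-filter, then record observed false positives in the no-filter.
    yes = BloomFilter(m_bits=64, k_hashes=3)
    no = BloomFilter(m_bits=32, k_hashes=3)
    for word in ["alpha", "beta", "gamma"]:
        yes.add(word)
    for fp in ["delta"]:          # suppose "delta" turned out to be a yes-filter false positive
        if fp in yes:
            no.add(fp)
    ynbf = YesNoBloomFilter(yes, no)
    print("alpha" in ynbf, "delta" in ynbf)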