44 results for respiration, pattern recognition, machine learning, monitoring, biomedical signals
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Pattern-recognition receptors (PRRs) detect molecular signatures of microbes and initiate immune responses to infection. Prototypical PRRs such as Toll-like receptors (TLRs) signal via a conserved pathway to induce innate response genes. In contrast, the signaling pathways engaged by other classes of putative PRRs remain ill defined. Here, we demonstrate that the β-glucan receptor Dectin-1, a yeast-binding C-type lectin known to synergize with TLR2 to induce TNFα and IL-12, can also promote synthesis of IL-2 and IL-10 through phosphorylation of the membrane-proximal tyrosine in its cytoplasmic domain and recruitment of Syk kinase. Syk−/− dendritic cells (DCs) do not make IL-10 or IL-2 upon yeast stimulation but produce IL-12, indicating that the Dectin-1/Syk and Dectin-1/TLR2 pathways can operate independently. These results identify a novel signaling pathway involved in pattern recognition by C-type lectins and suggest a potential role for Syk kinase in the regulation of innate immunity.
Abstract:
Numerous techniques exist that can be used for the task of behavioural analysis and recognition, common amongst them Bayesian networks and hidden Markov models. Although these techniques are extremely powerful and well developed, both have important limitations. By fusing these techniques together to form Bayes-Markov chains, the advantages of both can be preserved while their limitations are reduced. The Bayes-Markov technique forms the basis of a common, flexible framework for supplementing Markov chains with additional features. This results in improved user output and aids the rapid development of flexible and efficient behaviour recognition systems.
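As a rough illustration of the idea, the sketch below supplements a discrete Markov chain with a naive-Bayes-style fusion of observation features at each step; the states, transition matrix and feature likelihoods are invented for the demo and are not taken from the paper.

```python
import numpy as np

# Hypothetical Bayes-Markov step: a Markov chain over discrete behaviour
# states, with each observation scored by a naive-Bayes style product of
# per-feature likelihoods (the "Bayesian" part of the fusion).

states = ["idle", "walking", "running"]           # illustrative states
T = np.array([[0.80, 0.15, 0.05],                 # transition matrix P(s_t | s_{t-1})
              [0.10, 0.80, 0.10],
              [0.05, 0.25, 0.70]])

# P(feature present | state) for two illustrative binary features.
feat_lik = {
    "low_speed": np.array([0.9, 0.3, 0.05]),
    "arm_swing": np.array([0.1, 0.7, 0.90]),
}

def bayes_markov_step(belief, observation):
    """One filtering step: Markov prediction, then Bayesian evidence fusion."""
    predicted = T.T @ belief                      # Markov chain prediction
    likelihood = np.ones(len(states))
    for name, present in observation.items():    # naive-Bayes feature fusion
        p = feat_lik[name]
        likelihood *= p if present else (1.0 - p)
    posterior = predicted * likelihood
    return posterior / posterior.sum()            # normalise to a distribution

belief = np.array([1/3, 1/3, 1/3])                # uniform initial belief
for obs in [{"low_speed": True, "arm_swing": False},
            {"low_speed": False, "arm_swing": True}]:
    belief = bayes_markov_step(belief, obs)
print(dict(zip(states, belief.round(3))))
```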
Abstract:
This paper describes a proposed new approach to knowledge processing in the computer network security intrusion detection systems (NIDS) application domain, focused on a topic-map-enabled representation of the features of the threat pattern space, together with knowledge of the situated efficacy of alternative candidate pattern-recognition algorithms within the NIDS domain. An integrative knowledge representation framework for virtualisation, data intelligence and learning-loop architecting in the NIDS domain is thus described, along with specific aspects of its deployment.
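The abstract does not give the framework's schema, but a minimal sketch of a topic-map-style structure, with topics for threat patterns and candidate algorithms and associations recording situated efficacy, might look as follows; all names, scores and fields are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical topic-map-style structure (not the paper's schema): topics for
# threat patterns and detection algorithms, with associations that record the
# situated efficacy of an algorithm against a pattern in a given context.

@dataclass
class Topic:
    name: str
    kind: str                      # e.g. "threat_pattern" or "algorithm"
    occurrences: dict = field(default_factory=dict)

@dataclass
class Association:
    pattern: Topic
    algorithm: Topic
    efficacy: float                # situated efficacy score, 0..1 (assumed)
    context: str                   # deployment situation the score applies to

port_scan = Topic("port_scan", "threat_pattern", {"source": "flow logs"})
svm = Topic("one_class_svm", "algorithm")
hmm = Topic("sequence_hmm", "algorithm")

knowledge = [
    Association(port_scan, svm, 0.72, "high-rate backbone traffic"),
    Association(port_scan, hmm, 0.61, "low-rate stealth scans"),
]

def best_algorithm(pattern_name, context_hint):
    """Learning-loop style lookup: best-scoring algorithm on record."""
    cands = [a for a in knowledge
             if a.pattern.name == pattern_name and context_hint in a.context]
    return max(cands, key=lambda a: a.efficacy, default=None)

print(best_algorithm("port_scan", "stealth").algorithm.name)
```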
Abstract:
A new class of shape features for region classification and high-level recognition is introduced. The novel Randomised Region Ray (RRR) features can be used to train binary decision trees for object category classification using an abstract representation of the scene. In particular, we address the problem of human detection using an over-segmented input image. We therefore do not rely on pixel values for training; instead, we design and train specialised classifiers on the sparse set of semantic regions which compose the image. Thanks to the abstract nature of the input, the trained classifier has the potential to be fast and applicable to extreme imagery conditions. We demonstrate and evaluate its performance in people detection using a pedestrian dataset.
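The abstract does not specify the exact form of the RRR features, so the sketch below is only a guess under the stated constraints: randomised rays sampled over a semantic region map (no pixel values) feeding a binary decision tree. The ray parametrisation and toy training data are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical ray parameters: each "ray" is a random direction and distance
# at which the semantic-region label map is sampled (no pixel values used).
N_RAYS = 32
angles = rng.uniform(0, 2 * np.pi, N_RAYS)
dists = rng.uniform(2, 15, N_RAYS)

def ray_features(label_map, cy, cx):
    """Sample region labels along randomised rays from a window centre."""
    h, w = label_map.shape
    feats = []
    for a, d in zip(angles, dists):
        y = int(np.clip(cy + d * np.sin(a), 0, h - 1))
        x = int(np.clip(cx + d * np.cos(a), 0, w - 1))
        # Feature: does the ray endpoint land in the centre's region?
        feats.append(float(label_map[y, x] == label_map[cy, cx]))
    return feats

# Toy data: random label maps standing in for over-segmented images.
X = [ray_features(rng.integers(0, 5, (40, 40)), 20, 20) for _ in range(100)]
y = rng.integers(0, 2, 100)            # binary person / not-person labels
clf = DecisionTreeClassifier(max_depth=6).fit(X, y)
```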
Abstract:
For general home monitoring, a system should automatically interpret people's actions. The system should be non-intrusive and able to deal with cluttered backgrounds and loose clothing. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied in this work, which has not previously been implemented in the literature for human action recognition on depth imagery. A novel human action dataset with depth data has been created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions, for a total of 2340 videos. To investigate the effect of using depth images and the proposed method, testing was conducted on three depth datasets, and the proposed method was compared to traditional Bag-of-Words methods. Results show that the proposed method improves recognition accuracy when depth is added to the conventional intensity data, and has advantages when dealing with long actions.
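As a rough sketch of the temporal-binning idea, the code below builds one BoW histogram per temporal bin and compares two videos with a standard dynamic time warping distance; the vocabulary size, bin count and local cost are illustrative choices, not the paper's.

```python
import numpy as np

# Illustrative sketch: BoW histograms computed per temporal bin, then
# compared with dynamic time warping (DTW) to restore temporal order.

K, N_BINS = 50, 4                      # assumed vocabulary and bin count

def binned_bow(word_ids, timestamps, duration):
    """One BoW histogram per temporal bin instead of one global histogram."""
    hists = np.zeros((N_BINS, K))
    for w, t in zip(word_ids, timestamps):
        b = min(int(N_BINS * t / duration), N_BINS - 1)
        hists[b, w] += 1
    norms = hists.sum(axis=1, keepdims=True)
    return hists / np.maximum(norms, 1)

def dtw_distance(a, b):
    """Classic DTW over per-bin histogram sequences (Euclidean local cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(1)
v1 = binned_bow(rng.integers(0, K, 200), rng.uniform(0, 5, 200), 5.0)
v2 = binned_bow(rng.integers(0, K, 300), rng.uniform(0, 8, 300), 8.0)
print(dtw_distance(v1, v2))   # small distance = similar temporal word usage
```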
Abstract:
Dendritic cells (DC) can produce Th-polarizing cytokines and direct the class of the adaptive immune response. Microbial stimuli, cytokines, chemokines, and T cell-derived signals have all been shown to trigger cytokine synthesis by DC, but it remains unclear whether these signals are functionally equivalent and whether they determine the nature of the cytokine produced or simply initiate a preprogrammed pattern of cytokine production, which may be DC subtype specific. Here, we demonstrate that microbial and T cell-derived stimuli can synergize to induce production of high levels of IL-12 p70 or IL-10 by individual murine DC subsets but that the choice of cytokine is dictated by the microbial pattern recognition receptor engaged. We show that bacterial components such as CpG-containing DNA or extracts from Mycobacterium tuberculosis predispose CD8alpha(+) and CD8alpha(-)CD4(-) DC to make IL-12 p70. In contrast, exposure of CD8alpha(+), CD4(+) and CD8alpha(-)CD4(-) DC to heat-killed yeasts leads to production of IL-10. In both cases, secretion of high levels of cytokine requires a second signal from T cells, which can be replaced by CD40 ligand. Consistent with their differential effects on cytokine production, extracts from M. tuberculosis promote IL-12 production primarily via Toll-like receptor 2 and an MyD88-dependent pathway, whereas heat-killed yeasts activate DC via a Toll-like receptor 2-, MyD88-, and Toll/IL-1R domain-containing protein-independent pathway. These results show that T cell feedback amplifies innate signals for cytokine production by DC and suggest that pattern recognition rather than ontogeny determines the production of cytokines by individual DC subsets.
Abstract:
Flood modelling of urban areas is still at an early stage, partly because until recently topographic data of sufficiently high resolution and accuracy have been lacking in urban areas. However, Digital Surface Models (DSMs) generated from airborne scanning laser altimetry (LiDAR) having sub-metre spatial resolution have now become available, and these are able to represent the complexities of urban topography. The paper describes the development of a LiDAR post-processor for urban flood modelling based on the fusion of LiDAR and digital map data. The map data are used in conjunction with LiDAR data to identify different object types in urban areas, though pattern recognition techniques are also employed. Post-processing produces a Digital Terrain Model (DTM) for use as model bathymetry, and also a friction parameter map for use in estimating spatially distributed friction coefficients. In vegetated areas, friction is estimated from LiDAR-derived vegetation height, and (unlike most vegetation removal software) the method copes with short vegetation less than ~1 m high, which may occupy a substantial fraction of even an urban floodplain. The DTM and friction parameter map may also be used to help generate an unstructured mesh of a vegetated urban floodplain for use by a 2D finite element model. The mesh is decomposed to reflect floodplain features having different frictional properties to their surroundings, including urban features such as buildings and roads as well as taller vegetation features such as trees and hedges. This allows a more accurate estimation of local friction. The method produces a substantial node density due to the small dimensions of many urban features.
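A minimal sketch of what such a friction parameter map might look like, assuming a simple lookup from vegetation height (DSM minus DTM) to Manning's n per cell; the class thresholds and roughness values below are placeholders, not the paper's calibration.

```python
import numpy as np

def friction_map(dsm, dtm, building_mask, road_mask):
    """Per-cell Manning's n from vegetation height and map-derived masks.
    Thresholds and n values are illustrative assumptions only."""
    veg_height = np.clip(dsm - dtm, 0.0, None)
    n = np.full(dsm.shape, 0.03)                # default: bare ground
    n[veg_height > 0.05] = 0.05                 # short vegetation (< ~1 m)
    n[veg_height > 1.0] = 0.10                  # tall vegetation / hedges
    n[veg_height > 5.0] = 0.15                  # trees
    n[road_mask] = 0.02                         # paved surfaces from map data
    n[building_mask] = np.nan                   # buildings excluded from flow
    return n

dsm = np.array([[10.2, 12.0], [10.05, 16.0]])
dtm = np.full((2, 2), 10.0)
print(friction_map(dsm, dtm, np.zeros((2, 2), bool), np.zeros((2, 2), bool)))
```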
Abstract:
Light Detection And Ranging (LIDAR) data for terrain and land surveying has contributed to many environmental, engineering and civil applications. However, the analysis of Digital Surface Models (DSMs) from complex LIDAR data is still challenging. Commonly, the first task in investigating LIDAR point clouds is to separate ground and object points as a preparatory step for further object classification. In this paper, the authors present a novel unsupervised segmentation algorithm, skewness balancing, which exploits statistical moments to separate object and ground points efficiently from high-resolution LIDAR point clouds. The results presented in this paper demonstrate its robustness and its potential for commercial applications.
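A compact sketch of skewness balancing as described: the highest returns are peeled off until the skewness of the remaining height distribution is no longer positive, leaving ground points. The synthetic data and exact stopping rule are illustrative.

```python
import numpy as np
from scipy.stats import skew

def skewness_balancing(z):
    """Split heights into (ground, objects) by balancing the skewness.
    Removes the highest point while the remaining distribution is
    positively skewed; a sketch of the idea, not the published code."""
    z = np.sort(np.asarray(z, dtype=float))
    hi = len(z)
    while hi > 2 and skew(z[:hi]) > 0:
        hi -= 1                        # drop the current highest point
    return z[:hi], z[hi:]              # ground points, object points

# Synthetic scene: flat ground near 10 m plus elevated object returns.
heights = np.concatenate([np.random.default_rng(2).normal(10, 0.3, 900),
                          np.random.default_rng(3).normal(18, 2.0, 100)])
ground, objects = skewness_balancing(heights)
print(len(ground), len(objects))
```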
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art is the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation less than, say, 1 m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture; typically, most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it is worth testing further.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem, how best to merge historic river cross-section data with a LiDAR DTM, will also be considered.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that, for example, hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc., as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes; however, the mesh generated may be useful in allowing a high-resolution FE model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed is how best to exploit the data redundancy arising from the high resolution of the LiDAR relative to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size. For a 5 m-wide embankment within a raster grid model with a 15 m cell size, for example, the maximum height of the embankment locally could be assigned to each cell covering the embankment; but how could a 5 m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
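To make the embankment example concrete, the sketch below aggregates a fine LiDAR height grid to coarse model cells by block maximum, so a thin embankment survives the aggregation; the grid sizes are illustrative, and the ditch question raised above is deliberately left open.

```python
import numpy as np

def aggregate_max(lidar, factor):
    """Block-maximum aggregation of a fine height grid to coarse model cells,
    so narrow raised features (e.g. embankments) still block flow."""
    h, w = lidar.shape
    lidar = lidar[: h - h % factor, : w - w % factor]
    blocks = lidar.reshape(h // factor, factor, w // factor, factor)
    return blocks.max(axis=(1, 3))

fine = np.zeros((90, 90))            # 0.5 m cells (assumed)
fine[:, 44:54] = 2.0                 # a 2 m-high, 5 m-wide embankment
coarse = aggregate_max(fine, 30)     # 30x30 fine cells per 15 m model cell
print(coarse)                        # the embankment survives aggregation
```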
Abstract:
Acute doses of Ginkgo biloba have been shown to improve attention and memory in young, healthy participants, but there has been a lack of investigation into possible effects on executive function. In addition, only one study has investigated the effects of chronic treatment in young volunteers. This study was conducted to compare the effects of ginkgo after acute and chronic treatment on tests of attention, memory and executive function in healthy university students. Using a placebo-controlled, double-blind design, in experiment 1, 52 students were randomly allocated to receive a single dose of ginkgo (120 mg, n=26) or placebo (n=26) and were tested 4 h later. In experiment 2, 40 students were randomly allocated to receive ginkgo (120 mg/day, n=20) or placebo (n=20) for a 6-week period and were tested at baseline and after 6 weeks of treatment. In both experiments, participants underwent tests of sustained attention, episodic and working memory, mental flexibility and planning, and completed mood rating scales. The acute dose of ginkgo significantly improved performance on the sustained-attention and pattern-recognition memory tasks; however, there were no effects on working memory, planning, mental flexibility or mood. After 6 weeks of treatment, there were no significant effects of ginkgo on mood or any of the cognitive tests. In line with the literature, ginkgo improved performance in tests of attention and memory after acute administration. However, there were no effects after 6 weeks, suggesting that tolerance develops to these effects in young, healthy participants.
Abstract:
We argue that hyper-systemizing predisposes individuals to show talent, and review evidence that hyper-systemizing is part of the cognitive style of people with autism spectrum conditions (ASC). We then clarify the hyper-systemizing theory, contrasting it with the weak central coherence (WCC) and executive dysfunction (ED) theories. The ED theory has difficulty explaining the existence of talent in ASC. While both the hyper-systemizing and WCC theories postulate excellent attention to detail, excellent attention to detail by itself will not produce talent. By contrast, the hyper-systemizing theory argues that the excellent attention to detail is directed towards detecting 'if p, then q' rules (or [input-operation-output] reasoning). Such law-based pattern recognition systems can produce talent in systemizable domains. Finally, we argue that the excellent attention to detail in ASC is itself a consequence of sensory hypersensitivity. We review an experiment from our laboratory demonstrating sensory hypersensitivity in visual detection thresholds. We conclude that the origins of the association between autism and talent begin at the sensory level, include excellent attention to detail and end with hyper-systemizing.
Abstract:
The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent, is tracking (or picking) seismic events. In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tools available in modern interpretation software packages often employ artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably the data patterns characterise these horizons. While seismic attributes are commonly used to characterise the amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation method, and demonstrates how the complementarity of seismic attributes and raw data can be exploited in conjunction with other geological information in a fuzzy inference system (FIS) to achieve enhanced auto-tracking performance.
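The paper's FIS is not specified in the abstract, but a toy fuzzy fusion of two tracker confidences (one ANN fed seismic attributes, one fed raw amplitude samples) might look like the following; the membership shapes and rule centroids are assumptions for illustration.

```python
# Toy fuzzy inference step fusing two pick confidences. Membership
# functions and rule centroids are illustrative assumptions only.

def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuse_pick_confidence(attr_score, raw_score):
    """Rules: both high -> accept; disagree -> defer; both low -> reject."""
    high = lambda s: tri(s, 0.4, 1.0, 1.6)    # 'high confidence' membership
    low = lambda s: tri(s, -0.6, 0.0, 0.6)    # 'low confidence' membership
    accept = min(high(attr_score), high(raw_score))
    reject = min(low(attr_score), low(raw_score))
    defer = max(min(high(attr_score), low(raw_score)),
                min(low(attr_score), high(raw_score)))
    # Defuzzify with rule centroids at 1.0 (accept), 0.5 (defer), 0.0 (reject).
    total = accept + defer + reject
    return (1.0 * accept + 0.5 * defer) / total if total else 0.5

print(fuse_pick_confidence(0.9, 0.8))   # agreement -> confident pick
print(fuse_pick_confidence(0.9, 0.1))   # conflicting evidence -> defer
```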