901 results for probabilistic roadmap


Relevance: 10.00%

Abstract:

Language processing is an example of implicit learning of multiple statistical cues that provide probabilistic information regarding word structure and use. Much of the current debate about language embodiment is devoted to how action words are represented in the brain, with motor cortex activity evoked by these words assumed to selectively reflect conceptual content and/or its simulation. We investigated whether motor cortex activity evoked by manual action words (e.g., caress) might reflect sensitivity to probabilistic orthographic-phonological cues to grammatical category embedded within individual words. We first review neuroimaging data demonstrating that nonwords evoke activity much more reliably than action words along the entire motor strip, encompassing regions proposed to be action-category specific. Using fMRI, we found that disyllabic words denoting manual actions evoked increased motor cortex activity compared with non-body-part-related words (e.g., canyon), activity that overlaps with that evoked by observing and executing hand movements. This result is typically interpreted in support of language embodiment. Crucially, we also found that disyllabic nonwords containing endings with probabilistic cues predictive of verb status (e.g., -eve) evoked increased activity compared with nonwords with endings predictive of noun status (e.g., -age) in the identical motor area. Thus, motor cortex responses to action words cannot be assumed to selectively reflect conceptual content and/or its simulation. Our results clearly demonstrate that motor cortex activity reflects implicit processing of ortho-phonological statistical regularities that help to distinguish a word's grammatical class.

Relevance: 10.00%

Abstract:

Genetic analysis of diffusion tensor images (DTI) shows great promise in revealing specific genetic variants that affect brain integrity and connectivity. Most genetic studies of DTI analyze voxel-based diffusivity indices in the image space (such as 3D maps of fractional anisotropy) and overlook tract geometry. Here we propose an automated workflow to cluster fibers using a white matter probabilistic atlas and perform genetic analysis on the shape characteristics of fiber tracts. We apply our approach to a large study of 4-Tesla high angular resolution diffusion imaging (HARDI) data from 198 healthy, young adult twins (age: 20-30). Illustrative results show heritability for the shapes of several major tracts, presented as color-coded maps.
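
The fiber-clustering step above can be illustrated with a small sketch: assign each streamline to the atlas tract whose probability map is, on average, highest along the streamline's path. The atlas array layout, streamline format and tract labels below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def assign_streamlines_to_tracts(streamlines, atlas_probs, tract_names):
    """Assign each streamline to the atlas tract with the highest mean
    probability along its path.

    streamlines : list of (N_i, 3) integer arrays of voxel coordinates
    atlas_probs : (n_tracts, X, Y, Z) array of per-voxel tract probabilities
    tract_names : list of n_tracts tract labels
    """
    labels = []
    for sl in streamlines:
        x, y, z = sl[:, 0], sl[:, 1], sl[:, 2]
        # Mean atlas probability of each candidate tract along this streamline.
        mean_p = atlas_probs[:, x, y, z].mean(axis=1)
        labels.append(tract_names[int(np.argmax(mean_p))])
    return labels

# Toy example with a hypothetical 2-tract atlas on a small grid.
rng = np.random.default_rng(0)
atlas = rng.random((2, 10, 10, 10))
streamlines = [rng.integers(0, 10, size=(50, 3)) for _ in range(5)]
print(assign_streamlines_to_tracts(streamlines, atlas, ["CST", "arcuate"]))
```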

Relevance: 10.00%

Abstract:

Fractional anisotropy (FA), a very widely used measure of fiber integrity based on diffusion tensor imaging (DTI), is a problematic concept as it is influenced by several quantities including the number of dominant fiber directions within each voxel, each fiber's anisotropy, and partial volume effects from neighboring gray matter. High-angular resolution diffusion imaging (HARDI) can resolve more complex diffusion geometries than standard DTI, including fibers crossing or mixing. The tensor distribution function (TDF) can be used to reconstruct multiple underlying fibers per voxel, representing the diffusion profile as a probabilistic mixture of tensors. Here we found that DTI-derived mean diffusivity (MD) correlates well with actual individual fiber MD, but DTI-derived FA correlates poorly with actual individual fiber anisotropy, and may be suboptimal when used to detect disease processes that affect myelination. Analysis of the TDFs revealed that almost 40% of voxels in the white matter had more than one dominant fiber present. To more accurately assess fiber integrity in these cases, we propose the differential diffusivity (DD), which measures the average anisotropy based on all dominant directions in each voxel.
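
As a sketch of how such a measure could be written, assuming the TDF yields K dominant tensors D_k with mixture weights w_k in a voxel, a weighted average of the per-fiber anisotropies is one plausible form (the paper's exact definition of DD may differ):

```latex
\mathrm{DD} \;=\; \sum_{k=1}^{K} w_k \,\mathrm{FA}(D_k),
\qquad \sum_{k=1}^{K} w_k = 1,
```

where FA(D_k) is the standard fractional anisotropy computed from the eigenvalues of the k-th component tensor rather than from the single fitted DTI tensor.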

Relevance: 10.00%

Abstract:

Recent models of language comprehension have assumed a tight coupling between the semantic representations of action words and cortical motor areas. We combined functional MRI with cytoarchitectonically defined probabilistic maps of left hemisphere primary and premotor cortices to analyse responses of functionally delineated execution- and observation-related regions during comprehension of action word meanings associated with specific effectors (e.g., punch, bite or stomp) and processing of items with various levels of lexical information (non-body-part-related meanings, nonwords, and visual character strings). The comprehension of effector-specific action word meanings did not elicit preferential activity corresponding to the somatotopic organisation of effectors in either primary or premotor cortex. However, generic action word meanings did show increased BOLD signal responses compared with all other classes of lexical stimuli in the pre-SMA. As expected, the majority of the BOLD responses elicited by the lexical stimuli were in association cortex adjacent to the motor areas. We contrast our results with those of previous studies reporting significant effects for only one or two effectors outside cytoarchitectonically defined motor regions and discuss the importance of controlling for potentially confounding lexical variables such as imageability. We conclude that there is no strong evidence for a somatotopic organisation of action word meaning representations and argue that the pre-SMA might have a role in maintaining abstract representations of action words as instructional cues.

Relevance: 10.00%

Abstract:

As connectivity analyses become more popular, claims are often made about how the brain's anatomical networks depend on age, sex, or disease. It is unclear how results depend on the tractography methods used to compute fiber networks. We applied 11 tractography methods to high angular resolution diffusion images of the brain (4-Tesla 105-gradient HARDI) from 536 healthy young adults. We parcellated the cortex into 70 regions, yielding 70×70 connectivity matrices encoding fiber density. We computed popular graph theory metrics, including network efficiency and characteristic path length. Both metrics were robust to the number of spherical harmonics used to model diffusion (4th-8th order). Age effects were detected only for networks computed with the probabilistic Hough transform method, which excludes smaller fibers. Sex and total brain volume affected networks measured with deterministic, tensor-based fiber tracking but not with the Hough method. Each tractography method includes different fibers, which affects inferences made about the reconstructed networks.
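
For illustration, a minimal sketch of how the two reported metrics might be computed from a 70×70 fiber-density matrix with networkx is given below; converting density to an edge length via its reciprocal is an assumption here, not necessarily the weighting used in the study.

```python
import numpy as np
import networkx as nx

def graph_metrics(density, eps=1e-12):
    """Characteristic path length and global efficiency of a weighted
    connectivity matrix (higher fiber density = shorter effective length)."""
    G = nx.from_numpy_array(density)
    # Convert fiber density to a distance: stronger connections are "closer".
    for u, v, d in G.edges(data=True):
        d["length"] = 1.0 / (d["weight"] + eps)
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="length"))
    path_lengths = [dist[i][j] for i in G for j in G if i != j and j in dist[i]]
    char_path_length = float(np.mean(path_lengths))
    efficiency = float(np.mean([1.0 / d for d in path_lengths]))
    return char_path_length, efficiency

# Toy 70x70 symmetric fiber-density matrix.
rng = np.random.default_rng(1)
W = rng.random((70, 70)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
print(graph_metrics(W))
```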

Relevance: 10.00%

Abstract:

Fractional anisotropy (FA), a very widely used measure of fiber integrity based on diffusion tensor imaging (DTI), is a problematic concept as it is influenced by several quantities including the number of dominant fiber directions within each voxel, each fiber's anisotropy, and partial volume effects from neighboring gray matter. With high-angular resolution diffusion imaging (HARDI) and the tensor distribution function (TDF), one can reconstruct multiple underlying fibers per voxel and their individual anisotropy measures by representing the diffusion profile as a probabilistic mixture of tensors. We found that FA, when compared with TDF-derived anisotropy measures, correlates poorly with individual fiber anisotropy, and may sub-optimally detect disease processes that affect myelination. By contrast, mean diffusivity (MD) as defined in standard DTI appears to be more accurate. Overall, we argue that novel measures derived from the TDF approach may yield more sensitive and accurate information than DTI-derived measures.
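
For reference, the standard single-tensor definitions being compared against are

```latex
\mathrm{MD} = \frac{\lambda_1+\lambda_2+\lambda_3}{3},
\qquad
\mathrm{FA} = \sqrt{\tfrac{3}{2}}\,
\frac{\sqrt{(\lambda_1-\mathrm{MD})^2+(\lambda_2-\mathrm{MD})^2+(\lambda_3-\mathrm{MD})^2}}
     {\sqrt{\lambda_1^2+\lambda_2^2+\lambda_3^2}},
```

where λ1 ≥ λ2 ≥ λ3 are the eigenvalues of the single fitted tensor; the TDF approach instead represents the voxel's diffusion profile as a probabilistic mixture of tensors and evaluates anisotropy per component tensor.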

Relevance: 10.00%

Abstract:

This paper presents an unmanned aircraft system (UAS) that uses a probabilistic model for autonomous front-on environmental sensing or photography of a target. The system is built from low-cost, readily available sensors and is intended to improve the capabilities of dynamic waypoint-based navigation for low-cost UAS operating in dynamic environments. The behavioural dynamics of target movement are modelled for the design of a Kalman filter and Markov model-based prediction algorithm. Geometrical concepts and the Haversine formula are applied to the maximum likelihood case to predict a future state of the target, thus delivering a new waypoint for autonomous navigation. Results from applying the system to aerial filming with a low-cost UAS are presented, achieving the desired goal of a maintained front-on perspective without significantly constraining the route or pace of target movement.
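
The Haversine distance mentioned above is a standard great-circle formula; a generic implementation is sketched below, together with an illustrative helper that projects a waypoint a given distance along a bearing (the projection helper is an assumption about how a predicted waypoint might be generated, not the paper's exact algorithm).

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def destination_point(lat, lon, bearing_deg, distance_m):
    """Point reached from (lat, lon) after travelling distance_m on bearing_deg."""
    d = distance_m / EARTH_RADIUS_M
    b = math.radians(bearing_deg)
    p1, l1 = math.radians(lat), math.radians(lon)
    p2 = math.asin(math.sin(p1) * math.cos(d) + math.cos(p1) * math.sin(d) * math.cos(b))
    l2 = l1 + math.atan2(math.sin(b) * math.sin(d) * math.cos(p1),
                         math.cos(d) - math.sin(p1) * math.sin(p2))
    return math.degrees(p2), math.degrees(l2)

# Example: distance between two points, then a waypoint 500 m ahead on bearing 045.
print(haversine_m(-27.47, 153.02, -27.48, 153.03))
print(destination_point(-27.47, 153.02, 45.0, 500.0))
```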

Relevance: 10.00%

Abstract:

This paper reports on the results of a project aimed at creating a research-informed, pedagogically reliable, technology-enhanced learning and teaching environment that would foster engagement with learning. A first-year mathematics for engineering unit offered at a large, metropolitan Australian university provides the context for this research. As part of the project, the unit was redesigned using a framework that employed flexible, modular, connected e-learning and teaching experiences. The researchers, interested in an ecological perspective on educational processes, grounded the redesign principles in probabilistic learning design (Kirschner et al., 2004). The effectiveness of the redesigned environment was assessed through the lens of the notion of affordance (Gibson, 1977, 1979; Greeno, 1994; Good, 2007). A qualitative analysis of the questionnaire distributed to students at the end of the teaching period provided insight into factors affecting the successful creation of an environment that encourages complex, multidimensional and multilayered interactions conducive to learning.

Relevance: 10.00%

Abstract:

A key component of robotic path planning is ensuring that one can reliably navigate a vehicle to a desired location. In addition, when the features of interest are dynamic and move with oceanic currents, vehicle speed plays an important role in the planning exercise to ensure that vehicles are in the right place at the right time. Aquatic robot design is moving towards utilizing the environment for propulsion rather than traditional motors and propellers. These new vehicles can realize significantly increased endurance; however, the mission planning problem becomes more difficult because vehicle velocity is not directly controllable. In this paper, we examine Gaussian process models applied to existing wave model data to predict the behavior, i.e., velocity, of a Wave Glider (WG) Autonomous Surface Vehicle. Using training data from an on-board sensor and forecasts from the WAVEWATCH III model, our probabilistic regression models provide an effective method for forecasting WG velocity.
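
A minimal sketch of the Gaussian process regression idea, using scikit-learn, is shown below; the feature choice (significant wave height and peak period), the synthetic data and the kernel are illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Hypothetical training data: rows = [significant wave height (m), peak period (s)],
# target = vehicle speed over ground (m/s) observed by an on-board sensor.
rng = np.random.default_rng(2)
X_train = np.column_stack([rng.uniform(0.5, 3.5, 200), rng.uniform(5, 15, 200)])
y_train = 0.4 * X_train[:, 0] + 0.02 * X_train[:, 1] + rng.normal(0, 0.05, 200)

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0]) + WhiteKernel(1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Forecast speed (with uncertainty) for wave conditions taken from a wave model.
X_forecast = np.array([[2.0, 9.0], [1.2, 12.0]])
mean, std = gp.predict(X_forecast, return_std=True)
print(mean, std)  # predictive mean and standard deviation per forecast point
```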

Relevance: 10.00%

Abstract:

Change point estimation is recognized as an essential tool for root cause analysis within quality control programs, as it enables clinical experts to search for potential causes of a change in hospital outcomes more effectively. In this paper, we consider estimation of the time when a linear trend disturbance has occurred in survival time following an in-control clinical intervention in the presence of variable patient mix. To model the process and change point, a linear trend in the survival time of patients who underwent cardiac surgery is formulated using hierarchical models in a Bayesian framework. The data are right censored since monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to surgery using a Weibull accelerated failure time regression model. We use Markov chain Monte Carlo to obtain posterior distributions of the change point parameters, including the location and slope of the trend, together with corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time cumulative sum (CUSUM) control charts for different trend scenarios. In comparison with the alternatives, a step change point model and the built-in CUSUM estimator, the proposed Bayesian estimator yields more accurate and precise estimates over linear trends. These advantages are enhanced when the probability quantification, flexibility and generalizability of the Bayesian change point detection model are also considered.
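
One way such a model can be written, assuming a Weibull accelerated failure time form with a linear trend beginning at an unknown change point τ, is sketched below; the parameterisation and priors are illustrative, not the paper's exact specification.

```latex
T_i \sim \mathrm{Weibull}(\alpha,\ \lambda_i),
\qquad
\log \lambda_i \;=\; \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_i
      \;+\; \delta\,(s_i-\tau)\,\mathbb{1}\{s_i>\tau\},
```

where x_i are the pre-surgical risk factors of patient i, s_i the time of surgery, τ the change point and δ the trend slope; right-censored patients contribute the survival function rather than the density to the likelihood, and MCMC sampling yields posterior distributions (and credible intervals) for τ and δ.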

Relevance: 10.00%

Abstract:

In the field of face recognition, sparse representation (SR) has received considerable attention during the past few years, with a focus on holistic descriptors in closed-set identification applications. The underlying assumption in such SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the face verification scenario, where the task is to determine if two faces (where one or both have not been seen before) belong to the same person. In this study, the authors propose an alternative approach to SR-based face verification, where SR encoding is performed on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which then form an overall face descriptor. Owing to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, they evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN) and an implicit probabilistic technique based on Gaussian mixture models. Thorough experiments on AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, on both the traditional closed-set identification task and the more applicable face verification task. The experiments also show that l1-minimisation-based encoding has a considerably higher computational cost when compared with SANN-based and probabilistic encoding, but leads to higher recognition rates.
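
A minimal sketch of the encode-then-average-pool idea, using l1 sparse coding from scikit-learn, is given below; the patch size, region grid and random dictionary are placeholders (in practice the dictionary would be learned from training faces), not the authors' configuration.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import sparse_encode

def region_descriptor(face, dictionary, patch_size=(8, 8), grid=(4, 4)):
    """Encode local patches sparsely and average-pool the codes per region.

    face       : 2D grayscale image (e.g. 64x64), values in [0, 1]
    dictionary : (n_atoms, patch_size[0]*patch_size[1]) dictionary matrix
    Returns a concatenation of grid[0]*grid[1] pooled sparse codes.
    """
    h, w = face.shape
    rh, rw = h // grid[0], w // grid[1]
    pooled = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            region = face[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            patches = extract_patches_2d(region, patch_size)
            patches = patches.reshape(-1, patch_size[0] * patch_size[1])
            codes = sparse_encode(patches, dictionary, algorithm="lasso_lars", alpha=0.1)
            pooled.append(np.abs(codes).mean(axis=0))  # averaging discards spatial layout
    return np.concatenate(pooled)

# Toy usage with a random face image and a random (unit-norm) dictionary.
rng = np.random.default_rng(3)
face = rng.random((64, 64))
D = rng.standard_normal((100, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
print(region_descriptor(face, D).shape)  # (16 regions * 100 atoms,) descriptor
```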

Relevance: 10.00%

Abstract:

This paper demonstrates procedures for probabilistic assessment of a pesticide fate and transport model, PCPF-1, to elucidate the modeling uncertainty using the Monte Carlo technique. Sensitivity analyses are performed to investigate the influence of herbicide characteristics and related soil properties on model outputs using four popular rice herbicides: mefenacet, pretilachlor, bensulfuron-methyl and imazosulfuron. Uncertainty quantification showed that the simulated concentrations in paddy water varied more than those in paddy soil. This tendency decreased as the simulation proceeded to a later period but remained important for herbicides having either high solubility or a high first-order dissolution rate. The sensitivity analysis indicated that the PCPF-1 parameters requiring careful determination are primarily those involved with herbicide adsorption (the organic carbon content, the bulk density and the volumetric saturated water content), secondarily the parameters related to herbicide mass distribution between paddy water and soil (first-order desorption and dissolution rates), and lastly those involving herbicide degradation.
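
A generic sketch of the Monte Carlo uncertainty and sensitivity procedure is shown below; pcpf1_concentration is a hypothetical stand-in for a PCPF-1 run, and the parameter distributions and ranges are illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr

def pcpf1_concentration(koc, foc, bulk_density, theta_sat, k_dissolution):
    """Hypothetical stand-in for a PCPF-1 run: returns a paddy-water
    concentration driven by sorption (retardation) and dissolution inputs."""
    kd = koc * foc                                   # sorption coefficient
    retardation = 1.0 + bulk_density * kd / theta_sat
    return 100.0 * k_dissolution / retardation       # illustrative output only

rng = np.random.default_rng(4)
n = 5000
# Sample uncertain inputs from illustrative distributions.
params = {
    "koc": rng.lognormal(np.log(200), 0.3, n),            # L/kg
    "foc": rng.uniform(0.01, 0.04, n),                    # organic carbon fraction
    "bulk_density": rng.normal(1.1, 0.1, n),              # g/cm^3
    "theta_sat": rng.uniform(0.45, 0.65, n),              # saturated water content
    "k_dissolution": rng.lognormal(np.log(0.5), 0.4, n),  # 1/day
}
outputs = pcpf1_concentration(**params)

# Rank (Spearman) correlation of each input with the output as a sensitivity index.
for name, values in params.items():
    rho, _ = spearmanr(values, outputs)
    print(f"{name:14s} rho = {rho:+.2f}")
print("output 5th-95th percentile:", np.percentile(outputs, [5, 95]))
```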

Relevance: 10.00%

Abstract:

A simulation model (PCPF-B) was developed based on the PCPF-1 model to predict the runoff of pesticides from paddy plots to a drainage canal in a paddy block. The block-scale model comprises three modules: (1) a module for pesticide application, (2) a module for pesticide behavior in paddy fields, and (3) a module for pesticide concentration in the drainage canal. The PCPF-B model was first evaluated against published data from a single plot and then applied to predict the concentration of bensulfuron-methyl in one paddy block in the Sakura river basin, Ibaraki, Japan, where a detailed field survey was conducted. The PCPF-B model simulated the behavior of bensulfuron-methyl in individual paddy plots well. It also reflected the runoff pattern of bensulfuron-methyl at the block outlet, although bensulfuron-methyl concentrations were overestimated due to uncertainty in the water balance estimation. Application of water management practices such as a water-holding period and seepage control also affected the performance of the model. A probabilistic approach may be necessary for a comprehensive risk assessment in large-scale paddy areas.

Relevance: 10.00%

Abstract:

BACKGROUND: Monitoring studies have revealed high concentrations of pesticides in the drainage canals of paddy fields. It is important to be able to predict these concentrations under different management scenarios as an assessment tool. A simulation model for predicting the pesticide concentration in a paddy block (PCPF-B) was evaluated and then used to assess the effect of water management practices for controlling pesticide runoff from paddy fields. RESULTS: The PCPF-B model achieved acceptable performance. The model was applied in a constrained probabilistic approach using the Monte Carlo technique to evaluate the best management practices for reducing runoff of pretilachlor into the canal. The probabilistic model predictions, using actual pesticide-use data and hydrological data from the canal, showed that the water holding period (WHP) and the excess water storage depth (EWSD) effectively reduced the loss and concentration of pretilachlor running off from paddy fields to the drainage canal. The WHP also reduced the timespan of pesticide exposure in the drainage canal. CONCLUSIONS: It is recommended that: (1) the WHP be applied for as long as possible, but for at least 7 days, depending on the pesticide and field conditions; and (2) an EWSD greater than 2 cm be maintained to store substantial rainfall in order to prevent paddy runoff, especially during the WHP.

Relevance: 10.00%

Abstract:

In this paper, a new high-precision, focused word sense disambiguation (WSD) approach is proposed, which not only attempts to identify the proper sense for a word but also provides a probabilistic evaluation of the identification confidence at the same time. A novel Instance Knowledge Network (IKN) is built to generate and maintain semantic knowledge at the word, type synonym set and instance levels. Related algorithms based on graph matching are developed to train the IKN with probabilistic knowledge and to use the IKN for probabilistic word sense disambiguation. Based on the Senseval-3 all-words task, we run extensive experiments to show the performance enhancements in different precision ranges and the rationality of probability-based automatic confidence evaluation of disambiguation. We combine our WSD algorithm with the five best WSD algorithms from the Senseval-3 all-words task. The results show that the combined algorithms all outperform the corresponding individual algorithms.
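
As a toy illustration of combining systems, confidence-weighted voting over per-word sense predictions is sketched below; the data format, sense labels and weighting scheme are assumptions for illustration and not the IKN-based algorithm itself.

```python
from collections import defaultdict

def combine_senses(predictions):
    """Combine sense predictions from several WSD systems for one target word.

    predictions : list of (sense_label, confidence) pairs, one per system,
                  with confidence in [0, 1].
    Returns the sense with the highest summed confidence and that score
    normalised to a probability over the proposed senses.
    """
    scores = defaultdict(float)
    for sense, confidence in predictions:
        scores[sense] += confidence
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())

# Toy example: three systems disambiguating "bank" in a sentence.
votes = [("bank_river", 0.80),
         ("bank_financial", 0.55),
         ("bank_river", 0.60)]
print(combine_senses(votes))  # ('bank_river', ~0.718)
```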