940 results for Automatic thresholding
Abstract:
This paper presents a solution to part of the problem of making robotic or semi-robotic digging equipment less dependent on human supervision. A method is described for identifying rocks of a certain size that may affect digging efficiency or require special handling. The process involves three main steps. First, using range and intensity data from a time-of-flight (TOF) camera, a feature descriptor is used to rank points and separate the regions surrounding high-scoring points. This allows a wide range of rocks to be recognized, because features can represent a whole rock or just part of one. Second, these points are filtered to extract only points thought to belong to the large object. Finally, a check is carried out to verify that the resultant point cloud actually represents a rock. Results are presented from field testing on piles of fragmented rock. Note to Practitioners—This paper presents an algorithm to identify large boulders in a pile of broken rock as a step towards an autonomous mining dig planner. In mining, piles of broken rock can contain large fragments that may need to be specially handled. To assess rock piles for excavation, we make use of a TOF camera that does not rely on external lighting to generate a point cloud of the rock pile. We then segment large boulders from its surface by using a novel feature descriptor and distinguish between real and false boulder candidates. Preliminary field experiments show promising results, with the algorithm performing nearly as well as human test subjects.
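The three-step pipeline described in the abstract can be sketched roughly as follows. The descriptor used here (local height variance within a radius) is only a stand-in for the paper's feature descriptor, which the abstract does not specify; all names and thresholds are illustrative assumptions.

```python
import numpy as np

def detect_boulder_candidates(points, radius=0.25, score_thresh=0.001):
    """Hypothetical sketch of the three-step pipeline on a TOF point cloud.

    points: (N, 3) array of x, y, z samples of the rock-pile surface.
    """
    scores = np.empty(len(points))
    for i, p in enumerate(points):
        # Step 1: rank each point with a local feature descriptor
        # (here a toy one: variance of heights in a neighbourhood).
        neighbours = points[np.linalg.norm(points - p, axis=1) < radius]
        scores[i] = neighbours[:, 2].var()
    # Step 2: keep only points thought to belong to a large object.
    candidates = points[scores > score_thresh]
    # Step 3 (verifying that candidates really are rocks, not false
    # positives) would follow here and is omitted from this sketch.
    return candidates
```

A flat surface yields no candidates, while a local bump raises the descriptor score of the points around it.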
Abstract:
Increasingly in power systems, there is a trend towards sharing reserves and integrating markets over wide areas in order to enable increased penetration of renewable sources in interconnected power systems. In this paper, a number of simple PI- and gain-based Model Predictive Control algorithms are proposed for Automatic Generation Control in AC areas connected to Multi-Terminal Direct Current grids. The paper discusses how this approach improves the sharing of secondary reserves and could assist in achieving EU energy targets for 2030 and beyond.
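As background to the PI variant mentioned above, a minimal Automatic Generation Control loop can be sketched as a PI controller acting on the area control error (ACE). The gains, time step, and signal shapes below are illustrative assumptions, not values from the paper.

```python
def pi_agc(ace_series, kp=0.5, ki=0.1, dt=1.0):
    """Minimal sketch of a PI-based AGC loop.

    ace_series: samples of the area control error (e.g. B*df + dP_tie, in MW).
    Returns the secondary-reserve setpoint adjustment at each step.
    """
    integral = 0.0
    setpoints = []
    for ace in ace_series:
        integral += ace * dt  # integral action removes steady-state error
        setpoints.append(-(kp * ace + ki * integral))
    return setpoints
```

With a persistent positive ACE, the integral term keeps growing the downward correction until the error is driven out, which is the behaviour secondary control is meant to provide.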
Abstract:
Automatic detection systems do not perform as well as human observers, even on simple detection tasks. A potential solution to this problem is training vision systems on appropriate regions of interest (ROIs), in contrast to training on predefined and arbitrarily selected regions. Here we focus on detecting pedestrians in static scenes. Our aim is to answer the following question: Can automatic vision systems for pedestrian detection be improved by training them on perceptually-defined ROIs?
Abstract:
This study examined the properties of ERP effects elicited by unattended (spatially uncued) objects using a short-lag repetition-priming paradigm. Same or different common objects were presented in a yoked prime-probe trial either as intact images or slightly scrambled (half-split) versions. Behaviourally, only objects in a familiar (intact) view showed priming. An enhanced negativity was observed at parietal and occipito-parietal electrode sites within the time window of the posterior N250 after the repetition of intact, but not split, images. An additional post-hoc N2pc analysis of the prime display indicated that this result could not be attributed to differences in salience between familiar intact and split views. These results demonstrate that spatially unattended objects undergo visual processing, but only if shown in familiar views, indicating a role for holistic processing of objects that is independent of attention.
Abstract:
The availability of CFD software that can easily be used and produce high efficiency on a wide range of parallel computers is extremely limited. The investment and expertise required to parallelise a code can be enormous. In addition, the cost of supercomputers forces high utilisation to justify their purchase, requiring a wide range of software. To break this impasse, tools are urgently required that assist in the parallelisation process, dramatically reducing the parallelisation time without degrading the performance of the resulting parallel software. In this paper we discuss enhancements to the Computer Aided Parallelisation Tools (CAPTools) to assist in the parallelisation of complex unstructured mesh-based computational mechanics codes.
Abstract:
BACKGROUND: Several analysis software packages for myocardial blood flow (MBF) quantification from cardiac PET studies exist, but they have not been compared using concordance analysis, which can characterize precision and bias separately. Reproducible measurements are needed for quantification to fully develop its clinical potential. METHODS: Fifty-one patients underwent dynamic Rb-82 PET at rest and during adenosine stress. Data were processed with PMOD and FlowQuant (Lortie model). MBF and myocardial flow reserve (MFR) polar maps were quantified and analyzed using a 17-segment model. Comparisons used Pearson's correlation ρ (measuring precision), Bland-Altman limits of agreement, and Lin's concordance correlation ρ_c = ρ·C_b (with C_b measuring systematic bias). RESULTS: Lin's concordance and Pearson's correlation values were very similar, suggesting no systematic bias between software packages, with excellent precision for MBF (ρ = 0.97, ρ_c = 0.96, C_b = 0.99) and good precision for MFR (ρ = 0.83, ρ_c = 0.76, C_b = 0.92). On a per-segment basis, no mean bias was observed on Bland-Altman plots, although PMOD provided slightly higher values than FlowQuant at higher MBF and MFR values (P < .0001). CONCLUSIONS: Concordance between software packages was excellent for MBF and MFR, despite higher values by PMOD at higher MBF values. Both software packages can be used interchangeably for quantification in daily practice of Rb-82 cardiac PET.
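The decomposition ρ_c = ρ·C_b used above is the standard form of Lin's concordance correlation coefficient. A minimal sketch of the computation follows; the data in the usage note are synthetic, not the Rb-82 PET measurements from the study.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation rho_c = rho * C_b.

    rho measures precision (Pearson's correlation); C_b is the bias
    correction factor penalizing location and scale shifts between x and y.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    rho = np.corrcoef(x, y)[0, 1]
    sx, sy = x.std(), y.std()
    v = sx / sy                                   # scale shift
    u = (x.mean() - y.mean()) / np.sqrt(sx * sy)  # location shift
    c_b = 2.0 / (v + 1.0 / v + u ** 2)            # bias correction factor
    return rho * c_b, rho, c_b
```

For identical inputs all three values are 1; adding a constant offset to one series leaves ρ at 1 but drives C_b (and hence ρ_c) below 1, which is exactly the precision/bias separation the study relies on.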
Abstract:
This paper presents a new dynamic load balancing technique for structured mesh computational mechanics codes in which the processor partition range limits of just one of the partitioned dimensions use non-coincidental limits, as opposed to using coincidental limits in all of the partitioned dimensions. The partition range limits are 'staggered', allowing greater flexibility in obtaining a balanced load distribution compared with changing the limits 'globally', as the load increase/decrease on one processor no longer restricts the load decrease/increase on a neighbouring processor. The automatic implementation of this 'staggered' load balancing strategy within an existing parallel code is presented in this paper, along with some preliminary results.
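The idea of staggered, non-coincidental range limits can be illustrated with a toy partitioner: within each row strip of a 2D load map, the column split positions are chosen independently to balance that strip's load, instead of forcing the same cut columns across every strip. The function name and the greedy split heuristic below are illustrative assumptions, not the paper's implementation.

```python
def staggered_limits(loads, nprocs):
    """Toy sketch of 'staggered' partition range limits.

    loads: 2D list of per-cell costs; one strip per row for simplicity.
    Returns, per strip, the column indices after which the strip is cut,
    so cuts may differ between strips (non-coincidental limits).
    """
    limits = []
    for row in loads:
        target = sum(row) / nprocs
        cuts, acc = [], 0.0
        for j, cost in enumerate(row):
            acc += cost
            # Greedily cut once the running load reaches the next target.
            if acc >= target * (len(cuts) + 1) and len(cuts) < nprocs - 1:
                cuts.append(j + 1)
        limits.append(cuts)  # staggered: each strip keeps its own cuts
    return limits
```

With uniform loads every strip gets the same cut (the coincidental case falls out naturally), while a skewed load map shifts the cut per strip, which is the extra freedom the staggered scheme exploits.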