40 results for automatic test and measurement
Abstract:
There are still major challenges in the automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for some years in the field of ontological engineering, with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
Abstract:
A large volume of visual content remains inaccessible until effective and efficient indexing and retrieval of such data is achieved. In this paper, we introduce the DREAM system, a knowledge-assisted, semantic-driven, context-aware visual information retrieval system applied in the film post-production domain. We focus mainly on the automatic labelling and topic-map-related aspects of the framework. The use of context-related collateral knowledge, represented by a novel probabilistic visual keyword co-occurrence matrix, was proven effective in the experiments conducted during system evaluation. The automatically generated semantic labels were fed into the Topic Map Engine, which automatically constructs ontological networks using Topic Maps technology, dramatically enhancing the indexing and retrieval performance of the system towards an even higher semantic level.
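As an illustration of the kind of co-occurrence-based context weighting described above, the following minimal Python sketch estimates a visual-keyword co-occurrence matrix from annotated clips and uses it to re-rank candidate labels; the function names, toy vocabulary, and blending weight are assumptions for illustration, not the DREAM system's own code or API.

```python
# Illustrative sketch only: build a visual-keyword co-occurrence matrix and
# use it to re-rank candidate labels by context. Names and the weighting
# scheme are assumptions, not the DREAM system's implementation.
from itertools import combinations

import numpy as np

def build_cooccurrence(clip_keywords, vocab):
    """Estimate P(keyword_j | keyword_i) from per-clip keyword sets."""
    idx = {kw: i for i, kw in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for keywords in clip_keywords:
        for a, b in combinations(set(keywords), 2):
            counts[idx[a], idx[b]] += 1
            counts[idx[b], idx[a]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

def rerank(detector_scores, context_keywords, cooc, vocab, alpha=0.5):
    """Blend raw detector confidences with contextual co-occurrence support."""
    idx = {kw: i for i, kw in enumerate(vocab)}
    scores = {}
    for kw, p in detector_scores.items():
        support = np.mean([cooc[idx[c], idx[kw]] for c in context_keywords]) if context_keywords else 0.0
        scores[kw] = alpha * p + (1 - alpha) * support
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage with a hypothetical special-effects vocabulary
vocab = ["explosion", "fire", "smoke", "water", "crowd"]
clips = [["explosion", "fire", "smoke"], ["fire", "smoke"], ["water", "crowd"]]
cooc = build_cooccurrence(clips, vocab)
print(rerank({"fire": 0.4, "water": 0.35}, ["smoke"], cooc, vocab))
```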
Abstract:
This paper describes a new method for reconstructing 3D surface points and a wireframe on the surface of a freeform object using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed surface points are frontier points and the wireframe is a network of contour generators, both reconstructed by pairing apparent contours in the 2D images. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces and contours should be sampled more densely where their curvature is high. The more complex the contour's shape, the greater the number of points required, and the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or curvature of the surface, regardless of the size of the surface or of the object. The unique pattern of the reconstructed points and contours may be used in 3D object recognition and measurement without computationally intensive full surface reconstruction. Results are obtained from both computer-generated and real objects.
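The clustering result above assumes viewing directions that are uniformly distributed over the object's viewing sphere. A minimal sketch of one common way to generate such directions (a Fibonacci spiral lattice) follows; this is an illustrative assumption, not the authors' camera configuration.

```python
# Illustrative sketch: generate N approximately uniform viewing directions on
# the unit sphere via a Fibonacci spiral lattice. This is one common choice,
# assumed here for illustration only.
import numpy as np

def uniform_viewing_directions(n=10):
    """Return an (n, 3) array of unit vectors spread roughly evenly over the sphere."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    k = np.arange(n)
    z = 1.0 - 2.0 * (k + 0.5) / n      # evenly spaced heights in (-1, 1)
    r = np.sqrt(1.0 - z**2)            # radius of each latitude circle
    theta = golden_angle * k           # spiral azimuth
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

dirs = uniform_viewing_directions(10)
print(dirs.shape, np.round(dirs[:3], 3))
```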
Abstract:
In this article, we provide an initial insight into the study of machine intelligence (MI) and what it means for a machine to be intelligent. We discuss how MI has progressed to date and consider future scenarios in as realistic and logical a way as possible. To do this, we unravel one of the major stumbling blocks to the study of MI: the field that has become widely known as "artificial intelligence".
Abstract:
Background: Insulin sensitivity (Si) is improved by weight loss and exercise, but the effects of the replacement of saturated fatty acids (SFAs) with monounsaturated fatty acids (MUFAs) or carbohydrates of high glycemic index (HGI) or low glycemic index (LGI) are uncertain. Objective: We conducted a dietary intervention trial to study these effects in participants at risk of developing metabolic syndrome. Design: We conducted a 5-center, parallel design, randomized controlled trial [RISCK (Reading, Imperial, Surrey, Cambridge, and Kings)]. The primary and secondary outcomes were changes in Si (measured by using an intravenous glucose tolerance test) and cardiovascular risk factors. Measurements were made after 4 wk of a high-SFA and HGI (HS/HGI) diet and after a 24-wk intervention with HS/HGI (reference), high-MUFA and HGI (HM/HGI), HM and LGI (HM/LGI), low-fat and HGI (LF/HGI), and LF and LGI (LF/LGI) diets. Results: We analyzed data for 548 of 720 participants who were randomly assigned to treatment. The median Si was 2.7 × 10⁻⁴ mL · μU⁻¹ · min⁻¹ (interquartile range: 2.0, 4.2 × 10⁻⁴ mL · μU⁻¹ · min⁻¹), and unadjusted mean percentage changes (95% CIs) after 24 wk of treatment (P = 0.13) were as follows: for the HS/HGI group, −4% (−12.7%, 5.3%); for the HM/HGI group, 2.1% (−5.8%, 10.7%); for the HM/LGI group, −3.5% (−10.6%, 4.3%); for the LF/HGI group, −8.6% (−15.4%, −1.1%); and for the LF/LGI group, 9.9% (2.4%, 18.0%). Total cholesterol (TC), LDL cholesterol, and apolipoprotein B concentrations decreased with SFA reduction. Decreases in TC and LDL-cholesterol concentrations were greater with LGI. Fat reduction lowered HDL cholesterol and apolipoprotein A1 and B concentrations. Conclusions: This study did not support the hypothesis that isoenergetic replacement of SFAs with MUFAs or carbohydrates has a favorable effect on Si. Lowering GI enhanced reductions in TC and LDL-cholesterol concentrations, with tentative evidence of improvements in Si in the LF-treatment group. This trial was registered at clinicaltrials.gov as ISRCTN29111298.
Abstract:
Purpose – Computed tomography (CT) for 3D reconstruction entails a huge number of coplanar fan-beam projections for each of a large number of 2D slice images, and excessive radiation intensities and dosages. For some applications its rate of throughput is also inadequate. A technique for overcoming these limitations is outlined.
Design/methodology/approach – A novel method to reconstruct 3D surface models of objects is presented, using, typically, ten 2D projective images. These images are generated by relative motion between the set of objects and a set of ten fan-beam X-ray sources and sensors, with their viewing axes suitably distributed in 2D angular space.
Findings – The method entails a radiation dosage several orders of magnitude lower than CT, and requires far less computational power. Experimental results are given to illustrate the capability of the technique.
Practical implications – The substantially lower cost of the method and, more particularly, its dramatically lower irradiation make it relevant to many applications precluded by current techniques.
Originality/value – The method can be used in many applications, such as aircraft hold-luggage screening and 3D industrial modelling and measurement, and it should also have important applications in medical diagnosis and surgery.
Abstract:
A total of 1527 serum samples from pigs, goats, sheep, cattle and dogs in Greece were examined by the microscopic agglutination test and 11.8 per cent of them had antibodies against one or more Leptospira serovars at titres of 1/100 or more. The predominant serovar affecting farm animal species was Bratislava, and Copenhageni was common among dogs and the second most important serovar when all animals were considered together. Another prevalent serovar was Australis, but antibodies to Pomona were detected only in goats and cattle.
Abstract:
We describe a one-port de-embedding technique suitable for the quasi-optical characterization of terahertz integrated components at frequencies beyond the operational range of most vector network analyzers. The technique is also suitable when precision terminations cannot be manufactured to sufficiently fine tolerances for a TRL de-embedding technique to be applied. It is based on vector reflection measurements of a series of easily realizable test pieces. A theoretical analysis is presented for the precision of the technique when implemented using a quasi-optical null-balanced bridge reflectometer. The analysis takes into account quantization effects in the linear and angular encoders associated with the balancing procedure, as well as source power and detector noise equivalent power. The precision in measuring waveguide characteristic impedance and attenuation using this de-embedding technique is further analyzed after taking into account changes in the coupled power due to axial, rotational, and lateral alignment errors between the device under test and the instrument's test port. This analysis is based on the propagation of errors, assuming imperfect coupling of two fundamental Gaussian beams. The required precision in repositioning the samples at the instrument's test port is discussed. Quasi-optical measurements using the de-embedding process for a WR-8 adjustable precision short at 125 GHz are presented. The de-embedding methodology may be extended to allow the determination of the S-parameters of arbitrary two-port junctions. The measurement technique proposed should prove most useful above 325 GHz, where there is a lack of measurement standards.
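As an illustration of how misalignment can be propagated into coupling loss between two fundamental Gaussian beams, the sketch below uses commonly quoted textbook coupling factors for identical beam waists under small lateral, angular, and axial offsets; these expressions and the example numbers are assumptions for illustration and are not taken from the paper's error analysis.

```python
# Illustrative sketch: approximate power coupling between two identical
# fundamental Gaussian beams (waist w0, wavelength lam) with small lateral,
# angular (tilt), and axial misalignments. Textbook approximations assumed
# for illustration; the paper's full error-propagation treatment is more detailed.
import numpy as np

def coupling_loss(w0, lam, lateral=0.0, tilt=0.0, axial=0.0):
    """Return the fractional power coupling and the corresponding loss in dB."""
    z_r = np.pi * w0**2 / lam                          # confocal (Rayleigh) distance
    k_lat = np.exp(-(lateral / w0) ** 2)               # lateral offset of the waists
    k_tilt = np.exp(-(np.pi * w0 * tilt / lam) ** 2)   # angular tilt (radians)
    k_ax = 1.0 / (1.0 + (axial / (2.0 * z_r)) ** 2)    # axial separation of the waists
    k = k_lat * k_tilt * k_ax
    return k, 10.0 * np.log10(k)

# Hypothetical example near 125 GHz (lam ~ 2.4 mm), 5 mm waist, modest misalignments
k, loss_db = coupling_loss(w0=5e-3, lam=2.4e-3, lateral=0.2e-3, tilt=2e-3, axial=1e-3)
print(f"coupling = {k:.4f}  ({loss_db:.3f} dB)")
```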
Abstract:
We use Hasbrouck's (1991) vector autoregressive model for prices and trades to empirically test and assess the role played by the waiting time between consecutive transactions in the process of price formation. We find that as the time duration between transactions decreases, the price impact of trades, the speed of price adjustment to trade‐related information, and the positive autocorrelation of signed trades all increase. This suggests that times when markets are most active are times when there is an increased presence of informed traders; we interpret such markets as having reduced liquidity.
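For illustration, the sketch below fits a plain bivariate vector autoregression of price changes and signed trades, in the spirit of Hasbrouck (1991), on simulated data using statsmodels; the duration-dependent extension studied in the paper (and the actual data) are not reproduced here, and the toy data-generating process is an assumption.

```python
# Illustrative sketch: a bivariate VAR of returns and signed trades fitted on
# simulated data. Not the paper's specification, which also conditions on the
# waiting time between transactions.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 5000
x = np.sign(rng.standard_normal(n))                                   # signed trade indicator (+1 buy, -1 sell)
r = 0.05 * x + 0.02 * np.roll(x, 1) + 0.01 * rng.standard_normal(n)   # toy returns with a trade-impact component

data = pd.DataFrame({"ret": r, "trade": x})
res = VAR(data).fit(maxlags=5, ic="aic")

# Cumulative response of returns to a unit signed-trade innovation:
# a simple proxy for the price impact of a trade.
irf = res.irf(10)
i_ret, i_trade = data.columns.get_loc("ret"), data.columns.get_loc("trade")
print(res.summary())
print("cumulative return response to a trade shock:", irf.cum_effects[-1, i_ret, i_trade])
```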
Abstract:
Background: Jargon aphasia is one of the most intractable forms of aphasia, with limited recommendations for ameliorating the associated naming difficulties and neologisms. The few naming therapy studies that exist in jargon aphasia have utilized either semantic or phonological approaches, but the results have been equivocal. Moreover, the effect of therapy on the characteristics of neologisms is less explored. Aims: This study investigates the effectiveness of a phonological naming therapy (i.e., phonological component analysis, PCA) on picture naming abilities and on quantitative and qualitative changes in neologisms for an individual with jargon aphasia (FF). Methods: FF showed evidence of jargon aphasia with severe naming difficulties and produced a very high proportion of neologisms. A single-subject multiple probe design across behaviors was employed to evaluate the effects of PCA therapy on naming accuracy for three sets of words. In therapy, a phonological components analysis chart was used to identify five phonological components (i.e., rhymes, first sound, first sound associate, final sound, number of syllables) for each target word. Generalization effects (changes in percent accuracy and in error pattern) were examined by comparing pre- and post-therapy responses on the Philadelphia Naming Test, and these responses were analyzed to explore the characteristics of the neologisms. The quantitative change in neologisms was measured by the change in the proportion of neologisms from pre- to post-therapy, and the qualitative change was indexed by the phonological overlap between target and neologism. Results: As a consequence of PCA therapy, FF showed a significant improvement in his ability to name the treated items. His performance in the maintenance and follow-up phases remained comparable to his performance during the therapy phases. Generalization to other naming tasks did not show a change in accuracy, but distinct differences in error pattern (an increase in the proportion of real-word responses and a decrease in the proportion of neologisms) were observed. Notably, the decrease in neologisms was accompanied by a corresponding trend towards increased phonological similarity between the neologisms and the targets. Conclusions: This study demonstrated the effectiveness of a phonological therapy for improving naming abilities and reducing the amount of neologisms in an individual with severe jargon aphasia. The positive outcome of this research is encouraging, as it provides evidence for effective therapies for jargon aphasia and also emphasizes that the quality and quantity of errors may provide a sensitive outcome measure for determining therapy effectiveness, particularly for client groups who are difficult to treat.