839 results for automated


Relevance:

20.00%

Publisher:

Abstract:

When designing a new product, the ability to retrieve drawings of existing components is important if costs are to be controlled by preventing unnecessary duplication of parts. Component coding and classification systems have been used successfully for these purposes but suffer from high operational costs and poor usability, arising directly from the manual nature of the coding process itself. A new version of an existing coding system (CAMAC) has been developed to reduce costs by automatically coding engineering drawings. Usability is improved by supporting searches based on a drawing or sketch of the desired component. Test results from a database of several thousand drawings are presented.
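As a purely illustrative sketch (not CAMAC's actual coding scheme), search by drawing or sketch can be pictured as reducing each drawing to a small code vector of hypothetical features (hole count, aspect ratio, symmetry score) and ranking stored components by distance to the query's vector:

```python
import math

# Hypothetical example: each drawing is coded as a feature vector
# (hole count, aspect ratio, symmetry score); names and values are invented.
DRAWINGS = {
    "BRACKET-01": (4.0, 1.5, 0.9),
    "SHAFT-07":   (0.0, 8.0, 1.0),
    "PLATE-12":   (6.0, 1.2, 0.8),
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, k=2):
    """Return the k stored drawings whose codes are closest to the query."""
    ranked = sorted(DRAWINGS, key=lambda name: distance(DRAWINGS[name], query))
    return ranked[:k]

# A sketch coded as (5 holes, squat aspect, fairly symmetric) finds
# the plate- and bracket-like components, not the shaft.
print(retrieve((5.0, 1.3, 0.85)))  # ['PLATE-12', 'BRACKET-01']
```

The point is only that automatic coding turns retrieval into a similarity search, which is what removes the manual coding cost described above.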


Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required. WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so addition of prepositions was also required. The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations, but cannot capture exceptions correctly; neither provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern associations between and attachment of suffixes, and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and overgeneration, minimised by rule reformulation and by restricting monosyllabic output. Rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Multiple rules applicable to an input suffix need their precedence established.
The resistance of prefixations to segmentation has been addressed by identifying linking vowel exceptions and irregular prefixes. The automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with morphological rules into a hybrid model, fed only with empirical data, collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than the wordnet component of the model because the lexicon provides the optimal clustering of word senses. Both links and analyser are portable to an alternative lexicon. The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. Failure of all experiments to outperform disambiguation by frequency reflects on WordNet sense distinctions.
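The combination of character-substitution rules with a lexical validity check can be sketched as follows (an illustrative toy, not the thesis's actual rule set or lexicon; the rules and words are invented for the example):

```python
# Illustrative sketch: a morphological rule is a character substitution on
# a suffix, applied only when the candidate base passes a lexical validity
# check, which limits overgeneration. Rule order encodes precedence.
LEXICON = {"create", "decide", "act", "relate", "creation", "decision"}

RULES = [
    ("ation", "ate"),   # creation -> create, relation -> relate
    ("ision", "ide"),   # decision -> decide
    ("ion",   ""),      # action -> act (plain segmentation as a fallback)
]

def derive_base(word):
    """Return (base, rule) for the first rule yielding a valid base."""
    for suffix, repl in RULES:
        if word.endswith(suffix):
            base = word[: -len(suffix)] + repl
            if base in LEXICON:   # lexical validity requirement
                return base, (suffix, repl)
    return None                   # no rule produces a valid base

print(derive_base("creation"))  # ('create', ('ation', 'ate'))
print(derive_base("decision"))  # ('decide', ('ision', 'ide'))
```

Note how "decision" would be mis-analysed by pure segmentation ("decis-" is not a word); the substitution rule recovers "decide", which is the segmentation fallacy the text describes.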


The G-protein coupled receptors (GPCRs) comprise at once one of the largest and one of the most multi-functional protein families known to modern-day molecular bioscience. From a drug discovery and pharmaceutical industry perspective, the GPCRs constitute one of the most commercially and economically important groups of proteins known. The GPCRs undertake numerous vital metabolic functions and interact with a hugely diverse range of small and large ligands. Many different methodologies have been developed to classify the GPCRs efficiently and accurately. These range from motif-based techniques to machine learning, as well as a variety of alignment-free techniques based on the physicochemical properties of sequences. We review here the available methodologies for the classification of GPCRs. Part of this work focuses on how we have tried to build the intrinsically hierarchical nature of sequence relations, implicit within the family, into an adaptive approach to classification. Importantly, we also allude to some of the key innate problems in developing an effective approach to classifying the GPCRs: the lack of sequence similarity between the six classes that comprise the GPCR family, and the low sequence similarity to other family members evinced by many newly revealed members of the family.
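To make the alignment-free idea concrete, here is a minimal sketch (invented sequences and class centroids, not any published GPCR classifier): sequences are mapped to amino-acid composition vectors and assigned to the nearest class centroid, with no alignment step at all:

```python
from collections import Counter

# Toy alignment-free classifier: a sequence is represented by its
# amino-acid composition and assigned to the nearest (toy) class centroid.
AMINO = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each of the 20 amino acids in the sequence."""
    counts = Counter(seq)
    return [counts.get(a, 0) / len(seq) for a in AMINO]

def nearest_class(seq, centroids):
    """Assign seq to the class with the closest composition centroid."""
    vec = composition(seq)
    def dist(c):
        return sum((x - y) ** 2 for x, y in zip(vec, centroids[c]))
    return min(centroids, key=dist)

# Invented centroids standing in for per-class training averages.
centroids = {
    "classA": composition("ACACACAHHH"),
    "classB": composition("WWYYWWYYLL"),
}
print(nearest_class("ACAACAHH", centroids))  # 'classA'
```

Real alignment-free methods use richer physicochemical encodings and proper statistical training, but the appeal is the same: no reliance on sequence similarity, which (as noted above) is weak both between the six GPCR classes and for newly discovered members.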


INTAMAP is a web processing service for the automatic interpolation of measured point data. Requirements were (i) using open standards for spatial data, such as those developed in the context of the Open Geospatial Consortium (OGC); (ii) using a suitable environment for statistical modelling and computation; and (iii) producing an open source solution. The system couples the 52°North web processing service, accepting data in the form of an Observations and Measurements (O&M) document, with a computing back-end realized in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a new markup language for encoding uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropies and extreme values. In the light of the INTAMAP experience, we discuss the lessons learnt.


The INTAMAP FP6 project has developed an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods and employing open, web-based data exchange protocols and visualisation tools. This paper gives an overview of the underlying problem and of the project, and discusses which problems have been solved and which open problems seem most relevant to address next. The interpolation problem that INTAMAP solves is the generic problem of spatial interpolation of environmental variables without user interaction, based on measurements of e.g. PM10, rainfall or gamma dose rate, at arbitrary locations or over a regular grid covering the area of interest. It deals with problems of varying spatial resolution of measurements, with the interpolation of averages over larger areas, and with providing information on the interpolation error to the end-user. In addition, monitoring network optimisation is addressed in a non-automatic context.
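For readers unfamiliar with the core task, spatial interpolation at arbitrary locations can be sketched with inverse distance weighting (IDW). This is only an illustration of the problem shape: INTAMAP itself selects and fits spatial statistical models automatically rather than using fixed IDW:

```python
# Minimal inverse-distance-weighting sketch of spatial interpolation.
# Not INTAMAP's algorithm; it only illustrates predicting a value at an
# arbitrary location from scattered point measurements.
def idw(points, target, power=2.0):
    """points: list of (x, y, value); returns the interpolated value at target."""
    tx, ty = target
    num = den = 0.0
    for x, y, v in points:
        d2 = (x - tx) ** 2 + (y - ty) ** 2
        if d2 == 0.0:
            return v                      # target coincides with a measurement
        w = 1.0 / d2 ** (power / 2.0)     # closer points weigh more
        num += w * v
        den += w
    return num / den

obs = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0)]
print(idw(obs, (0, 0)))        # exact hit on a measurement -> 10.0
print(idw(obs, (0.5, 0.5)))    # weighted combination of all three
```

Unlike this sketch, a statistical interpolator (e.g. kriging, as used in the R back-end) also returns the interpolation error distribution, which is exactly the information INTAMAP passes on to the end-user.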


Since the advent of High-Level Programming Languages (HLPLs) in the early 1950s, researchers have sought ways to automate the construction of HLPL compilers. To this end a variety of Translator Writing Tools (TWTs) have been developed over the last three decades. However, only a very few of these tools have gained significant commercial acceptance. This thesis re-examines traditional compiler construction techniques, along with a number of previous TWTs, and proposes a new, improved tool for automated compiler construction called the Aston Compiler Constructor (ACC). This new tool allows the specification of complete compilation systems using a high-level, compiler-oriented specification notation called the Compiler Construction Language (CCL). This specification notation is based on a modern variant of Backus-Naur Form (BNF) and an extended variant of Attribute Grammars (AGs). The implementation and processing of the CCL is discussed, along with an extensive CCL example. The CCL is shown to have extensive expressive power, to be convenient to use and highly readable, and thus to be a superior alternative to earlier TWTs and to traditional compiler construction techniques. The execution performance of CCL specifications is evaluated and shown to be acceptable. A number of related areas are also addressed, including tools for the rapid construction of individual compiler components, and tools for the construction of compilation systems for multiprocessor operating systems and hardware. This latter area is expected to become of particular interest in future years due to the anticipated increased use of multiprocessor architectures.
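The BNF-plus-attribute-grammar idea can be illustrated with a toy synthesized-attribute evaluator (this is a generic illustration, not CCL syntax, which is not reproduced here): each grammar rule carries an attribute equation computing the node's value from its children.

```python
import re

# Toy attribute grammar for arithmetic expressions. Each parse function
# corresponds to a BNF rule; its return value is the rule's synthesized
# attribute, computed from the attributes of its children.
TOKEN = re.compile(r"\d+|[+*()]")

def parse_expr(tokens):          # Expr -> Term ('+' Term)* ; Expr.val = sum
    val = parse_term(tokens)
    while tokens and tokens[0] == "+":
        tokens.pop(0)
        val += parse_term(tokens)
    return val

def parse_term(tokens):          # Term -> Factor ('*' Factor)* ; Term.val = product
    val = parse_factor(tokens)
    while tokens and tokens[0] == "*":
        tokens.pop(0)
        val *= parse_factor(tokens)
    return val

def parse_factor(tokens):        # Factor -> number | '(' Expr ')'
    tok = tokens.pop(0)
    if tok == "(":
        val = parse_expr(tokens)
        tokens.pop(0)            # consume ')'
        return val
    return int(tok)              # Factor.val = the number's lexical value

def evaluate(text):
    return parse_expr(TOKEN.findall(text))

print(evaluate("2+3*(4+1)"))  # 17
```

A TWT in the spirit of ACC generates this kind of attributed parser mechanically from the grammar specification, instead of it being hand-written as here.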


A series of N1-benzylideneheteroarylcarboxamidrazones was prepared in an automated fashion and tested against Mycobacterium fortuitum in a rapid screen for antimycobacterial activity. Many of the compounds from this series were also tested against Mycobacterium tuberculosis, and the usefulness of M. fortuitum as a rapid, initial screen for anti-tubercular activity was evaluated. Various deletions were made to the N1-benzylideneheteroarylcarboxamidrazone structure in order to establish the minimum structural requirements for activity. The N1-benzylideneheteroarylcarboxamidrazones were then subjected to molecular modelling studies, and their activities against M. fortuitum and M. tuberculosis were analysed using quantitative structure-activity relationship (QSAR) techniques in the computational package TSAR (Oxford Molecular Ltd.). A set of equations predictive of antimycobacterial activity was thereby obtained. The series of N1-benzylideneheteroarylcarboxamidrazones was also tested against a multidrug-resistant strain of Staphylococcus aureus (MRSA), followed by a panel of Gram-positive and Gram-negative bacteria if activity was observed for MRSA. A set of antimycobacterial N1-benzylideneheteroarylcarboxamidrazones was thereby discovered, the best of which had MICs against M. fortuitum in the range 4-8 μg ml-1 and displayed 94% inhibition of M. tuberculosis at a concentration of 6.25 μg ml-1. The antimycobacterial activity of these compounds appeared to be specific, since the same compounds were shown to be inactive against other classes of organisms. Compounds which were found to be sufficiently active in any screen were also tested for their toxicity against human mononuclear leucocytes. Polyethylene glycol (PEG) was used as a soluble polymeric support for the synthesis of some fatty acid derivatives containing an isoxazoline group, which may inhibit mycolic acid synthesis in mycobacteria.
Both the PEG-bound products and the cleaved, isolated products themselves were tested against M. fortuitum, and some low levels of antimycobacterial activity were observed; these compounds may serve as leads for further studies.
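The QSAR step above fits equations relating activity to molecular descriptors. A minimal one-descriptor version can be sketched as ordinary least squares (the descriptor values and activities below are invented for illustration; the thesis's actual equations were derived in TSAR from real data):

```python
# Illustrative one-descriptor QSAR fit by ordinary least squares:
# activity modelled as a linear function of a descriptor such as logP.
# All numbers are invented for the example.
def fit_line(xs, ys):
    """Least-squares slope a and intercept b for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

logP     = [1.0, 2.0, 3.0, 4.0]    # hypothetical descriptor values
activity = [0.9, 2.1, 2.9, 4.1]    # hypothetical activity measurements

a, b = fit_line(logP, activity)
predict = lambda x: a * x + b       # the fitted predictive equation
print(round(predict(2.5), 2))       # 2.5
```

Multi-descriptor QSAR generalises this to multiple regression over many physicochemical descriptors, which is how a predictive equation for untested analogues is obtained.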


The study evaluated sources of within- and between-subject variability in standard white-on-white (W-W) perimetry and short-wavelength automated perimetry (SWAP). The influence of staircase strategy on the fatigue effect in W-W perimetry was investigated for a 4 dB single-step, single-reversal strategy; a variable step size, single-reversal dynamic strategy; and the standard 4-2 dB double-reversal strategy. The fatigue effect increased as the duration of the examination increased and was greatest in the second eye for all strategies. The fatigue effect was lowest for the 4 dB strategy, which exhibited the shortest examination time, and greatest for the 4-2 dB strategy, which exhibited the longest examination time. Staircase efficiency was lowest for the 4 dB strategy and highest for the dynamic strategy, which thus offers a reduced examination time and low inter-subject variability. The normal between-subject variability of SWAP was determined for the standard 4-2 dB double-reversal strategy and the 3 dB single-reversal FASTPAC strategy and compared to that of W-W perimetry. The decrease in sensitivity with increase in age was greatest for SWAP. The between-subject variability of SWAP was greater than that of W-W perimetry. Correction for the influence of ocular media absorption reduced the between-subject variability of SWAP. The FASTPAC strategy yielded the lowest between-subject variability in SWAP, but the greatest between-subject variability in W-W perimetry. The greater between-subject variability of SWAP has profound implications for the delineation of visual field abnormality. The fatigue effect for the Full Threshold strategy in SWAP was evaluated with conventional opaque, and with translucent, occlusion of the fellow eye. SWAP exhibited a greater fatigue effect than W-W perimetry. Translucent occlusion reduced the between-subject variability of W-W perimetry but increased the between-subject variability of SWAP.
The elevation of sensitivity was greater with translucent occlusion, which has implications for the statistical analysis of W-W perimetry and SWAP. The influence of age-related cataract extraction and IOL implantation upon the visual field derived by W-W perimetry and SWAP was determined. Cataract yielded a general reduction in sensitivity which was preferentially greater in SWAP, even after the correction of SWAP for the attenuation of the stimulus by the ocular media. There was no correlation between either backward or forward light scatter and the magnitude of the attenuation of W-W or SWAP sensitivity. The post-operative mean deviation in SWAP was positive, which has ramifications for the statistical interpretation of SWAP. Short-wavelength-sensitive (SWS) pathway isolation was assessed as a function of stimulus eccentricity using the two-colour increment threshold method. At least 15 dB of SWS pathway isolation was achieved for 440 nm, 450 nm and 460 nm stimuli at a background luminance of 100 cd m-2. There was a slight decrease in SWS pathway isolation for all stimulus wavelengths with increasing eccentricity, which was not of clinical significance. Adopting a 450 nm stimulus may reduce between-subject variability in SWAP due to a reduction in ocular media absorption and macular pigment absorption.
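The staircase strategies compared above can be illustrated with a simplified, noise-free simulation of the 4-2 dB double-reversal idea: the level steps in 4 dB increments until the response first reverses, then in 2 dB increments until a second reversal brackets the threshold. Real perimeters must also handle response variability, so this is a sketch of the stepping logic only:

```python
# Simplified 4-2 dB double-reversal staircase with a deterministic,
# noise-free observer who sees any stimulus at or above threshold (dB).
def staircase_4_2(threshold, start=30):
    level, step, direction = start, 4, -1    # begin by dimming in 4 dB steps
    reversals, prev_seen, history = 0, None, []
    while reversals < 2:
        history.append(level)
        seen = level >= threshold
        if prev_seen is not None and seen != prev_seen:
            reversals += 1                   # response changed: a reversal
            step, direction = 2, -direction  # refine with 2 dB steps
        prev_seen = seen
        if reversals < 2:
            level += direction * step
    return level, history                    # last presented level brackets threshold

est, hist = staircase_4_2(threshold=19)
print(est, hist)  # 20 [30, 26, 22, 18, 20]
```

The longer presentation sequence of the double-reversal strategy, relative to a single-reversal one, is exactly why it showed the longest examination time and hence the greatest fatigue effect in the study.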


The Octopus Automated Perimeter was validated in a comparative study and found to offer many advantages in the assessment of the visual field. The visual evoked potential (VEP) was investigated in an extensive study using a variety of stimulus parameters to simulate hemianopia and central visual field defects. The scalp topography was recorded, and a technique to compute the source derivation of the scalp potential was developed. This enabled clarification of the expected scalp distribution to half-field stimulation using different electrode montages. The visual evoked potential following full-field stimulation was found to be asymmetrical around the midline, with a bias over the left occiput, particularly when the foveal polar projections of the occipital cortex were preferentially stimulated. The half-field response reflected the distribution asymmetry. Masking of the central 3° resulted in a response which was approximately symmetrical around the midline, but there was no evidence of the PNP-complex. A method for visual field quantification was developed based on the neural representation of visual space (Drasdo and Peaston 1982) in an attempt to relate visual field deprivation to the resultant visual evoked potentials. There was no form of simple, diffuse summation between the scalp potential and the cortical generators. It was, however, possible to quantify the degree of scalp potential attenuation for M-scaled full-field stimuli. The results obtained from patients exhibiting pre-chiasmal lesions suggested that the PNP-complex is not scotomatous in nature but confirmed that it is most likely to be related to specific diseases (Harding and Crews 1982). There was a strong correlation between the percentage information loss of the visual field and the diagnostic value of the visual evoked potential in patients exhibiting chiasmal lesions.


Automated perimetry has made viable a rapid threshold examination of the visual field and has reinforced the role of perimetry in the diagnostic procedure. The aim of this study was twofold: to isolate the influence of certain extraneous factors on the sensitivity gradient, since these might limit the early detection and accurate monitoring of visual field loss, and to investigate the efficacy of certain novel combinations of stimulus parameters in the detection of early visual field loss. The work was carried out with particular reference to glaucoma and to ocular hypertension. The effects of media opacities on the visual field were assessed by forward intraocular light scatter (n = 15) and were found to mask diffuse glaucomatous visual field loss and underestimate focal loss. Correction of the visual field indices for the effects of forward intraocular light scatter (n = 26) showed the focal losses to be, in reality, unaffected. Measurements of back scatter underestimated forward intraocular light scatter (n = 60) and the resultant depression of the visual field. Perimetric sensitivity improved with patient learning (n = 25) and exhibited eccentricity- and depth-dependency effects, whereby improvements in sensitivity were greatest for peripheral areas of the field and for those areas which initially demonstrated the lowest sensitivity. The effects of practice were retained over several months (n = 16). Perimetric sensitivity decreased during prolonged examination due to fatigue effects (n = 19); these demonstrated a similar eccentricity-dependency, being greatest for eccentricities beyond 30°. Mean sensitivities over the range of adaptation levels employed obeyed the Weber-Fechner law (n = 10) and, as would be expected, were independent of pupil size. No relationship was found between short-term fluctuation and adaptation level.
Detection of diffuse glaucomatous visual field loss was facilitated using a size III stimulus of duration 200 msec at an adaptation level of 31.5 asb, compared with a size III stimulus of duration 100 msec at an adaptation level of 4 asb (n = 20). In a pilot study (n = 10), temporal summation was found to be higher in glaucomatous patients than in age-matched controls, although the difference was not statistically significant.


INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data, such as those developed in the context of the Open Geospatial Consortium (OGC); (ii) using a suitable environment for statistical modelling and computation; and (iii) producing an integrated, open source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.