924 results for Accuracy and precision


Relevance: 90.00%

Abstract:

Background: Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been finished and many resulting protein sequences are still lacking detailed functional information. In order to address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently based on the sequences that are presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results: In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE with a bias towards the subcellular localization underrepresented in SwissProt was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion: No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE dataset and variable performance on individual subcellular localizations was observed. Proteins localized to the secretory pathway were the most difficult to predict, while nuclear and extracellular proteins were predicted with the highest sensitivity.
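
The evaluation above reports per-location sensitivity and specificity against a random-chance baseline. A minimal sketch of those calculations follows; the frequency-based chance baseline and the toy labels are assumptions, not data or definitions from the study.

```python
# Hedged sketch: per-class sensitivity and specificity for a localization
# predictor, plus a frequency-based random-chance baseline (an assumption;
# the paper's exact baseline definition is not reproduced here).
import numpy as np

def sensitivity_specificity(y_true, y_pred, label):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == label) & (y_true == label))
    fn = np.sum((y_pred != label) & (y_true == label))
    tn = np.sum((y_pred != label) & (y_true != label))
    fp = np.sum((y_pred == label) & (y_true != label))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

def chance_sensitivity(y_true, label):
    # A predictor guessing `label` in proportion to its frequency in the
    # evaluation set has this expected sensitivity.
    y_true = np.asarray(y_true)
    return float(np.mean(y_true == label))

if __name__ == "__main__":
    true = ["nucleus", "cytosol", "nucleus", "ER", "nucleus", "cytosol"]
    pred = ["nucleus", "nucleus", "nucleus", "ER", "cytosol", "cytosol"]
    print(sensitivity_specificity(true, pred, "nucleus"))
    print(chance_sensitivity(true, "nucleus"))
```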

Relevance: 90.00%

Abstract:

Government agencies responsible for riparian environments are assessing the combined utility of field survey and remote sensing for mapping and monitoring indicators of riparian zone condition. The objective of this work was to compare the Tropical Rapid Appraisal of Riparian Condition (TRARC) method to a satellite image based approach. TRARC was developed for rapid assessment of the environmental condition of savanna riparian zones. The comparison assessed mapping accuracy, representativeness of TRARC assessment, cost-effectiveness, and suitability for multi-temporal analysis. Two multi-spectral QuickBird images captured in 2004 and 2005 and coincident field data covering sections of the Daly River in the Northern Territory, Australia were used in this work. Both field and image data were processed to map riparian health indicators (RHIs) including percentage canopy cover, organic litter, canopy continuity, stream bank stability, and extent of tree clearing. Spectral vegetation indices, image segmentation and supervised classification were used to produce RHI maps. QuickBird image data were used to examine if the spatial distribution of TRARC transects provided a representative sample of ground based RHI measurements. Results showed that TRARC transects were required to cover at least 3% of the study area to obtain a representative sample. The mapping accuracy and costs of the image based approach were compared to those of the ground based TRARC approach. Results proved that TRARC was more cost-effective at smaller scales (1-100km), while image based assessment becomes more feasible at regional scales (100-1000km). Finally, the ability to use both the image and field based approaches for multi-temporal analysis of RHIs was assessed. Change detection analysis demonstrated that image data can provide detailed information on gradual change, while the TRARC method was only able to identify more gross scale changes. In conclusion, results from both methods were considered to complement each other if used at appropriate spatial scales.
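
The image-based approach derives riparian health indicators such as percentage canopy cover from spectral vegetation indices. A small sketch of one such index, NDVI, is given below; the band values and the 0.4 canopy threshold are illustrative assumptions, not values taken from the study.

```python
# Hedged sketch: NDVI from the red and near-infrared bands of a multispectral
# image such as QuickBird, and a canopy-cover estimate from an assumed
# threshold. Band values and the 0.4 threshold are illustrative only.
import numpy as np

def ndvi(red, nir, eps=1e-6):
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + eps)

def percent_canopy_cover(ndvi_image, threshold=0.4):
    # Fraction of pixels whose NDVI exceeds the (assumed) canopy threshold.
    return 100.0 * float(np.mean(ndvi_image > threshold))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    red = rng.integers(0, 256, size=(100, 100))
    nir = rng.integers(0, 256, size=(100, 100))
    print(round(percent_canopy_cover(ndvi(red, nir)), 1), "% canopy cover")
```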

Relevance: 90.00%

Abstract:

This thesis presents a survey of the wide range of modern dense matching algorithms, which are spreading across different application and research fields, with particular attention to the innovative “Semi-Global” matching techniques. The choice to develop a semi-global numerical code was justified by the need to gain insight into the variables and strategies that affect the algorithm's performance, with the primary objective of maximizing the method's accuracy and efficiency and the completeness of its results. The dissertation consists of the metrological characterization of the proprietary implementation of the semi-global matching algorithm, evaluating the influence of several matching variables and functions implemented in the process and comparing the accuracy and completeness of different results (digital surface models, disparity maps and 2D displacement fields) obtained using our code and other commercial and open-source matching programs in a wide variety of application fields.
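
For readers unfamiliar with the technique, the core of semi-global matching is a dynamic-programming cost aggregation run along several image paths. The sketch below shows that recursion along a single left-to-right path, following Hirschmüller's published formulation; the penalties and random cost volume are illustrative, and this is not the thesis's proprietary code.

```python
# Hedged sketch: the cost-aggregation recursion of semi-global matching along
# one left-to-right path (Hirschmueller, 2008). Penalties P1/P2 and the random
# cost volume are illustrative; this is not the thesis's proprietary code.
import numpy as np

def aggregate_left_to_right(cost, P1=10.0, P2=120.0):
    """cost: (rows, cols, disparities) matching-cost volume."""
    rows, cols, D = cost.shape
    L = np.zeros(cost.shape, dtype=float)
    L[:, 0, :] = cost[:, 0, :]
    for x in range(1, cols):
        prev = L[:, x - 1, :]                       # costs at previous pixel
        prev_min = prev.min(axis=1, keepdims=True)  # best previous disparity
        minus = np.roll(prev, 1, axis=1); minus[:, 0] = np.inf    # d - 1
        plus = np.roll(prev, -1, axis=1); plus[:, -1] = np.inf    # d + 1
        jump = np.broadcast_to(prev_min + P2, prev.shape)         # any d, penalty P2
        candidates = np.minimum.reduce([prev, minus + P1, plus + P1, jump])
        L[:, x, :] = cost[:, x, :] + candidates - prev_min
    return L

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cost = rng.random((4, 6, 8))                    # tiny synthetic cost volume
    disparity = aggregate_left_to_right(cost).argmin(axis=2)
    print(disparity)
```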

Relevance: 90.00%

Abstract:

In this paper we present a novel method for emulating a stochastic, or random output, computer model and show its application to a complex rabies model. The method is evaluated both in terms of accuracy and computational efficiency on synthetic data and the rabies model. We address the issue of experimental design and provide empirical evidence on the effectiveness of utilizing replicate model evaluations compared to a space-filling design. We employ the Mahalanobis error measure to validate the heteroscedastic Gaussian process based emulator predictions for both the mean and (co)variance. The emulator allows efficient screening to identify important model inputs and better understanding of the complex behaviour of the rabies model.
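
The Mahalanobis error measure mentioned above scores held-out model runs against the emulator's joint predictive mean and covariance; for a well-calibrated Gaussian emulator it should be close to the number of validation points. A minimal sketch with placeholder predictions (not the rabies emulator) follows.

```python
# Hedged sketch: the Mahalanobis error measure for validating an emulator's
# joint predictive mean and covariance on held-out runs. The predictions here
# are placeholders, not the heteroscedastic GP emulator of the rabies model.
import numpy as np

def mahalanobis_error(y_observed, pred_mean, pred_cov):
    """D^2 = (y - m)^T V^{-1} (y - m); for well-calibrated Gaussian
    predictions its expectation equals the number of validation points."""
    resid = np.asarray(y_observed, float) - np.asarray(pred_mean, float)
    return float(resid @ np.linalg.solve(pred_cov, resid))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 20
    pred_mean, pred_cov = np.zeros(n), 0.5 * np.eye(n)
    y = rng.multivariate_normal(pred_mean, pred_cov)   # simulated "model runs"
    print(mahalanobis_error(y, pred_mean, pred_cov), "expected ~", n)
```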

Relevance: 90.00%

Abstract:

The work presented in this thesis is divided into two distinct sections. In the first, the functional neuroimaging technique of Magnetoencephalography (MEG) is described and a new technique is introduced for accurate combination of MEG and MRI co-ordinate systems. In the second part of this thesis, MEG and the analysis technique of SAM are used to investigate responses of the visual system in the context of functional specialisation within the visual cortex. In chapter one, the sources of MEG signals are described, followed by a brief description of the necessary instrumentation for accurate MEG recordings. This chapter is concluded by introducing the forward and inverse problems of MEG, techniques to solve the inverse problem, and a comparison of MEG with other neuroimaging techniques. Chapter two provides an important contribution to the field of research with MEG. Firstly, it is described how MEG and MRI co-ordinate systems are combined for localisation and visualisation of activated brain regions. A previously used co-registration method is then described, and a new technique is introduced. In a series of experiments, it is demonstrated that using fixed fiducial points provides a considerable improvement in the accuracy and reliability of co-registration. Chapter three introduces the visual system, starting from the retina and ending with the higher visual areas. The functions of the magnocellular and the parvocellular pathways are described and it is shown how the parallel visual pathways remain segregated throughout the visual system. The structural and functional organisation of the visual cortex is then described. Chapter four presents strong evidence in favour of the link between conscious experience and synchronised brain activity. The spatiotemporal responses of the visual cortex are measured in response to specific gratings. It is shown that stimuli that induce visual discomfort and visual illusions share their physical properties with those that induce highly synchronised gamma frequency oscillations in the primary visual cortex. Finally, chapter five is concerned with the localisation of colour in the visual cortex. In this first-ever use of Synthetic Aperture Magnetometry to investigate colour processing in the visual cortex, it is shown that in response to isoluminant chromatic gratings, the highest magnitude of cortical activity arises from area V2.
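
The thesis's own co-registration procedure is not detailed in this abstract, but fiducial-based combination of MEG and MRI co-ordinate systems is commonly a least-squares rigid fit of matched points. Below is a hedged sketch of that standard Horn/Kabsch solution, with invented fiducial coordinates; it is not presented as the thesis's method.

```python
# Hedged sketch: least-squares rigid alignment (Horn/Kabsch) of matched
# fiducial points, one standard way to combine MEG and MRI coordinate frames.
# The fiducial coordinates and rotation below are invented; this is not the
# thesis's co-registration procedure.
import numpy as np

def rigid_fit(source, target):
    """Return rotation R and translation t minimising ||R @ s + t - target||."""
    src, tgt = np.asarray(source, float), np.asarray(target, float)
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, tgt_c - R @ src_c

if __name__ == "__main__":
    # Hypothetical fiducials (nasion, left/right pre-auricular) in metres.
    meg = np.array([[0.09, 0.0, 0.0], [0.0, 0.07, 0.0], [0.0, -0.07, 0.0]])
    true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    mri = meg @ true_R.T + np.array([0.01, 0.02, 0.03])
    R, t = rigid_fit(meg, mri)
    print(np.round(R, 3), np.round(t, 3))
```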

Relevance: 90.00%

Abstract:

Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations, there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks, the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature. It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of the memory and data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to sensitivity decrement.
Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
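
The distinction drawn above between perceptual efficiency and the criterial level of response is the standard signal-detection decomposition. A small worked example follows, using illustrative hit and false-alarm rates (not the thesis data) that mimic sensitivity d' staying roughly constant while the criterion becomes stricter over the work period.

```python
# Hedged sketch: standard signal-detection formulas separating sensitivity (d')
# from the response criterion (c). The hit/false-alarm rates are illustrative,
# not the thesis data; they mimic a stable d' with a stricter criterion.
from statistics import NormalDist

def d_prime_and_criterion(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

if __name__ == "__main__":
    print(d_prime_and_criterion(0.80, 0.20))   # early in the work period
    print(d_prime_and_criterion(0.69, 0.12))   # later: fewer hits AND false alarms
```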

Relevance: 90.00%

Abstract:

The state of the art in productivity measurement and analysis shows a gap between simple methods having little relevance in practice and sophisticated mathematical theory which is unwieldy for strategic and tactical planning purposes, particularly at company level. An extension is made in this thesis to the method of productivity measurement and analysis based on the concept of added value, appropriate to those companies in which the materials, bought-in parts and services change substantially and a number of plants and inter-related units are involved in providing components for final assembly. Reviews and comparisons of productivity measurement dealing with alternative indices and their problems have been made, and appropriate solutions put forward for productivity analysis in general and the added value method in particular. Based on this concept and method, three kinds of computerised models, two of them deterministic (called sensitivity analysis and deterministic appraisal) and the third stochastic (called risk simulation), have been developed to cope with the planning of productivity and productivity growth with reference to the changes in their component variables, ranging from a single value to a class interval of values of a productivity distribution. The models are designed to be flexible and can be adjusted according to the available computer capacity, expected accuracy and presentation of the output. The stochastic model is based on the assumption of statistical independence between individual variables and the existence of normality in their probability distributions. The component variables have been forecasted using polynomials of degree four. This model is tested by comparing its behaviour with that of a mathematical model using real historical data from British Leyland, and the results were satisfactory within acceptable levels of accuracy. Modifications to the model and its statistical treatment have been made as required. The results of applying these measurement and planning models to the British motor vehicle manufacturing companies are presented and discussed.
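
A minimal sketch of the kind of risk simulation described above: component variables drawn independently from normal distributions and combined into an added-value productivity ratio. The specific ratio (added value over employment costs) and all parameter values are illustrative assumptions, not the thesis model or the British Leyland data.

```python
# Hedged sketch: a Monte Carlo "risk simulation" of an added-value productivity
# ratio, with component variables independent and normally distributed as the
# thesis assumes. The ratio (added value / employment costs) and all parameter
# values are illustrative.
import numpy as np

def simulate_productivity(n_runs=100_000, seed=0):
    rng = np.random.default_rng(seed)
    sales = rng.normal(1_000.0, 60.0, n_runs)        # forecast mean and std
    bought_in = rng.normal(620.0, 45.0, n_runs)      # materials, parts, services
    employment_costs = rng.normal(250.0, 15.0, n_runs)
    added_value = sales - bought_in
    return added_value / employment_costs

if __name__ == "__main__":
    p = simulate_productivity()
    lo, hi = np.percentile(p, [5, 95])
    print(f"mean {p.mean():.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
```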

Relevance: 90.00%

Abstract:

This thesis describes the development of a simple and accurate method for estimating the quantity and composition of household waste arisings. The method is based on the fundamental tenet that waste arisings can be predicted from information on the demographic and socio-economic characteristics of households, thus reducing the need for the direct measurement of waste arisings to that necessary for the calibration of a prediction model. The aim of the research is twofold: firstly to investigate the generation of waste arisings at the household level, and secondly to devise a method for supplying information on waste arisings to meet the needs of waste collection and disposal authorities, policy makers at both national and European level and the manufacturers of plant and equipment for waste sorting and treatment. The research was carried out in three phases: theoretical, empirical and analytical. In the theoretical phase specific testable hypotheses were formulated concerning the process of waste generation at the household level. The empirical phase of the research involved an initial questionnaire survey of 1277 households to obtain data on their socio-economic characteristics, and the subsequent sorting of waste arisings from each of the households surveyed. The analytical phase was divided between (a) the testing of the research hypotheses by matching each household's waste against its demographic/socioeconomic characteristics (b) the development of statistical models capable of predicting the waste arisings from an individual household and (c) the development of a practical method for obtaining area-based estimates of waste arisings using readily available data from the national census. The latter method was found to represent a substantial improvement over conventional methods of waste estimation in terms of both accuracy and spatial flexibility. The research therefore represents a substantial contribution both to scientific knowledge of the process of household waste generation, and to the practical management of waste arisings.
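
A minimal sketch of the calibration-then-prediction idea described above: fit a linear model of household waste against socio-economic variables, then aggregate predictions over an area. The variable names, coefficients and data are illustrative assumptions, not the thesis's model.

```python
# Hedged sketch: calibrate a linear model of weekly household waste against
# socio-economic variables, then aggregate predictions over an area. The
# variables (household size, number of children) and data are illustrative.
import numpy as np

def fit_waste_model(X, y):
    """Ordinary least squares with an intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_area_total(coef, households):
    A = np.column_stack([np.ones(len(households)), households])
    return float((A @ coef).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.integers(1, 6, size=(200, 2)).astype(float)                # size, children
    y = 3.0 + 2.5 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(0, 0.5, 200)  # kg/week
    coef = fit_waste_model(X, y)
    print(np.round(coef, 2))
    print(round(predict_area_total(coef, X[:50]), 1), "kg/week for 50 households")
```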

Relevance: 90.00%

Abstract:

Tonal, textural and contextual properties are used in manual photointerpretation of remotely sensed data. This study has used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification procedures applied to remotely sensed data. Three different types of satellite data were investigated: LANDSAT MSS, TM and SIR-A imagery. Supervised classification procedures using tonal features only produced poor classification results. LANDSAT MSS produced classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data. The addition of SIR-A data produced increases in the classification accuracy. The increased classification accuracy of TM over the MSS is because of the better discrimination of geological materials afforded by the middle infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum distance to means or decision tree classifier; this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as a minimum distance to means classifier, is described. However, the results for this classifier were disappointing, being lower in most cases than those of the minimum distance or decision tree procedures. The classification results using only tonal features were felt to be unacceptably poor; therefore, textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. In the case of TM data, texture measures were found to increase the classification accuracy by up to 15%. However, in the case of the LANDSAT MSS data the use of texture measures did not provide any significant increase in the accuracy of classification. For TM data, it was found that second order texture, especially the SGLDM based measures, produced the highest classification accuracy. Contextual post-processing was found to increase classification accuracy and improve the visual appearance of classified output by removing isolated misclassified pixels which tend to clutter classified images. Simple contextual features, such as mode filters, were found to outperform more complex features such as gravitational filters or minimal area replacement methods. Generally, the larger the size of the filter, the greater the increase in accuracy. Production rules were used to build a knowledge based system which used tonal and textural features to identify sedimentary lithologies in each of the two test sites. The knowledge based system was able to identify six out of ten lithologies correctly.
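
A short sketch of the per-pixel Gaussian maximum likelihood classifier referred to above, trained from labelled pixel spectra. The band values, class names and training samples are invented for illustration and do not reflect the study's classification setup.

```python
# Hedged sketch: a per-pixel Gaussian maximum likelihood classifier trained
# from labelled pixel spectra. Band values, class names and samples are
# invented; this is not the study's classification setup.
import numpy as np

def train_ml_classifier(samples_by_class):
    """samples_by_class: {label: (n_pixels, n_bands) training spectra}."""
    stats = {}
    for label, X in samples_by_class.items():
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        stats[label] = (mean, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return stats

def classify(pixels, stats):
    """Assign each pixel to the class with maximum Gaussian log-likelihood."""
    labels, scores = list(stats), []
    for label in labels:
        mean, inv_cov, log_det = stats[label]
        d = pixels - mean
        mdist = np.einsum("ij,jk,ik->i", d, inv_cov, d)   # Mahalanobis distances
        scores.append(-0.5 * (log_det + mdist))
    return np.array(labels)[np.argmax(scores, axis=0)]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    train = {"sandstone": rng.normal([60, 80, 90], 5, (300, 3)),
             "shale": rng.normal([40, 55, 70], 5, (300, 3))}
    stats = train_ml_classifier(train)
    print(classify(rng.normal([60, 80, 90], 5, (10, 3)), stats))
```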

Relevance: 90.00%

Abstract:

The aim of this research is to improve the planning methodology of Dunlop via an analysis of their annual planning system. This was approached via an investigation of how the plans were developed; extensive interviews, which analysed divisional attitudes and approaches to planning; an analysis of forecast accuracy; and participation in the planning system itself. These investigations revealed certain deficiencies in the operation of the system. In particular, little evidence of formal planning could be found, and some divisions were reacting ex post to the market, rather than planning ex ante. The resulting plans tended to lack resilience and were generally unrealistic, partly because of imposed targets. Similarly, because the links between the elements of the system were often inefficient, previously agreed strategies were not always implemented. The analysis of forecast accuracy in the plans revealed divisions to be poor at most aspects of forecasting. Simple naive models often outperformed divisional forecasts, and much of the error was attributed to systematic, and therefore eliminable, factors. These analyses suggested the need for a new system, which is proposed in the form of Budgetary Planning. This system involves conceptual changes within the current planning framework. Such changes aim to revise tactical planning in order to meet the needs placed on it by, in particular, strategic planning. Budgetary Planning is an innovation in terms of the current planning literature. It is a total system of annual planning aimed at implementing and controlling the iteratively agreed strategies within the current environment. This is achieved by the generation of tactical alternatives, variable funding and concentration on forecast credibility, all of which aid both the realism and the resilience of planning.
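
One way to make the forecast-accuracy comparison above concrete is to score the divisional plan and a naive "no change" forecast with the same error measure. The series below are invented; the study's actual figures are not reproduced.

```python
# Hedged sketch: scoring a divisional plan against a naive "no change" forecast
# with mean absolute percentage error. All figures are invented; the study's
# actual forecasts are not reproduced.
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * float(np.mean(np.abs((actual - forecast) / actual)))

if __name__ == "__main__":
    actual = [102, 98, 105, 110, 107]   # realised sales
    plan = [115, 112, 118, 124, 121]    # divisional plan (optimistic targets)
    naive = [100, 102, 98, 105, 110]    # last period's actual carried forward
    print(f"plan MAPE  {mape(actual, plan):.1f}%")
    print(f"naive MAPE {mape(actual, naive):.1f}%")
```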

Relevance: 90.00%

Abstract:

Fifteen Miscanthus genotypes grown in five locations across Europe were analysed to investigate the influence of genetic and environmental factors on cell wall composition. Chemometric techniques combining near infrared reflectance spectroscopy and conventional chemical analyses were used to construct calibration models for determination of acid detergent lignin, acid detergent fibre, and neutral detergent fibre from sample spectra. The developed equations were shown to predict cell wall components with a good degree of accuracy, and significant genetic and environmental variation was identified. The influence of nitrogen and potassium fertiliser on the dry matter yield and cell wall composition of M. x giganteus was investigated. A detrimental effect on feedstock quality was observed to result from the application of these inputs, which caused an overall reduction in concentrations of cell wall components and increased accumulation of ash within the biomass. Pyrolysis-gas chromatography-mass spectrometry and thermo-gravimetric analysis indicate that genotypes other than the commercially cultivated M. x giganteus have potential for use in energy conversion processes and in bio-refining. The yields and quality parameters of the pyrolysis liquids produced from Miscanthus compared favourably with those produced from SRC willow, giving a more stable pyrolysis liquid with a higher lower heating value. Overall, genotype had a more significant effect on cell wall composition than environment. This indicates good potential for dissection of this trait by QTL analysis and also for plant breeding to produce new genotypes with improved feedstock characteristics for energy conversion.
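
An illustrative sketch of an NIRS calibration model of the kind described above, predicting a cell wall component from spectra plus wet-chemistry reference values. Partial least squares regression is a common chemometric choice but is an assumption here, as are the simulated spectra; the paper's actual calibration procedure is not specified in the abstract.

```python
# Hedged sketch: an NIRS calibration model predicting acid detergent lignin
# from spectra and wet-chemistry reference values. Partial least squares and
# the simulated spectra are assumptions, not the paper's procedure.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n_samples, n_wavelengths = 120, 200
spectra = rng.normal(size=(n_samples, n_wavelengths))          # absorbance values
adl = (8.0 + 1.5 * spectra[:, 20] - 0.8 * spectra[:, 75]
       + rng.normal(0, 0.3, n_samples))                        # % acid detergent lignin

model = PLSRegression(n_components=5)
predicted = cross_val_predict(model, spectra, adl, cv=10).ravel()

rmsecv = float(np.sqrt(np.mean((predicted - adl) ** 2)))
r2 = float(np.corrcoef(predicted, adl)[0, 1] ** 2)
print(f"RMSECV {rmsecv:.2f} %ADL, R^2 {r2:.2f}")
```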

Relevance: 90.00%

Abstract:

We address the question of how to communicate among distributed processes values such as real numbers, continuous functions and geometrical solids with arbitrary precision, yet efficiently. We extend the established concept of lazy communication using streams of approximants by introducing explicit queries. We formalise this approach using protocols of a query-answer nature. Such protocols enable processes to provide valid approximations with the accuracy and locality demanded by the receiving processes through queries. A lattice-theoretic denotational semantics of channel and process behaviour is developed. The query space is modelled as a continuous lattice in which the top element denotes the query demanding all the information, whereas other elements denote queries demanding partial and/or local information. Answers are interpreted as elements of lattices constructed over suitable domains of approximations to the exact objects. An unanswered query is treated as an error and denoted using the top element. The major novel characteristic of our semantic model is that it reflects the dependency of answers on queries. This enables the definition and analysis of an appropriate concept of convergence rate, by assigning an effort indicator to each query and a measure of information content to each answer. Thus we capture not only what function a process computes, but also how a process transforms the convergence rates from its inputs to its outputs. In future work these indicators can be used to capture further computational complexity measures. A robust prototype implementation of our model is available.
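
A toy rendering of the query-answer idea: a query names the accuracy demanded, the answer is a rational interval of at most that width containing the exact value, and composite processes forward tighter queries to their inputs. The interval representation and the sqrt(2) example are illustrative only, not the paper's formal protocol or semantics.

```python
# Hedged sketch: a toy query-answer protocol. A query states the accuracy
# demanded; an answer is a rational interval of at most that width containing
# the exact value. Composite processes forward tighter queries to their inputs.
# The representation and the sqrt(2) example are illustrative only.
from fractions import Fraction
from typing import Callable, Tuple

# An "exact real" is modelled as a map from a query (accuracy eps) to an answer.
Real = Callable[[Fraction], Tuple[Fraction, Fraction]]

def sqrt2() -> Real:
    def answer(eps: Fraction) -> Tuple[Fraction, Fraction]:
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > eps:              # refine until the query is satisfied
            mid = (lo + hi) / 2
            if mid * mid <= 2:
                lo = mid
            else:
                hi = mid
        return lo, hi
    return answer

def add(x: Real, y: Real) -> Real:
    def answer(eps: Fraction) -> Tuple[Fraction, Fraction]:
        xl, xh = x(eps / 2)               # demand more accuracy from each input
        yl, yh = y(eps / 2)
        return xl + yl, xh + yh
    return answer

if __name__ == "__main__":
    lo, hi = add(sqrt2(), sqrt2())(Fraction(1, 1000))   # query: accuracy 1/1000
    print(float(lo), float(hi))
```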

Relevance: 90.00%

Abstract:

The subject of this research is interaction and language use in an institutional context, the teacher training classroom. Trainer talk is an interactional accomplishment and the research question is: what structures of talk-in-interaction characterise trainer talk in this institutional setting? While there has been research into other kinds of classrooms and into other kinds of institutional talk, this study is the first on trainer discourse. The study takes a Conversation Analysis approach to studying institutional interaction and aims to identify the main structures of sequential organization that characterise teacher trainer talk, as well as the tasks and identities that are accomplished in it. The research identifies three main interactional contexts in which trainer talk is done: expository, exploratory and experiential. It describes the main characteristics of each and how they relate to each other. Expository sequences are the predominant interactional contexts for trainer talk, but the research findings show that these contexts are flexible and open to the embedding of the other two contexts. All three contexts contribute to the main institutional goal of teaching teachers how to teach. Trainer identity is related to the different sequential contexts. Three main forms of identity in interaction are evidenced in the interactional contexts: the trainer as trainer, the trainer as teacher and the trainer as colleague. Each of them plays an important role in teacher trainer pedagogy. The main features of trainer talk as a form of institutional talk are characterised by the following interactional properties: 1. Professional discourse is both the vehicle and object of instruction - the articulation of reflection on experience. 2. There is a reflexive relationship between pedagogy and interaction. 3. The professional discourse that is produced by trainees is not evaluated by trainers but, rather, reformulated to give it relevant precision in terms of accuracy and appropriacy.

Relevance: 90.00%

Abstract:

Developmental dyslexia is associated with deficits in the processing of basic auditory stimuli. Yet it is unclear how these sensory impairments might contribute to poor reading skills. This study better characterizes the relationship between phonological decoding skills, the lack of which is generally accepted to comprise the core deficit in reading disabilities, and auditory sensitivity to amplitude modulation (AM) and frequency modulation (FM). Thirty-eight adult subjects, 17 of whom had a history of developmental dyslexia, completed a battery of psychophysical measures of sensitivity to FM and AM at different modulation rates, along with a measure of pseudoword reading accuracy and standardized assessments of literacy and cognitive skills. The subjects with a history of dyslexia were significantly less sensitive than controls to 2-Hz FM and 20-Hz AM only. The absence of a significant group difference for 2-Hz AM shows that the dyslexics do not have a general deficit in detecting all slow modulations. Thresholds for detecting 2-Hz and 240-Hz FM and 20-Hz AM correlated significantly with pseudoword reading accuracy. After accounting for various cognitive skills, however, multiple regression analyses showed that detection thresholds for both 2-Hz FM and 20-Hz AM were significant and independent predictors of pseudoword reading ability in the entire sample. Thresholds for 2-Hz AM and 240-Hz FM did not explain significant additional variance in pseudoword reading skill. It is therefore possible that certain components of auditory processing of modulations are related to phonological decoding skills, whereas others are not.
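
A minimal sketch of the hierarchical regression logic described above: how much reading-score variance an auditory threshold explains over and above a cognitive covariate. All data are simulated; none of the study's measurements are reproduced.

```python
# Hedged sketch: hierarchical regression asking how much pseudoword-reading
# variance a 2-Hz FM threshold explains over and above a cognitive covariate.
# All data are simulated; the study's measurements are not reproduced.
import numpy as np

def r_squared(X, y):
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n = 38
    iq = rng.normal(100, 15, n)                  # cognitive covariate
    fm2 = rng.normal(0, 1, n)                    # 2-Hz FM detection threshold
    reading = 0.02 * iq - 0.5 * fm2 + rng.normal(0, 0.5, n)
    base = r_squared(iq[:, None], reading)
    full = r_squared(np.column_stack([iq, fm2]), reading)
    print(f"R^2 covariate only {base:.2f}; adding 2-Hz FM threshold {full:.2f}")
```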

Relevance: 90.00%

Abstract:

This paper proposes a conceptual model for a firm's capability to calibrate supply chain knowledge (CCK). Knowledge calibration is achieved when there is a match between managers' ex ante confidence in the accuracy of held knowledge and the ex post accuracy of that knowledge. Knowledge calibration is closely related to knowledge utility or willingness to use the available ex ante knowledge: a manager uses the ex ante knowledge if he/she is confident in the accuracy of that knowledge, and does not use it or uses it with reservation, when the confidence is low. Thus, knowledge calibration attained through the firm's CCK enables managers to deal with incomplete and uncertain information and enhances quality of decisions. In the supply chain context, although demand- and supply-related knowledge is available, supply chain inefficiencies, such as the bullwhip effect, remain. These issues may be caused not by a lack of knowledge but by a firm's lack of capability to sense potential disagreement between knowledge accuracy and confidence. Therefore, this paper contributes to the understanding of supply chain knowledge utilization by defining CCK and identifying a set of antecedents and consequences of CCK in the supply chain context.
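
The paper is conceptual, but the notion of calibration can be illustrated numerically as the gap between stated ex ante confidence and observed ex post accuracy. The scoring rule and data below are illustrative assumptions, not part of the paper.

```python
# Hedged sketch: one simple way to quantify knowledge calibration as the gap
# between stated ex ante confidence and observed ex post accuracy, weighted by
# bucket size. The scoring rule and data are illustrative, not from the paper.
import numpy as np

def calibration_gap(confidence, correct):
    """0 means perfectly calibrated; larger values mean confidence and
    realised accuracy diverge."""
    confidence = np.asarray(confidence, float)
    correct = np.asarray(correct, float)
    gaps, weights = [], []
    for level in np.unique(confidence):
        mask = confidence == level
        gaps.append(abs(level - correct[mask].mean()))
        weights.append(mask.sum())
    return float(np.average(gaps, weights=weights))

if __name__ == "__main__":
    conf = [0.9, 0.9, 0.9, 0.9, 0.6, 0.6, 0.6, 0.6]   # ex ante confidence
    hit = [1, 1, 0, 1, 1, 0, 1, 0]                     # ex post accuracy (1 = right)
    print(round(calibration_gap(conf, hit), 3))
```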