837 results for "semi binary based feature detector descriptor"
Abstract:
An elastomeric, supramolecular healable polymer blend, comprising a chain-folding polyimide and a telechelic polyurethane with pyrenyl endgroups, is compatibilised by aromatic π–π stacking between the π-electron-deficient diimide groups and the π-electron-rich pyrenyl units. This inter-polymer interaction is key to forming a tough, healable, elastomeric material. Variable temperature FTIR analysis of the bulk material also conclusively demonstrates the presence of hydrogen bonding, which complements the π–π stacking interactions. Variable temperature SAXS analysis shows that the healable polymeric blend has a nanophase-separated morphology, and that the X-ray contrast between the two types of domain increases with increasing temperature, a feature that is repeatable over several heating and cooling cycles. A fractured sample of this material reproducibly regains more than 95% of the tensile modulus, 91% of the elongation to break, and 77% of the modulus of toughness of the pristine material.
Abstract:
A case study of atmospheric aerosol measurements exploring the impact of the vertical distribution of aerosol chemical composition upon the radiative budget in North-Western Europe is presented. Sub-micron aerosol chemical composition was measured by an Aerodyne Aerosol Mass Spectrometer (AMS) on both an airborne platform and a ground-based site at Cabauw in the Netherlands. The examined period in May 2008 was characterised by enhanced pollution loadings in North-Western Europe and was dominated by ammonium nitrate and Organic Matter (OM). Both ammonium nitrate and OM were observed to increase with altitude in the atmospheric boundary layer. This is primarily attributed to partitioning of semi-volatile gas phase species to the particle phase at reduced temperature and enhanced relative humidity. Increased ammonium nitrate concentrations in particular were found to strongly increase the ambient scattering potential of the aerosol burden, which was a consequence of the large amount of associated water as well as the enhanced mass. During particularly polluted conditions, increases in aerosol optical depth of 50–100% were estimated to occur due to the observed increase in secondary aerosol mass and associated water uptake. Furthermore, the single scattering albedo was also shown to increase with height in the boundary layer. These enhancements combined to increase the negative direct aerosol radiative forcing by close to a factor of two at the median percentile level. Such increases have major ramifications for regional climate predictions as semi-volatile components are often not included in aerosol models. The results presented here provide an ideal opportunity to test regional and global representations of both the aerosol vertical distribution and subsequent impacts in North-Western Europe. 
North-Western Europe can be viewed as an analogue for the possible future air quality over other polluted regions of the Northern Hemisphere, where substantial reductions in sulphur dioxide emissions have yet to occur. Anticipated reductions in sulphur dioxide in polluted regions will result in an increase in the availability of ammonia to form ammonium nitrate as opposed to ammonium sulphate. This will be most important where intensive agricultural practices occur. Our observations over North-Western Europe, a region where sulphur dioxide emissions have already been reduced, indicate that failure to include the semi-volatile behaviour of ammonium nitrate will result in significant errors in predicted aerosol direct radiative forcing. Such errors will be particularly significant on regional scales.
Abstract:
In this paper, Bayesian decision procedures are developed for dose-escalation studies based on bivariate observations of undesirable events and signs of therapeutic benefit. The methods generalize earlier approaches taking into account only the undesirable outcomes. Logistic regression models are used to model the two responses, which are both assumed to take a binary form. A prior distribution for the unknown model parameters is suggested and an optional safety constraint can be included. Gain functions to be maximized are formulated in terms of accurate estimation of the limits of a therapeutic window or optimal treatment of the next cohort of subjects, although the approach could be applied to achieve any of a wide variety of objectives. The designs introduced are illustrated through simulation and retrospective implementation to a completed dose-escalation study. Copyright © 2006 John Wiley & Sons, Ltd.
Abstract:
This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald and score tests. All three tests can be implemented straightforwardly in standard statistical software packages, as can the corresponding confidence intervals. Simulations indicate that the three alternatives are similar in terms of the Type I error, with values close to the nominal level. However, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance. Copyright (C) 2007 John Wiley & Sons, Ltd.
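The Wald version of such a test can be sketched in a few lines; the counts below are illustrative, and this is a generic construction of the statistic (the paper's score and likelihood-ratio variants are not reproduced here):

```python
import math

def wald_log_odds_ratio(x1, n1, x2, n2, margin=0.0):
    """Wald statistic for the log-odds ratio from two independent binomial
    samples (x successes out of n); margin = 0 gives the superiority test,
    a negative margin a non-inferiority test."""
    # point estimate of the log-odds ratio from the 2x2 table
    lor = math.log(x1 / (n1 - x1)) - math.log(x2 / (n2 - x2))
    # asymptotic standard error: sum of reciprocal cell counts
    se = math.sqrt(1.0 / x1 + 1.0 / (n1 - x1) + 1.0 / x2 + 1.0 / (n2 - x2))
    return lor, se, (lor - margin) / se
```

With 40/100 versus 25/100 successes, the estimated log-odds ratio is log 2 and the statistic exceeds the usual 1.96 critical value.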
Abstract:
In this paper, Bayesian decision procedures are developed for dose-escalation studies based on binary measures of undesirable events and continuous measures of therapeutic benefit. The methods generalize earlier approaches where undesirable events and therapeutic benefit are both binary. A logistic regression model is used to model the binary responses, while a linear regression model is used to model the continuous responses. Prior distributions for the unknown model parameters are suggested. A gain function is discussed and an optional safety constraint is included. Copyright (C) 2006 John Wiley & Sons, Ltd.
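As a loose illustration of the Bayesian ingredients only (not the paper's bivariate procedure), a one-parameter logistic toxicity model can be updated over a discrete prior grid; the model form, grid, and uniform prior here are all assumptions:

```python
import math

def posterior_mean(doses, outcomes):
    """Posterior mean of the parameter a in a one-parameter logistic
    toxicity model, p(tox | dose, a) = 1/(1 + exp(-(a + dose))), under a
    uniform grid prior on a (a CRM-like sketch; grid and prior assumed)."""
    grid = [i / 10.0 - 3.0 for i in range(61)]   # a in [-3.0, 3.0]
    def p_tox(a, d):
        return 1.0 / (1.0 + math.exp(-(a + d)))
    post = []
    for a in grid:
        like = 1.0                               # uniform prior absorbed by normalisation
        for d, tox in zip(doses, outcomes):
            p = p_tox(a, d)
            like *= p if tox else (1.0 - p)
        post.append(like)
    total = sum(post)
    return sum(a * p / total for a, p in zip(grid, post))
```

With no data the posterior mean equals the prior mean (zero here); observing toxicities shifts the posterior toward larger values of a.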
Abstract:
Objectives: To assess the short- and long-term reproducibility of a short food group questionnaire, and to compare its performance for estimating nutrient intakes in comparison with a 7-day diet diary. Design: Participants for the reproducibility study completed the food group questionnaire at two time points, up to 2 years apart. Participants for the performance study completed both the food group questionnaire and a 7-day diet diary a few months apart. Reproducibility was assessed by kappa statistics and percentage change between the two questionnaires; performance was assessed by kappa statistics, rank correlations and percentages of participants classified into the same and opposite thirds of intake. Setting: A random sample of participants in the Million Women Study, a population-based prospective study in the UK. Subjects: In total, 12 221 women aged 50-64 years. Results: In the reproducibility study, 75% of the food group items showed at least moderate agreement for all four time-point comparisons. Items showing fair agreement or worse tended to be those where few respondents reported eating them more than once a week, those consumed in small amounts and those relating to types of fat consumed. Compared with the diet diary, the food group questionnaire showed consistently reasonable performance for the nutrients carbohydrate, saturated fat, cholesterol, total sugars, alcohol, fibre, calcium, riboflavin, folate and vitamin C. Conclusions: The short food group questionnaire used in this study has been shown to be reproducible over time and to perform reasonably well for the assessment of a number of dietary nutrients.
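The kappa statistic used in these agreement assessments can be computed as follows (generic Cohen's kappa; the category labels are illustrative):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two categorical
    ratings of the same items."""
    n = len(r1)
    # observed proportion of exact agreement
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # agreement expected by chance from the marginal category frequencies
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2.get(k, 0) for k in c1) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

Identical ratings give kappa = 1; agreement at exactly the chance-expected rate gives kappa = 0.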
Abstract:
The terpenoid chiral selectors dehydroabietic acid, 12,14-dinitrodehydroabietic acid and friedelin have been covalently linked to silica gel, yielding three chiral stationary phases, CSP 1, CSP 2 and CSP 3, respectively. The enantiodiscriminating capability of each of these phases was evaluated by HPLC with four families of chiral aromatic compounds comprising alcohols, amines, phenylalanine and tryptophan amino acid derivatives, and beta-lactams. The CSP 3 phase, containing a selector with a large friedelane backbone, is particularly suitable for resolving free alcohols and their derivatives bearing fluorine substituents, while CSP 2, with a dehydroabietic architecture, is the only phase that efficiently discriminates 1,1'-binaphthol atropisomers. CSP 3 also gives efficient resolution of the free amines. All three phases resolve well the racemates of N-trifluoroacetyl and N-3,5-dinitrobenzoyl phenylalanine amino acid ester derivatives. Good enantioseparation of beta-lactams and N-benzoyl tryptophan amino acid derivatives was achieved on CSP 1. In order to understand the structural factors that govern the chiral molecular recognition ability of these phases, molecular dynamics simulations were carried out in the gas phase with binary diastereomeric complexes formed by the selectors of CSP 1 and CSP 2 and several amino acid derivatives. Decomposition of the molecular mechanics energies shows that van der Waals interactions dominate the formation of the diastereomeric transient complexes, while the electrostatic binding interactions are primarily responsible for the enantioselective binding of the (R)- and (S)-analytes. Analysis of the hydrogen bonds shows that the electrostatic interactions are mainly associated with the formation of enantioselective N–H···O=C hydrogen bonds between the amide binding sites of the selectors and the carbonyl groups of the analytes.
The role of mobile phase polarity, a mixture of n-hexane and propan-2-ol in different ratios, was also evaluated through molecular dynamics simulations in explicit solvent. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of multimedia content data for very large multimedia content corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually do not provide any knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the project DREAM, which addresses such challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
Abstract:
Garment information tracking is required for clean room garment management. In this paper, we present a camera-based robust system that implements Optical Character Recognition (OCR) techniques to fulfill garment label recognition. In the system, a camera is used for image capturing; an adaptive thresholding algorithm is employed to generate binary images; Connected Component Labelling (CCL) is then adopted for object detection in the binary image as part of finding the ROI (Region of Interest); Artificial Neural Networks (ANNs) with the BP (Back Propagation) learning algorithm are used for digit recognition; and finally the system is verified against a system database. The system has been tested. The results show that it is capable of coping with variance in lighting, digit twisting, background complexity, and font orientations. The system performance with respect to the digit recognition rate has met the design requirement. It achieved real-time and error-free garment information tracking during testing.
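The thresholding and labelling stages of such a pipeline can be sketched as follows (a mean-based adaptive threshold and 4-connected BFS labelling; the window size and toy image are illustrative, not the paper's implementation):

```python
from collections import deque

def adaptive_threshold(img, window=3, c=0):
    """Mean-based adaptive thresholding: a pixel is foreground (1) if it
    exceeds the mean of its local window minus a constant c."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    r = window // 2
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx] for yy in range(max(0, y - r), min(h, y + r + 1))
                                for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = 1 if img[y][x] > sum(vals) / len(vals) - c else 0
    return out

def connected_components(binary):
    """4-connected component labelling via BFS; returns the label map and
    the number of components found."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                count += 1
                q = deque([(y, x)])
                labels[y][x] = count
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count
```

On a toy image with one bright pixel, only that pixel survives thresholding; labelling then counts each isolated blob once.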
Abstract:
In this paper, we present a feature selection approach based on Gabor wavelet features and boosting for face verification. By convolution with a group of Gabor wavelets, the original images are transformed into vectors of Gabor wavelet features. Then, for each individual, a small set of significant features is selected by the boosting algorithm from the large set of Gabor wavelet features. The experimental results show that the approach successfully selects meaningful and explainable features for face verification. The experiments also suggest that common characteristics such as eyes, noses and mouths may not be as important as unique characteristics when the training set is small. When the training set is large, the unique characteristics and the common characteristics are both important.
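The boosting-based selection step can be sketched with decision stumps on individual features (a generic AdaBoost-style loop, not the authors' exact implementation; the toy data are illustrative):

```python
import math

def stump_error(xs, ys, w, thr, pol):
    """Weighted error of a stump that predicts pol if x > thr, else -pol."""
    return sum(wi for xi, yi, wi in zip(xs, ys, w)
               if (pol if xi > thr else -pol) != yi)

def boost_select(X, y, rounds):
    """AdaBoost-style greedy feature selection: each round picks the feature
    whose best decision stump has the lowest weighted error, then reweights
    the samples. X: feature vectors, y: labels in {-1, +1}."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n
    chosen = []
    for _ in range(rounds):
        best = (None, None, None, 1.0)           # (feature, thr, pol, err)
        for j in range(d):
            xs = [row[j] for row in X]
            for thr in xs:
                for pol in (1, -1):
                    e = stump_error(xs, y, w, thr, pol)
                    if e < best[3]:
                        best = (j, thr, pol, e)
        j, thr, pol, e = best
        chosen.append(j)
        alpha = 0.5 * math.log((1 - e + 1e-10) / (e + 1e-10))
        # upweight misclassified samples for the next round
        xs = [row[j] for row in X]
        w = [wi * math.exp(-alpha * yi * (pol if xi > thr else -pol))
             for xi, yi, wi in zip(xs, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return chosen
```

On data where only the first feature separates the classes, the first selected feature is feature 0.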
Abstract:
Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on the semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system (Dynamic REtrieval Analysis and semantic metadata Management (DREAM)) designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the process of storage, indexing and retrieval of large data sets of special effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test bed Partners' creative processes. (C) 2009 Published by Elsevier B.V.
Abstract:
User interfaces have the primary role of enabling access to information that meets individual users' needs. However, user-system interaction is still rigid, especially in complex environments where various types of users are involved. Among the approaches for improving user interface agility, we present a normative approach to the design of interfaces for web applications, which allows delivering personalized services to users according to parameters extracted from the simulation of norms in the social context. A case study in an e-Government context is used to illustrate the implications of the approach.
Abstract:
Objective: This paper presents a detailed study of fractal-based methods for texture characterization of mammographic mass lesions and architectural distortion. The purpose of this study is to explore the use of fractal and lacunarity analysis for the characterization and classification of both tumor lesions and normal breast parenchyma in mammography. Materials and methods: We conducted comparative evaluations of five popular fractal dimension estimation methods for the characterization of the texture of mass lesions and architectural distortion. We applied the concept of lacunarity to the description of the spatial distribution of the pixel intensities in mammographic images. These methods were tested with a set of 57 breast masses and 60 normal breast parenchyma regions (dataset1), and with another set of 19 architectural distortions and 41 normal breast parenchyma regions (dataset2). Support vector machines (SVM) were used as a pattern classification method for tumor classification. Results: Experimental results showed that the fractal dimension of regions of interest (ROIs) depicting mass lesions and architectural distortion was statistically significantly lower than that of normal breast parenchyma for all five methods. Receiver operating characteristic (ROC) analysis showed that the fractional Brownian motion (FBM) method generated the highest area under the ROC curve (Az = 0.839 for dataset1 and 0.828 for dataset2) among the five methods for both datasets. Lacunarity analysis showed that ROIs depicting mass lesions and architectural distortion had higher lacunarities than ROIs depicting normal breast parenchyma. The combination of the FBM fractal dimension and lacunarity yielded higher Az values (0.903 and 0.875, respectively) than those based on either single feature alone for both datasets.
The application of the SVM improved the performance of the fractal-based features in differentiating tumor lesions from normal breast parenchyma by generating higher Az values. Conclusion: The FBM texture model is the most appropriate model for characterizing mammographic images because its self-affinity assumption is a better approximation. Lacunarity is an effective counterpart measure to the fractal dimension in texture feature extraction from mammographic images. The classification results obtained in this work suggest that the SVM is an effective method with great potential for classification in mammographic image analysis.
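The two texture measures can be illustrated with generic estimators (box counting for the fractal dimension and gliding-box lacunarity; the data and box sizes are illustrative, and the paper's FBM estimator is not reproduced):

```python
import math

def box_counting_dimension(points, sizes):
    """Box-counting estimate of fractal dimension: least-squares slope of
    log N(s) against log(1/s), where N(s) counts occupied boxes of side s."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(int(px // s), int(py // s)) for px, py in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sum((a - mx) ** 2 for a in xs)

def lacunarity(grid, box):
    """Gliding-box lacunarity: second moment of the box mass divided by the
    squared first moment; higher values mean gappier texture."""
    h, w = len(grid), len(grid[0])
    masses = [sum(grid[y + dy][x + dx] for dy in range(box) for dx in range(box))
              for y in range(h - box + 1) for x in range(w - box + 1)]
    m1 = sum(masses) / len(masses)
    m2 = sum(m * m for m in masses) / len(masses)
    return m2 / (m1 * m1)
```

A completely filled square has dimension 2 and lacunarity 1, the respective baselines for both measures.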
Abstract:
Boolean input systems are commonly used in the electric power industry. Power supplies include such systems, and power converters are a representative example. For instance, in power electronics, the control variables are the switching ON and OFF of components such as thyristors or transistors. The purpose of this paper is to use neural networks (NN) to control continuous systems with Boolean inputs. The method is based on classification of system variations associated with input configurations. The classical supervised backpropagation algorithm is used to train the networks. The training of the artificial neural network and the control of Boolean input systems are presented. The design procedure for the control systems is implemented on a nonlinear system. We apply these results to control an electrical system composed of an induction machine and its power converter.
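A minimal backpropagation network over Boolean input configurations can be sketched as follows (the logical AND function stands in for a real plant mapping; the induction machine and converter model are not reproduced, and the network size and learning rate are assumptions):

```python
import math, random

def train_boolean_net(data, hidden=3, epochs=5000, lr=0.5, seed=1):
    """One-hidden-layer sigmoid network trained by plain backpropagation on
    (Boolean input tuple, target) pairs; returns a prediction function."""
    random.seed(seed)
    n_in = len(data[0][0])
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    # weight rows include a trailing bias term
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(hidden)]
    w2 = [random.uniform(-0.5, 0.5) for _ in range(hidden + 1)]
    for _ in range(epochs):
        for x, t in data:
            xb = list(x) + [1.0]                      # input + bias
            h = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1]
            hb = h + [1.0]                            # hidden + bias
            o = sig(sum(w * v for w, v in zip(w2, hb)))
            do = (o - t) * o * (1.0 - o)              # output delta
            dh = [do * w2[j] * h[j] * (1.0 - h[j]) for j in range(hidden)]
            w2 = [w - lr * do * v for w, v in zip(w2, hb)]
            w1 = [[w - lr * dh[j] * v for w, v in zip(w1[j], xb)] for j in range(hidden)]
    def predict(x):
        xb = list(x) + [1.0]
        hb = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1] + [1.0]
        return sig(sum(w * v for w, v in zip(w2, hb)))
    return predict
```

After training on the four AND configurations, the network output is above 0.5 only for the (1, 1) input.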
Abstract:
There is growing interest, especially for trials in stroke, in combining multiple endpoints in a single clinical evaluation of an experimental treatment. The endpoints might be repeated evaluations of the same characteristic or alternative measures of progress on different scales. Often they will be binary or ordinal, and those are the cases studied here. In this paper we take a direct approach to combining the univariate score statistics for comparing treatments with respect to each endpoint. The correlations between the score statistics are derived and used to allow a valid combined score test to be applied. A sample size formula is deduced and application in sequential designs is discussed. The method is compared with an alternative approach based on generalized estimating equations in an illustrative analysis and replicated simulations, and the advantages and disadvantages of the two approaches are discussed.
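The core of combining correlated score statistics can be sketched generically: standardise the sum of the univariate statistics by the variance implied by their pairwise correlations (an assumed simplification, not the paper's full derivation of those correlations):

```python
import math

def combined_score_test(z_stats, corr):
    """Combine standardised univariate score statistics across endpoints:
    Var(sum Z_i) = k + 2 * sum of off-diagonal correlations, so dividing the
    sum by its standard deviation yields a single standard-normal statistic
    under the global null."""
    k = len(z_stats)
    var = k + 2.0 * sum(corr[i][j] for i in range(k) for j in range(i + 1, k))
    return sum(z_stats) / math.sqrt(var)
```

With two perfectly correlated statistics the combination adds nothing (it equals either statistic); with independent statistics the combined value is larger, reflecting the gain from a second endpoint.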