21 results for Cross-entropy method

in Aston University Research Archive


Relevance:

90.00%

Publisher:

Abstract:

Concept evaluation at the early phase of product development plays a crucial role in new product development, as it determines the direction of the subsequent design activities. However, the evaluation information at this stage comes mainly from experts' judgments, which are subjective and imprecise. Managing this subjectivity to reduce evaluation bias is a major challenge in design concept evaluation. This paper proposes a comprehensive evaluation method which combines information entropy theory with rough numbers. Rough numbers are first used to aggregate individual judgments and priorities and to handle the vagueness of a group decision-making environment. A rough-number-based information entropy method is then proposed to determine the relative weights of the evaluation criteria. Composite performance values based on rough numbers are finally calculated to rank the candidate design concepts. The results of a practical case study on the concept evaluation of an industrial robot design show that the integrated evaluation model can effectively strengthen objectivity across the decision-making process.
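As a minimal sketch of the rough-number construction described above (the interval limits and the aggregation step are the standard definitions; the expert scores are hypothetical):

import numpy as np

def rough_number(judgments):
    # Rough interval [lower, upper] of each expert judgment: the lower
    # limit is the mean of all judgments <= x, the upper limit the mean
    # of all judgments >= x.
    x = np.asarray(judgments, dtype=float)
    lower = np.array([x[x <= v].mean() for v in x])
    upper = np.array([x[x >= v].mean() for v in x])
    return np.stack([lower, upper], axis=1)

# Five hypothetical expert ratings of one design concept on a 1-9 scale.
scores = [5, 6, 6, 7, 9]
intervals = rough_number(scores)
group_interval = intervals.mean(axis=0)  # aggregated rough interval
print(intervals, group_interval)

The wider the aggregated interval, the more the experts disagree; this is the vagueness the method carries forward into the entropy-based weighting.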

Relevance:

80.00%

Publisher:

Abstract:

Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.
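A mixture density network is small enough to sketch directly; the following is an illustrative PyTorch version (not the paper's exact architecture), trained on a toy multi-valued inverse problem, x = sin(y) + noise:

import torch
import torch.nn as nn

class MixtureDensityNetwork(nn.Module):
    # A feed-forward network whose outputs parameterise a Gaussian
    # mixture over the target, conditioned on the input.
    def __init__(self, in_dim, n_components, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.pi = nn.Linear(hidden, n_components)         # mixing coefficients
        self.mu = nn.Linear(hidden, n_components)         # component means
        self.log_sigma = nn.Linear(hidden, n_components)  # log std deviations

    def forward(self, x):
        h = self.body(x)
        return torch.log_softmax(self.pi(h), dim=-1), self.mu(h), self.log_sigma(h)

def mdn_nll(log_pi, mu, log_sigma, y):
    # Negative log-likelihood of y under the predicted mixture.
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = comp.log_prob(y.unsqueeze(-1))  # per-component log-density
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Toy inverse problem: several y values map to the same x.
y = torch.rand(1000, 1) * 6 - 3
x = torch.sin(y) + 0.1 * torch.randn_like(y)
model = MixtureDensityNetwork(1, n_components=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = mdn_nll(*model(x), y.squeeze(-1))
    loss.backward()
    opt.step()

Predicting the conditional mean here would fall between the branches of the inverse mapping; the mixture instead keeps the branches as separate components.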

Relevance:

80.00%

Publisher:

Abstract:

The implementation of Enterprise Resource Planning (ERP) systems requires huge investments, yet ineffective implementations of such projects are commonly observed. A considerable number of these projects have been reported to fail or to take longer than initially planned, and previous studies show that the aim of rapid implementation of such projects has not been met; the failure to achieve the fundamental goals of these projects has imposed huge costs on investors. Some of the major consequences are reduced demand for such products and increased skepticism among the managers and investors of ERP systems. In this regard, it is important to understand the factors determining the success or failure of ERP implementation. The aim of this paper is to study the critical success factors (CSFs) in implementing ERP systems and to develop a conceptual model which can serve as a basis for ERP project managers. These critical success factors, termed "core critical success factors", are extracted from 62 published papers using content analysis and the entropy method. The proposed conceptual model has been verified in the context of five multinational companies.
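The abstract does not give the exact entropy formulation; the sketch below is the standard entropy weight method the description suggests, applied to purely hypothetical content-analysis scores:

import numpy as np

def entropy_weights(decision_matrix):
    # Criteria whose scores vary more across alternatives carry more
    # information and therefore receive more weight.
    X = np.asarray(decision_matrix, dtype=float)
    P = X / X.sum(axis=0)                # normalise each criterion column
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -np.nansum(P * np.log(P), axis=0) / np.log(m)  # entropy per criterion
    d = 1.0 - E                          # degree of diversification
    return d / d.sum()

# Rows = candidate success factors, columns = hypothetical criteria scores.
scores = [[7, 2, 9], [6, 8, 8], [5, 9, 7], [7, 3, 9]]
print(entropy_weights(scores))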

Relevance:

80.00%

Publisher:

Abstract:

Social streams have proven to be the most up-to-date and inclusive source of information on current events. In this paper we propose a novel probabilistic modelling framework, called the violence detection model (VDM), which enables the identification of text containing violent content and the extraction of violence-related topics from social media data. The proposed VDM model does not require any labeled corpora for training; instead, it only needs the incorporation of word prior knowledge which captures whether a word indicates violence or not. We propose a novel approach to deriving word prior knowledge using a relative entropy measurement of words, based on the intuition that low entropy words are indicative of semantically coherent topics and therefore more informative, while high entropy words are used across more diverse topics and are therefore less informative. Our proposed VDM model has been evaluated on the TREC Microblog 2011 dataset to identify topics related to violence. Experimental results show that deriving word priors using our proposed relative entropy method is more effective than the widely-used information gain method. Moreover, VDM achieves higher violence classification results and produces more coherent violence-related topics compared to a few competitive baselines.
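A simplified stand-in for the word-prior idea (the paper uses a relative entropy measurement; for brevity this sketch scores each word by the plain Shannon entropy of its distribution over documents):

import math
from collections import Counter, defaultdict

def word_entropy(docs):
    # Low-entropy words concentrate in a few documents (topically
    # coherent, informative); high-entropy words are spread thinly
    # everywhere (less informative).
    counts = defaultdict(Counter)
    for i, doc in enumerate(docs):
        for w in doc.lower().split():
            counts[w][i] += 1
    entropy = {}
    for w, per_doc in counts.items():
        total = sum(per_doc.values())
        entropy[w] = -sum((c / total) * math.log(c / total)
                          for c in per_doc.values())
    return entropy

docs = ["riot police clash with protesters",
        "police arrest suspects after riot",
        "the match ended with the home side winning"]
H = word_entropy(docs)
# Words below a chosen entropy threshold could seed the violence prior.
print(sorted(H, key=H.get)[:5])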

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Standardised packaging (SP) of tobacco products is an innovative tobacco control measure opposed by transnational tobacco companies (TTCs) whose responses to the UK government's public consultation on SP argued that evidence was inadequate to support implementing the measure. The government's initial decision, announced 11 months after the consultation closed, was to wait for 'more evidence', but four months later a second 'independent review' was launched. In view of the centrality of evidence to debates over SP and TTCs' history of denying harms and manufacturing uncertainty about scientific evidence, we analysed their submissions to examine how they used evidence to oppose SP. METHODS AND FINDINGS: We purposively selected and analysed two TTC submissions using a verification-oriented cross-documentary method to ascertain how published studies were used and interpretive analysis with a constructivist grounded theory approach to examine the conceptual significance of TTC critiques. The companies' overall argument was that the SP evidence base was seriously flawed and did not warrant the introduction of SP. However, this argument was underpinned by three complementary techniques that misrepresented the evidence base. First, published studies were repeatedly misquoted, distorting the main messages. Second, 'mimicked scientific critique' was used to undermine evidence; this form of critique insisted on methodological perfection, rejected methodological pluralism, adopted a litigation (not scientific) model, and was not rigorous. Third, TTCs engaged in 'evidential landscaping', promoting a parallel evidence base to deflect attention from SP and excluding company-held evidence relevant to SP. The study's sample was limited to sub-sections of two out of four submissions, but leaked industry documents suggest at least one other company used a similar approach. CONCLUSIONS: The TTCs' claim that SP will not lead to public health benefits is largely without foundation. The tools of Better Regulation, particularly stakeholder consultation, provide an opportunity for highly resourced corporations to slow, weaken, or prevent public health policies.

Relevance:

40.00%

Publisher:

Abstract:

Background: Adherence to treatment is often reported to be low in children with cystic fibrosis. Adherence in cystic fibrosis is an important research area, and more research is needed to better understand family barriers to adherence in order for clinicians to provide appropriate intervention. The aim of this study was to evaluate adherence to enzyme supplements, vitamins and chest physiotherapy in children with cystic fibrosis and to determine if any modifiable risk factors are associated with adherence. Methods: A sample of 100 children (≤18 years) with cystic fibrosis (44 male; median [range] 10.1 [0.2-18.6] years) and their parents were recruited to the study from the Northern Ireland Paediatric Cystic Fibrosis Centre. Adherence to enzyme supplements, vitamins and chest physiotherapy was assessed using a multi-method approach including: the Medication Adherence Report Scale, pharmacy prescription refill data and general practitioner prescription issue data. Beliefs about treatments were assessed using refined versions of the Beliefs about Medicines Questionnaire-specific. Parental depressive symptoms were assessed using the Center for Epidemiologic Studies Depression Scale. Results: Using the multi-method approach, 72% of children were classified as low-adherers to enzyme supplements, 59% as low-adherers to vitamins and 49% as low-adherers to chest physiotherapy. Variations in adherence were observed between measurement methods, treatments and respondents. Parental necessity beliefs and child age were significant independent predictors of child adherence to enzyme supplements and chest physiotherapy, but parental depressive symptoms were not found to be predictive of adherence. Conclusions: Child age and parental beliefs about treatments should be taken into account by clinicians when addressing adherence at routine clinic appointments. Low adherence is more likely to occur in older children, whereas better adherence to cystic fibrosis therapies is more likely in children whose parents strongly believe the treatments are necessary. The necessity of treatments should be reinforced regularly to both parents and children.

Relevance:

30.00%

Publisher:

Abstract:

Two probabilistic interpretations of the n-tuple recognition method are put forward in order to allow this technique to be analysed with the same Bayesian methods used in connection with other neural network models. Elementary demonstrations are then given of the use of maximum likelihood and maximum entropy methods for tuning the model parameters and assisting their interpretation. One of the models can be used to illustrate the significance of overlapping n-tuple samples with respect to correlations in the patterns.
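To make the setting concrete, here is a minimal WISARD-style n-tuple recogniser (illustrative only; the paper's two probabilistic models are not reproduced here). Maximum likelihood tuning of such a model reduces to the frequency counts stored in the lookup tables:

import numpy as np

rng = np.random.default_rng(0)

class NTupleClassifier:
    # Each randomly chosen group of n input bits addresses a lookup
    # table of pattern counts per class.
    def __init__(self, input_bits, n=4, n_tuples=50, n_classes=2):
        self.addr = [rng.choice(input_bits, size=n, replace=False)
                     for _ in range(n_tuples)]
        self.tables = np.zeros((n_tuples, 2 ** n, n_classes))

    def _index(self, x):
        return [int("".join(str(b) for b in x[a]), 2) for a in self.addr]

    def fit(self, X, y):
        for x, c in zip(X, y):
            for t, idx in enumerate(self._index(x)):
                self.tables[t, idx, c] += 1

    def predict(self, x):
        # Sum Laplace-smoothed log-frequencies across tuples: a simple
        # maximum-likelihood reading of the stored counts.
        score = sum(np.log(self.tables[t, idx] + 1)
                    for t, idx in enumerate(self._index(x)))
        return int(np.argmax(score))

X = rng.integers(0, 2, size=(200, 64))
y = rng.integers(0, 2, size=200)
clf = NTupleClassifier(64)
clf.fit(X, y)
print(clf.predict(X[0]))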

Relevance:

30.00%

Publisher:

Abstract:

The existing method of pipeline health monitoring, which requires an entire pipeline to be inspected periodically, is both time-wasting and expensive. A risk-based model that reduces the amount of time spent on inspection is presented. This model not only reduces the cost of maintaining petroleum pipelines, but also suggests an efficient design and operation philosophy, construction methodology and logical insurance plans. The risk-based model uses the analytic hierarchy process (AHP), a multiple-attribute decision-making technique, to identify the factors that influence failure on specific segments and to analyze their effects by determining the probability of risk factors. The severity of failure is determined through consequence analysis. From this, the effect of a failure caused by each risk factor can be established in terms of cost, and the cumulative effect of failure is determined through probability analysis. The technique does not totally eliminate subjectivity, but it is an improvement over the existing inspection method.
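A brief sketch of the AHP step named above: the priority vector of a pairwise-comparison matrix via its principal eigenvector, with Saaty's consistency ratio. The comparison matrix and the factor names are hypothetical:

import numpy as np

def ahp_weights(pairwise):
    # Principal-eigenvector priorities plus the consistency ratio (CR).
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index
    return w, ci / ri

# e.g. external corrosion vs third-party damage vs construction defects.
A = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(A)
print(w, cr)  # weights sum to 1; CR < 0.1 is conventionally acceptable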

Relevance:

30.00%

Publisher:

Abstract:

A cross-sectional study aims to describe the overall picture of a phenomenon, a situational problem, an attitude or an issue by asking a cross-section of a given population at one specified moment in time. This paper describes the key features of the cross-sectional survey method. It begins by highlighting the main principles of the method, then discusses stages in the research process, drawing on two surveys of primary care pharmacists to illustrate some salient points about planning, sampling frames, definition and conceptual issues, research instrument design and response rates. Four constraints in prescribing studies were noted. First, the newness of the subject meant there was little existing knowledge on which to base a questionnaire. Second, there was no public existing database for the sampling frame, so a pragmatic sampling exercise was used. Third, the definition of a primary care pharmacist (PCP) and respondents' recognition of that name and identification with the new role limited the response. Fourth, a problem for all surveys, but particularly those of pharmacists and general practitioners (GPs), is the growing danger of survey fatigue, which has a negative impact on response levels.

Relevance:

30.00%

Publisher:

Abstract:

The object of this thesis is to develop a method for calculating the losses developed in steel conductors of circular cross-section, at temperatures below 100°C, by the direct passage of a sinusoidally alternating current. Three cases are considered: (1) an isolated solid or tubular conductor; (2) a concentric arrangement of a tube and a solid return conductor; (3) a concentric arrangement of two tubes. These cases find applications in process temperature maintenance of pipelines, resistance heating of bars and the design of bus-bars. The problems associated with the non-linearity of steel are examined. Resistance heating of bars and methods of surface heating of pipelines are briefly described. Magnetic-linear solutions based on Maxwell's equations are critically examined and the conditions under which the various formulae apply are investigated. The conditions under which a tube is electrically equivalent to a solid conductor and to a semi-infinite plate are derived. Existing solutions for the calculation of losses in isolated steel conductors of circular cross-section are reviewed, evaluated and compared. Two methods of solution are developed for the three cases considered. The first is based on the magnetic-linear solutions and offers an alternative to the available methods, which are not universal. The second solution extends the existing B/H step-function approximation method to small-diameter conductors and to tubes in isolation or in a concentric arrangement. A comprehensive experimental investigation is presented for cases 1 and 2 above which confirms the validity of the proposed methods of solution. These are further supported by experimental results reported in the literature. Good agreement is obtained between measured and calculated loss values for surface field strengths beyond the linear part of the d.c. magnetisation characteristic. It is also shown that there is a difference in the electrical behaviour of a small-diameter conductor or thin tube under resistance and induction heating conditions.
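For the magnetic-linear case discussed above, the classical Kelvin-function solution for an isolated solid round conductor can be evaluated directly. The sketch below computes the standard a.c./d.c. resistance ratio; the steel resistivity and relative permeability are assumed round numbers, and the thesis's non-linear methods go beyond this linear theory:

import numpy as np
from scipy.special import ber, bei, berp, beip

def ac_dc_resistance_ratio(radius, rho, mu_r, freq):
    # Standard magnetic-linear skin-effect result for a solid round wire.
    mu0 = 4e-7 * np.pi
    delta = np.sqrt(2 * rho / (2 * np.pi * freq * mu0 * mu_r))  # skin depth
    q = np.sqrt(2) * radius / delta
    num = ber(q) * beip(q) - bei(q) * berp(q)
    den = berp(q) ** 2 + beip(q) ** 2
    return (q / 2) * num / den

# Assumed values for a 20 mm diameter steel bar at 50 Hz.
print(ac_dc_resistance_ratio(radius=0.01, rho=2e-7, mu_r=300, freq=50))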

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of risk calculation were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation in detecting patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for the formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33 to 0.65 for both practice nurses and general practitioners, depending on the calculation tool), showing a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47 to 0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%) and also the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by the poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data about risk factors need to be collected systematically to allow the use of the most appropriate calculation tools.

Relevance:

30.00%

Publisher:

Abstract:

Isocyanate cross-linked hydroxy-terminated polybutadiene is used as a binder for solid rocket propellant. Rocket motors containing this propellant require a storage life of at least 20 years. During storage it has been found that the important rubbery properties of the binder can be lost due to oxidative cross-linking of the polybutadiene chains, which could cause catastrophic failure when the rocket motor is required. At present the bis-hindered phenol Calco 2246 is used as a thermal oxidative stabiliser, but its performance is only adequate. This has led to the search for a more efficient stabiliser system. To hasten the evaluation of new antioxidant systems, the use of dynamic thermal analysis was investigated. Results showed that a tentative relationship existed between predictions by thermal analysis and long-term oven ageing for simple single-antioxidant systems. For more complex systems containing either autosynergistic or mixed antioxidants, however, no relationship was observed, suggesting that results from such an "accelerated" technique cannot be extrapolated to long-term performance. This was attributed to the short time and more aggressive conditions used (higher temperature and an oxygen-rich atmosphere in thermal analysis) altering the mechanism of action of the antioxidants and not allowing time for the co-operative effect of the combined antioxidant system to form. One potential problem for the binder system is the use of a diisocyanate as a cross-linking agent. This reacts with the hydroxyl hydrogen on the polymer as well as with other active hydrogens, such as those contained in a number of antioxidants, affecting both cross-linking and antioxidant effectiveness. Studies in this work showed that only antioxidants containing amine moieties have a significant effect on binder preparation, with the phenolic antioxidants not reacting; this is due to the greater nucleophilicity of the amines. Investigation of a range of antioxidant systems, including potentially homo-, hetero- and autosynergistic systems, has highlighted a number of systems which show considerably greater effectiveness than the currently used antioxidant Calco 2246. The only single antioxidant which showed improvement was the partially unhindered phenol γ-tocopherol. Of the mixed systems, combinations of sulphur-containing antioxidants (e.g. DLTP) with higher levels of chain-breaking antioxidants, especially Calco 2246, were the most promising. The homosynergistic mix of an aromatic amine and a phenol was also seen to be very effective, but the results were inconsistent. This inconsistency could be explained by the method of sample preparation used: it was shown that the efficiency of a number of antioxidants could be dramatically improved by the use of ultrasound during the mixing stage of preparation. The reason for this increase in performance is unclear, but in the case of the homosynergistic amine/phenol mix both more efficient mixing and/or the production of a novel mechanism of action are suggested.

Relevance:

30.00%

Publisher:

Abstract:

The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
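The article's sequential relative-entropy algorithm is not reproduced here; the sketch below uses a much cruder greedy heuristic (retain the points of highest predictive variance under the current basis, then krige on that subset) only to illustrate the idea of keeping a small set of basis vectors:

import numpy as np

def rbf(A, B, ell=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ell ** 2)

def greedy_sparse_gp(X, y, m, noise=0.1):
    basis = [0]
    for _ in range(m - 1):
        Kbb = rbf(X[basis], X[basis]) + noise * np.eye(len(basis))
        Kxb = rbf(X, X[basis])
        # Predictive variance of every point under the current basis.
        var = 1.0 - np.sum(Kxb @ np.linalg.inv(Kbb) * Kxb, axis=1)
        var[basis] = -np.inf                 # never re-select a basis point
        basis.append(int(np.argmax(var)))
    Kbb = rbf(X[basis], X[basis]) + noise * np.eye(m)
    alpha = np.linalg.solve(Kbb, y[basis])
    return lambda Xs: rbf(Xs, X[basis]) @ alpha, basis

X = np.random.rand(500, 1) * 10
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(500)
predict, basis = greedy_sparse_gp(X, y, m=20)
print(predict(np.array([[2.5]])))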

Relevance:

30.00%

Publisher:

Abstract:

There has been much recent research into extracting useful diagnostic features from the electrocardiogram, with numerous studies claiming impressive results. However, the robustness and consistency of the methods employed in these studies is rarely, if ever, mentioned. Hence, we propose two new methods: a biologically motivated time series derived from consecutive P-wave durations, and a mathematically motivated regularity measure. We investigate the robustness of these two methods when compared with current corresponding methods. We find that the new time series performs admirably as a complement to the current method, and the new regularity measure consistently outperforms the current measure in numerous tests on real and synthetic data.
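The paper does not name its regularity measure; sample entropy is shown here purely as a representative regularity statistic, applied to hypothetical consecutive P-wave durations:

import numpy as np

def sample_entropy(series, m=2, r=0.2):
    # SampEn = -ln(A/B), where B counts template matches of length m and
    # A those of length m+1, within tolerance r * std; lower values mean
    # a more regular series.
    x = np.asarray(series, dtype=float)
    tol = r * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return ((d <= tol).sum() - len(templ)) / 2  # exclude self-matches
    return -np.log(matches(m + 1) / matches(m))

# Hypothetical P-wave durations in milliseconds.
p_waves = 100 + 5 * np.random.randn(200)
print(sample_entropy(p_waves))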
