991 results for Subset Sum Problem


Relevance: 20.00%

Publisher:

Abstract:

This paper uses a comparative perspective to analyze how multiracial congregations may contribute to racial reconciliation in South Africa. Drawing on the large-scale study of multiracial congregations in the USA by Emerson et al., it examines how such congregations help transform antagonistic identities and make religious contributions to wider reconciliation processes. It compares the American research to an ethnographic study of a congregation in Cape Town, identifying cross-national patterns and South African distinctives, such as discourses about restitution, AIDS, inequality and women. The extent to which multiracial congregations can contribute to reconciliation in South Africa is linked to the content of their worship and discourses, but especially to their ability to dismantle racially aligned power structures. © Koninklijke Brill NV, 2008.


Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time-domain surveys with large format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next generation all-sky surveys, and significant effort is now being invested to solve the problem computationally. In this paper, we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of approximately 32 000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20 × 20 pixel stamp around the centre of the candidates. This differs from previous work in that it works directly on the pixels rather than relying on catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25 per cent of the training data. We find the best results from the random forest classifier and demonstrate that by accepting a false positive rate of 1 per cent, the classifier initially suggests a missed detection rate of around 10 per cent. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6 per cent.
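The trade-off reported in this abstract, fixing the false positive rate at 1 per cent and reading off the missed detection rate, can be sketched in a few lines. The score distributions and sample sizes below are hypothetical stand-ins, not the Pan-STARRS1 pipeline or its trained random forest:

```python
import random

def threshold_at_fpr(bogus_scores, fpr=0.01):
    """Score threshold above which only a fraction `fpr` of bogus
    detections would be passed on as real."""
    s = sorted(bogus_scores)
    k = min(int((1.0 - fpr) * len(s)), len(s) - 1)
    return s[k]

def missed_detection_rate(real_scores, threshold):
    """Fraction of real transients falling at or below the threshold."""
    return sum(1 for s in real_scores if s <= threshold) / len(real_scores)

random.seed(0)
# Hypothetical score distributions: bogus detections cluster at low scores,
# real transients at high scores (stand-ins for classifier probabilities).
bogus = [random.betavariate(2, 8) for _ in range(10_000)]
real = [random.betavariate(8, 2) for _ in range(10_000)]

t = threshold_at_fpr(bogus, fpr=0.01)
mdr = missed_detection_rate(real, t)
```

With a real classifier, `bogus` and `real` would hold its scores on a held-out set; sweeping `fpr` traces out the trade-off curve described in the abstract.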


This paper proposes an efficient learning mechanism to build fuzzy rule-based systems through the construction of sparse least-squares support vector machines (LS-SVMs). In addition to the significantly reduced computational complexity in model training, the resultant LS-SVM-based fuzzy system is sparser while offering satisfactory generalization capability on unseen data. It is well known that LS-SVMs have a computational advantage over conventional SVMs in the model training process; however, model sparseness is lost, which is the main drawback of LS-SVMs and remains an open problem. To tackle the nonsparseness issue, a new regression alternative to the Lagrangian solution for the LS-SVM is first presented. A novel efficient learning mechanism is then proposed to extract a sparse set of support vectors for generating fuzzy IF-THEN rules. This mechanism works in a stepwise subset selection manner, including a forward expansion phase and a backward exclusion phase in each selection step. The implementation of the algorithm is computationally very efficient due to the introduction of a few key techniques that avoid matrix inverse operations and accelerate the training process. The computational efficiency is also confirmed by detailed computational complexity analysis. As a result, the proposed approach not only achieves sparseness in the resultant LS-SVM-based fuzzy systems but also significantly reduces the computational effort of model training. Three experimental examples are presented to demonstrate the effectiveness and efficiency of the proposed learning mechanism and the sparseness of the obtained LS-SVM-based fuzzy systems, in comparison with other SVM-based learning techniques.
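The forward-expansion/backward-exclusion style of subset selection can be illustrated with a generic least-squares version. This is a minimal sketch under simplifying assumptions (a plain residual-sum-of-squares criterion and hand-picked candidate functions), not the paper's LS-SVM formulation:

```python
import math

def solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for c in range(n - 1, -1, -1):
        x[c] = (m[c][n] - sum(m[c][k] * x[k] for k in range(c + 1, n))) / m[c][c]
    return x

def rss_for(subset, cols, y):
    """Residual sum of squares of the least-squares fit restricted to `subset`."""
    n = len(y)
    a = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in subset] for i in subset]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in subset]
    w = solve(a, b)
    return sum((y[t] - sum(w[k] * cols[i][t] for k, i in enumerate(subset))) ** 2
               for t in range(n))

def forward_backward_select(cols, y, n_terms, tol=1e-9):
    chosen = []
    for _ in range(n_terms):
        # Forward expansion: add the candidate that lowers the RSS most.
        best = min((i for i in range(len(cols)) if i not in chosen),
                   key=lambda i: rss_for(chosen + [i], cols, y))
        chosen.append(best)
        # Backward exclusion: drop terms made redundant by later additions.
        full = rss_for(chosen, cols, y)
        for i in list(chosen[:-1]):
            reduced = [j for j in chosen if j != i]
            if rss_for(reduced, cols, y) <= full + tol:
                chosen, full = reduced, rss_for(reduced, cols, y)
    return chosen

# Hypothetical candidate basis: discrete Fourier terms sampled over one
# full period, which are mutually orthogonal, plus a constant column.
N = 32
ts = [2 * math.pi * k / N for k in range(N)]
cands = [[1.0] * N,
         [math.sin(t) for t in ts],
         [math.cos(t) for t in ts],
         [math.sin(2 * t) for t in ts],
         [math.cos(2 * t) for t in ts]]
# Target built from candidates 1 and 4 only; selection should recover them.
y = [2 * c1 + 3 * c4 for c1, c4 in zip(cands[1], cands[4])]

chosen = forward_backward_select(cands, y, n_terms=2)
```

In the paper's setting, the candidate columns would instead be kernel columns of the LS-SVM, each retained column yielding one fuzzy IF-THEN rule.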


Necessary and sufficient conditions for choice functions to be rational have been intensively studied in the past. However, in these attempts, a choice function is completely specified. That is, given any subset of options, called an issue, the best option over that issue is always known, whereas in real-world scenarios it is common for only a few choices to be known. In this paper, we study partial choice functions and investigate necessary and sufficient rationality conditions for situations where only a few choices are known. We prove that our necessary and sufficient condition for partial choice functions reduces to the necessary and sufficient conditions for complete choice functions proposed in the literature. Choice functions have been instrumental in belief revision theory: in most approaches to belief revision, the problem studied can simply be described as the choice of possible worlds compatible with the input information, given an agent's prior belief state. The main effort has been to devise strategies in order to infer the agent's revised belief state. Our study considers the converse problem: given a collection of input information items and their corresponding revision results (as provided by an agent), does there exist a rational revision operation used by the agent and a consistent belief state that may explain the observed results?
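The abstract does not state the paper's rationality condition itself, so as a stand-in illustration, a classical condition, the weak axiom of revealed preference (WARP), can be checked directly on a partially specified choice function where only some issues have a recorded choice:

```python
def satisfies_warp(observations):
    """Check a partially observed, single-valued choice function against
    the weak axiom of revealed preference (WARP): if x is chosen from an
    issue containing y, no observed issue containing both may choose y."""
    for issue, x in observations.items():
        if x not in issue:
            return False  # a choice must come from its own issue
        for other, z in observations.items():
            for y in issue:
                if y != x and x in other and y in other and z == y:
                    return False
    return True

# Hypothetical partial observations: choices are recorded for only three
# of the possible issues over the options {'a', 'b', 'c'}.
obs = {frozenset({'a', 'b'}): 'a',
       frozenset({'a', 'c'}): 'a',
       frozenset({'b', 'c'}): 'b'}
consistent = satisfies_warp(obs)
```

A `False` result means no rational (preference-maximising) agent could have produced the observed choices, which mirrors the converse question the paper asks about observed belief revision results.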


Blood culture contamination (BCC) has been associated with unnecessary antibiotic use, additional laboratory tests and increased length of hospital stay, thus incurring significant extra hospital costs. We set out to assess the impact of a staff educational intervention programme on decreasing intensive care unit (ICU) BCC rates to <3% (American Society for Microbiology standard). BCC rates during the pre-intervention period (January 2006-May 2011) were compared with the intervention period (June 2011-December 2012) using run chart and regression analysis. Monthly ICU BCC rates during the intervention period were reduced to a mean of 3.7%, compared to 9.5% during the baseline period (P < 0.001), with an estimated potential annual cost saving of about £250 100. The approach used was simple in design, flexible in delivery and efficient in outcomes, which may encourage its translation into clinical practice in different healthcare settings.


A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can be transformed into a model selection problem where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations using the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness constructed by the proposed technique in comparison with some popular methods.
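The forward stage with an error reduction ratio (ERR) criterion can be sketched as a classical orthogonal least squares loop. The backward refinement and term-exchange scheme of the paper are omitted here, and the candidate basis is a hypothetical example:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def forward_ols(candidates, y, n_terms):
    """Forward orthogonal least squares: rank candidates by the error
    reduction ratio (ERR) after Gram-Schmidt orthogonalisation against
    the terms already selected."""
    basis, selected = [], []
    yy = dot(y, y)
    remaining = list(range(len(candidates)))
    for _ in range(n_terms):
        best = None
        for i in remaining:
            w = candidates[i][:]
            for q in basis:  # orthogonalise against the selected terms
                coef = dot(w, q) / dot(q, q)
                w = [wk - coef * qk for wk, qk in zip(w, q)]
            ww = dot(w, w)
            if ww < 1e-12:
                continue  # numerically dependent on the current selection
            g = dot(w, y) / ww
            err = g * g * ww / yy  # fraction of output energy explained
            if best is None or err > best[0]:
                best = (err, i, w)
        basis.append(best[2])
        selected.append(best[1])
        remaining.remove(best[1])
    return selected

# Hypothetical candidate terms: discrete Fourier columns over one full
# period (mutually orthogonal) plus a constant column.
N = 32
ts = [2 * math.pi * k / N for k in range(N)]
cands = [[1.0] * N,
         [math.sin(t) for t in ts],
         [math.cos(t) for t in ts],
         [math.sin(2 * t) for t in ts],
         [math.cos(2 * t) for t in ts]]
# Target built from candidates 2 and 3; the larger contributor is picked first.
y = [2 * c2 + 1 * c3 for c2, c3 in zip(cands[2], cands[3])]

selected = forward_ols(cands, y, n_terms=2)
```

The paper's contribution lies in making the subsequent backward stage reuse these orthogonal quantities instead of recomputing them, which this sketch does not attempt.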


Objective: Communication skills can be trained alongside clinical reasoning, history taking or clinical examination skills. This is advocated as a solution to the low transfer of communication skills. Still, students have to integrate the knowledge/skills acquired during different curriculum parts in patient consultations at some point. How do medical students experience these integrated consultations within a simulated environment and in real practice when dealing with responsibility?

Methods: Six focus groups were conducted with pre-clerkship and clerkship students.

Results: Students were motivated to practice integrated consultations with simulated patients and felt like 'real physicians'. However, their focus on medical problem solving drew attention away from improving their communication skills. Responsibility for real patients triggered students' identity development. This identity formation guided the development of an own consultation style, a process that was hampered by conflicting demands of role models.

Conclusion: Practicing complete consultations results in the dilemma of prioritizing medical problem solving over attention to patient communication. Integrated consultation training brings this dilemma forward to the pre-clerkship period. During clerkships this dilemma is heightened because real patients trigger empathy and responsibility, which invites students to define their role as doctors.

Practice Implications: When training integrated consultations, educators should pay attention to students' learning priorities and support the development of students' professional identity.


Periodic monitoring of the pavement condition facilitates a cost-effective distribution of the resources available for maintenance of the road infrastructure network. The task can be accurately carried out using profilometers, but such an approach is generally expensive. This paper presents a method to collect information on the road profile via accelerometers mounted in a fleet of non-specialist vehicles, such as police cars, that are in use for other purposes. It proposes an optimisation algorithm, based on Cross Entropy theory, to predict road irregularities. The Cross Entropy algorithm estimates the height of the road irregularities from vehicle accelerations at each point in time. To test the algorithm, the crossing of a half-car roll model is simulated over a range of road profiles to obtain accelerations of the vehicle sprung and unsprung masses. Then, the simulated vehicle accelerations are used as input in an iterative procedure that searches for the best solution to the inverse problem of finding road irregularities. In each iteration, a sample of road profiles is generated and an objective function, defined as the sum of squares of differences between the 'measured' and predicted accelerations, is minimized until convergence is reached. The reconstructed profile is classified according to ISO and IRI recommendations and compared to its original class. Results demonstrate that the approach is feasible and that a good estimate of the short-wavelength features of the road profile can be obtained, despite the variability between the vehicles used to collect the data.
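A minimal cross-entropy loop of this kind samples candidate profiles from a Gaussian, keeps the elite samples under the sum-of-squares objective, and refits the sampling distribution to them. The half-car vehicle model is replaced here by an identity stand-in and the target profile is hypothetical, so this sketches only the optimisation mechanics:

```python
import random

def cross_entropy_minimise(objective, dim, n_samples=200, n_elite=20,
                           iters=60, seed=1):
    """Cross-entropy method: sample candidates from a Gaussian, keep the
    elite fraction, and refit the Gaussian's mean and spread to them."""
    rng = random.Random(seed)
    mean = [0.0] * dim
    std = [1.0] * dim
    for _ in range(iters):
        samples = [[rng.gauss(m, s) for m, s in zip(mean, std)]
                   for _ in range(n_samples)]
        samples.sort(key=objective)
        elite = samples[:n_elite]
        mean = [sum(e[d] for e in elite) / n_elite for d in range(dim)]
        std = [max(1e-6, (sum((e[d] - mean[d]) ** 2 for e in elite)
                          / n_elite) ** 0.5) for d in range(dim)]
    return mean

# Hypothetical target: a short sequence of road-irregularity heights (metres).
true_profile = [0.01, -0.02, 0.015, 0.0, -0.01]

def objective(h):
    # Stand-in for the vehicle model: the 'measured' response is taken to
    # be the profile itself, so the objective reduces to a plain sum of
    # squared differences.
    return sum((hi - ti) ** 2 for hi, ti in zip(h, true_profile))

estimate = cross_entropy_minimise(objective, dim=len(true_profile))
```

In the paper's setting, `objective` would instead run the half-car roll model over each candidate profile and compare the simulated accelerations against the measured ones.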