63 results for Class-based isolation vs. sharing

in CentAUR: Central Archive University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

By the turn of the twenty-first century, UNDP had embraced a new form of funding based on ‘cost-sharing’, with this source accounting for 51 per cent of the organisation’s total expenditure worldwide in 2000. Unlike the traditional donor-recipient relationship so common with development projects, the new cost-sharing modality has created a situation whereby UNDP local offices become ‘subcontractors’ and agencies of the recipient countries become ‘clients’. This paper explores this transition in the context of Brazil, focusing on how the new modality may have compromised UNDP’s ability to promote Sustainable Human Development, as established in its mandate. The great enthusiasm for this modality within the UN system and its potential application to other developing countries increase the importance of a systematic assessment of its impact and developmental consequences.

Relevance:

100.00%

Publisher:

Abstract:

This study helps develop an overall understanding as to why some students achieve where others don't. Debate on the effects of class on educational attainment is well documented and typically centres on the reproductive nature of class, whilst studies of the effect of class on educational aspirations also predict outcomes that see education reinforcing and reproducing a student's class background. Despite a number of government initiatives to help raise higher education participation to 50 per cent by 2010, numbers for the working class have altered little. Using data from an ethnographic case study of a low-achieving girls' school, the author explores aspirations and argues that whilst class is very powerful in explaining educational attainment, understanding educational aspirations is somewhat more complex. The purpose of this book, therefore, is to question and challenge popular assumptions surrounding class-based theory in making sense of girls' aspirations, and to question the usefulness of the continued over-reliance on such broad categorisations by both academics and policy makers.

Relevance:

100.00%

Publisher:

Abstract:

An improved method for the detection of pressed hazelnut oil in admixtures with virgin olive oil by analysis of polar components is described. The method, which is based on the SPE-based isolation of the polar fraction followed by RP-HPLC analysis with UV detection, is able to detect virgin olive oil adulterated with pressed hazelnut oil at levels as low as 5%, with good accuracy (90.0 +/- 4.2% recovery of internal standard), reproducibility (4.7% RSD) and linearity (R² = 0.9982 over the 5-40% adulteration range). An international ring-test of the developed method highlighted its capability: 80% of the samples were, on average, correctly identified despite the fact that no training samples were provided to the participating laboratories. However, the large variability in marker components among the pressed hazelnut oils examined prevents the use of the method for quantification of the level of adulteration. (C) 2003 Elsevier Ltd. All rights reserved.
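As a rough illustration of the validation figures quoted above (linearity R² over the 5-40% adulteration range, and %RSD of replicate analyses), such statistics can be computed from calibration data as sketched below. The peak-area numbers are invented for the sketch, not taken from the study:

```python
import numpy as np

def calibration_stats(adulteration_pct, response):
    """Least-squares calibration line and R^2 for a marker response."""
    slope, intercept = np.polyfit(adulteration_pct, response, 1)
    predicted = slope * adulteration_pct + intercept
    ss_res = np.sum((response - predicted) ** 2)
    ss_tot = np.sum((response - np.mean(response)) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

def relative_std_dev(replicates):
    """%RSD of replicate measurements (sample std / mean * 100)."""
    replicates = np.asarray(replicates, dtype=float)
    return replicates.std(ddof=1) / replicates.mean() * 100.0

# Hypothetical calibration: marker peak area vs. % hazelnut oil added
pct = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
area = np.array([12.1, 23.8, 47.5, 72.0, 95.9])
slope, intercept, r2 = calibration_stats(pct, area)
rsd = relative_std_dev([47.5, 46.1, 49.0, 48.2])  # hypothetical replicates
```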

Relevance:

100.00%

Publisher:

Abstract:

The popularity of wireless local area networks (WLANs) has resulted in their dense deployment around the world. While this increases capacity and coverage, increased interference can severely degrade the performance of WLANs. However, the impact of interference on throughput in dense WLANs with multiple access points (APs) has received very little prior research attention. This is believed to be due to 1) the inaccurate assumption that throughput is always a monotonically decreasing function of interference and 2) the prohibitively high complexity of an accurate analytical model. In this work, we first provide a useful classification of commonly found interference scenarios. Second, we investigate the impact of interference on throughput for each class, based on an approach that determines the possibility of parallel transmissions. Extensive packet-level simulations using OPNET have been performed to support the observations made. Interestingly, the results show that in some topologies increased interference can lead to higher throughput, and vice versa.
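The "possibility of parallel transmissions" idea can be illustrated with a toy carrier-sense model. This is a minimal sketch under an assumed fixed sensing range, not the classification scheme or simulation model used in the paper:

```python
import math

# Illustrative sensing range in metres; not a value from the paper.
SENSING_RANGE = 100.0

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def can_transmit_in_parallel(tx1, tx2, sensing_range=SENSING_RANGE):
    """True if neither transmitter senses the other, so carrier sensing
    does not force the two links to take turns."""
    return distance(tx1, tx2) > sensing_range

# Two APs far apart: parallel transmissions are possible
print(can_transmit_in_parallel((0, 0), (250, 0)))
# Two nearby APs: carrier sensing serialises their transmissions
print(can_transmit_in_parallel((0, 0), (60, 0)))
```

Under such a model, adding an interferer can sometimes raise aggregate throughput (e.g. by changing which links defer to which), which is consistent with the non-monotonic behaviour the abstract reports.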

Relevance:

100.00%

Publisher:

Abstract:

It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
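The ‘Cartesian’ cylinder description above can be sketched in code. The field layout below is illustrative (it loosely resembles the SWC tracing convention, not necessarily the authors' format), and the branching-order function shows how a classical morphometric statistic can be derived from the raw connectivity:

```python
from dataclasses import dataclass

# Each compartment stores its diameter, endpoint coordinates, and the
# index of its parent compartment; field names are illustrative.
@dataclass
class Compartment:
    ident: int
    x: float
    y: float
    z: float
    diameter: float
    parent: int  # -1 for the root (soma)

def branching_order(tree, ident):
    """Number of bifurcation points on the path from the root to a compartment."""
    children = {}
    for c in tree:
        children.setdefault(c.parent, []).append(c.ident)
    order = 0
    node = next(c for c in tree if c.ident == ident)
    while node.parent != -1:
        parent = next(c for c in tree if c.ident == node.parent)
        if len(children.get(parent.ident, [])) > 1:  # a bifurcation
            order += 1
        node = parent
    return order

tree = [
    Compartment(0, 0.0, 0.0, 0.0, 10.0, -1),  # soma
    Compartment(1, 0.0, 10.0, 0.0, 2.0, 0),   # trunk
    Compartment(2, -5.0, 20.0, 0.0, 1.0, 1),  # left branch
    Compartment(3, 5.0, 20.0, 0.0, 1.0, 1),   # right branch
]
print(branching_order(tree, 2))  # 1: one bifurcation, at the trunk
```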

Relevance:

100.00%

Publisher:

Abstract:

The use of economic incentives for biodiversity (mostly Compensation and Reward for Environmental Services, including Payment for ES) has been widely supported in the past decades and has become the main innovative policy tool for biodiversity conservation worldwide. These policy tools are often based on the insight that rational actors perfectly weigh the costs and benefits of adopting certain behaviors, so that well-crafted economic incentives and disincentives will lead to socially desirable development scenarios. This rationalist mode of thought has provided interesting insights and results, but it also misrepresents the context in which ‘real individuals’ come to decisions, and the multitude of factors influencing development sequences. In this study, our goal is to examine how these policies can take advantage of some unintended behavioral reactions that might in turn affect, either positively or negatively, overall policy performance. Using a natural field experiment in rural Madagascar, we test the effect of income origin (‘low-effort’ money vs. ‘high-effort’ money) on spending decisions (necessity vs. superior goods) and subsequent pro-social preferences (future pro-environmental behavior). Our results show that money obtained under low effort leads to different consumption patterns than money obtained under high effort: superior goods are more salient in the case of low-effort money. In parallel, money obtained under low effort leads to subsequently stronger pro-social behavior. Compensation and reward policies for ecosystem services may mobilize knowledge of behavioral biases to improve their design and foster positive spillovers on their development goals.

Relevance:

40.00%

Publisher:

Abstract:

Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and a model selection criterion of maximal leave-one-out area under the receiver operating characteristic (ROC) curve (LOO-AUC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new ROWLS parameter estimator, without actually splitting the estimation data set. The proposed algorithm achieves minimal computational expense via a set of forward recursive updating formulae when searching for model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.
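A heavily simplified sketch of the idea, assuming Gaussian RBF kernel terms and class-balanced sample weights. The paper's analytic LOO-AUC formula, ROWLS estimator and orthogonalization are omitted here; plain training AUC on a regularized weighted least-squares fit stands in as the selection criterion:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def rbf_column(X, centre, width=1.0):
    """One candidate model term: a Gaussian kernel centred on a data point."""
    return np.exp(-((X - centre) ** 2).sum(axis=1) / (2 * width ** 2))

def forward_select(X, y, n_terms=2, lam=1e-3):
    """Greedy forward selection of kernel terms maximising training AUC
    under class-balanced weighted least squares (a stand-in for ROWLS)."""
    w = np.where(y == 1, 1.0 / (y == 1).sum(), 1.0 / (y == 0).sum())
    W = np.diag(w)
    chosen, remaining, best = [], list(range(len(X))), None
    for _ in range(n_terms):
        best = None
        for i in remaining:
            cols = chosen + [i]
            Phi = np.column_stack([rbf_column(X, X[c]) for c in cols])
            A = Phi.T @ W @ Phi + lam * np.eye(len(cols))  # regularised WLS
            theta = np.linalg.solve(A, Phi.T @ W @ y)
            score = auc(Phi @ theta, y)
            if best is None or score > best[0]:
                best = (score, i)
        chosen.append(best[1])
        remaining.remove(best[1])
    return chosen, best[0]

# Toy one-dimensional example: two well-separated classes
X = np.array([[0.0], [0.3], [2.0], [2.3]])
y = np.array([0.0, 0.0, 1.0, 1.0])
centres, score = forward_select(X, y, n_terms=1)
```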

Relevance:

40.00%

Publisher:

Abstract:

This contribution proposes a powerful technique for two-class imbalanced classification problems by combining the synthetic minority over-sampling technique (SMOTE) and the particle swarm optimisation (PSO) aided radial basis function (RBF) classifier. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, the SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier's structure and the parameters of RBF kernels are determined using a PSO algorithm based on the criterion of minimising the leave-one-out misclassification rate. The experimental results obtained on a simulated imbalanced data set and three real imbalanced data sets are presented to demonstrate the effectiveness of our proposed algorithm.
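The SMOTE step can be sketched as follows: each synthetic instance is an interpolation between a randomly chosen minority sample and one of its k nearest minority-class neighbours. This is the standard description of SMOTE, not code from the paper:

```python
import numpy as np

def smote(minority, n_synthetic, k=3, rng=None):
    """Generate synthetic minority-class samples by interpolating each
    chosen sample towards one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    minority = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(minority))
        # distances from sample i to all minority samples (self included)
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)

# Four minority samples at the corners of the unit square
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
synthetic = smote(minority, 5, k=2, rng=0)
```

Because each synthetic point lies on a segment between two minority samples, the oversampled region stays inside the minority class's local neighbourhood, which is what makes the small positive-class region more significant in the decision region.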

Relevance:

40.00%

Publisher:

Abstract:

This article aims to create intellectual space in which issues of social inequality and education can be analyzed and discussed in relation to the multifaceted and multi-levelled complexities of the modern world. It is divided into three sections. Section One locates the concept of social class in the context of the modern nation state during the period after the Second World War. Focusing particularly on the impact of ‘Fordism’ on social organization and cultural relations, it revisits the articulation of social justice issues in the United Kingdom, and the structures put into place at the time to alleviate educational and social inequalities. Section Two problematizes the traditional concept of social class in relation to economic, technological and sociocultural changes that have taken place around the world since the mid-1980s. In particular, it charts some of the changes to the international labour market and global patterns of consumption, and their collective impact on the re-constitution of class boundaries in ‘developed countries’. This is juxtaposed with some of the major social effects of neo-classical economic policies in recent years on the sociocultural base in developing countries. It discusses some of the ways these inequalities are reflected in education. Section Three explores tensions between the educational ideals of the ‘knowledge economy’ and the discursive range of social inequalities that are emerging within and beyond the nation state. Drawing on key motifs identified throughout, the article concludes with a reassessment of the concept of social class within the global cultural economy. This is discussed in relation to some of the major equity and human rights issues in education today.

Relevance:

40.00%

Publisher:

Abstract:

This contribution proposes a novel probability density function (PDF) estimation based over-sampling (PDFOS) approach for two-class imbalanced classification problems. The classical Parzen-window kernel function is adopted to estimate the PDF of the positive class. Then, according to the estimated PDF, synthetic instances are generated as additional training data. The essential concept is to re-balance the class distribution of the original imbalanced data set under the principle that the synthetic data samples follow the same statistical properties. Based on the over-sampled training data, the radial basis function (RBF) classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier’s structure and the parameters of RBF kernels are determined using a particle swarm optimisation algorithm based on the criterion of minimising the leave-one-out misclassification rate. The effectiveness of the proposed PDFOS approach is demonstrated by an empirical study on several imbalanced data sets.
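Sampling from a Parzen-window (Gaussian-kernel) density estimate amounts to picking a stored positive-class sample at random and perturbing it with kernel-shaped noise. A minimal sketch, assuming a Silverman's-rule bandwidth in place of the paper's smoothing-parameter selection:

```python
import numpy as np

def pdfos_oversample(minority, n_synthetic, rng=None):
    """Draw synthetic samples from a Parzen-window density estimate of the
    minority class: choose a stored sample uniformly, then add Gaussian
    noise scaled by a Silverman's-rule bandwidth (a simplification of the
    paper's bandwidth selection)."""
    rng = np.random.default_rng(rng)
    X = np.asarray(minority, dtype=float)
    n, d = X.shape
    # Silverman's rule-of-thumb bandwidth factor
    h = (4.0 / (n * (d + 2))) ** (1.0 / (d + 4))
    std = X.std(axis=0, ddof=1)  # per-dimension spread of the minority class
    idx = rng.integers(n, size=n_synthetic)
    noise = rng.normal(size=(n_synthetic, d)) * h * std
    return X[idx] + noise

minority = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [0.0, 2.0], [1.0, 0.5]])
synthetic = pdfos_oversample(minority, 10, rng=1)
```

Unlike SMOTE's interpolation between neighbours, this draws from a smooth density over the whole minority class, so synthetic points can fall slightly outside the convex hull of the observed samples.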

Relevance:

40.00%

Publisher:

Abstract:

In e-health intervention studies, there are concerns about the reliability of internet-based, self-reported (SR) data and about the potential for identity fraud. This study introduced and tested a novel procedure for assessing the validity of internet-based, SR identity, and validated anthropometric and demographic data via measurements performed face-to-face in a validation study (VS). Participants (n = 140) from seven European countries, participating in the Food4Me intervention study which aimed to test the efficacy of personalised nutrition approaches delivered via the internet, were invited to take part in the VS. Participants visited a research centre in each country within 2 weeks of providing SR data via the internet. Participants received detailed instructions on how to perform each measurement. Individuals' identities were checked visually and by repeated collection and analysis of buccal cell DNA for 33 genetic variants. Validation of identity using genomic information showed perfect concordance between SR and VS. Similar results were found for demographic data (age and sex verification). We observed strong intra-class correlation coefficients between SR and VS for anthropometric data (height 0.990, weight 0.994 and BMI 0.983). However, internet-based SR weight was under-reported (Δ −0.70 kg [−3.6 to 2.1], p < 0.0001) and, therefore, BMI was lower for SR data (Δ −0.29 kg m⁻² [−1.5 to 1.0], p < 0.0001). BMI classification was correct in 93 % of cases. We demonstrate the utility of genotype information for detection of possible identity fraud in e-health studies and confirm the reliability of internet-based, SR anthropometric and demographic data collected in the Food4Me study.
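The intra-class correlation between self-reported and measured values can be computed as sketched below. The paper does not state which ICC form was used, so this uses the one-way random-effects ICC(1,1), with invented height data for illustration:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects intra-class correlation, ICC(1,1).

    `ratings` is an (n_subjects, n_measurements) array; here the two
    columns would be self-reported (SR) and face-to-face (VS) values."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    subject_means = ratings.mean(axis=1)
    # between-subjects and within-subjects mean squares
    msb = k * ((subject_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical heights in cm: [self-reported, measured]
sr_vs = [[170.0, 171.0], [160.0, 159.0], [180.0, 180.0], [165.0, 165.5]]
icc = icc_oneway(sr_vs)
```

With near-identical SR and VS columns the within-subject mean square is small relative to the between-subject spread, so the ICC approaches 1, as in the height/weight/BMI values reported above.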

Relevance:

40.00%

Publisher:

Abstract:

Objective: To introduce a new approach to problem-based learning (PBL) used in the context of a medicinal chemistry practical class teaching pharmacy students. Design: The described chemistry practical is based on independent study by small groups of undergraduate students (4-5), who design their own practical work taking relevant professional standards into account. Students are carefully guided by feedback and acquire a set of skills important to their future profession as healthcare professionals. This model has been tailored to the application of PBL in a chemistry practical class setting for a large student cohort (150 students). Assessment: The achievement of learning outcomes is based on the submission of relevant documentation including a certificate of analysis, in addition to peer assessment. Some of the learning outcomes are also assessed in the final written examination at the end of the academic year. Conclusion: The described design of a novel PBL chemistry laboratory course for pharmacy students has been found to be successful. Self-reflective learning and engagement with feedback were encouraged, students enjoyed the challenging learning experience, and skills that are highly essential for the students’ future careers as healthcare professionals are promoted.

Relevance:

30.00%

Publisher:

Abstract:

Planning a project with proper consideration of all necessary factors, and managing it to ensure successful implementation, presents many challenges. The initial stage of planning a project for bidding is costly and time-consuming, and usually yields poor accuracy in cost and effort predictions. On the other hand, detailed information on previous projects may be buried in piles of archived documents, making it increasingly difficult to learn from previous experience. Project portfolios have been brought into this field with the aim of improving information sharing and management among different projects. However, the amount of information that can be shared is still limited to generic information. In this paper, we report a recently developed software system, COBRA (Automated Project Information Sharing and Management System), which automatically generates a project plan with effort estimates of time and cost based on data collected from previously completed projects. To maximise data sharing and management among different projects, we propose a method using product-based planning from the PRINCE2 methodology. Keywords: project management, product-based planning, best practice, PRINCE2
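The abstract does not describe COBRA's internals, but effort estimation from archived projects is commonly done by analogy. A hypothetical k-nearest-neighbour sketch, with invented project features and effort values:

```python
import numpy as np

def estimate_effort(past_features, past_effort, new_features, k=2):
    """Analogy-based estimate: mean effort of the k most similar
    archived projects, by Euclidean distance on the feature vectors.
    (Hypothetical method; not taken from the COBRA paper.)"""
    past_features = np.asarray(past_features, dtype=float)
    d = np.linalg.norm(past_features - np.asarray(new_features, dtype=float),
                       axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(np.asarray(past_effort, dtype=float)[nearest]))

# Hypothetical archive: features are [team size, number of deliverable products]
past = [[5, 10], [8, 20], [3, 6], [10, 25]]
effort_days = [120, 260, 70, 340]
predicted = estimate_effort(past, effort_days, [6, 12], k=2)
print(predicted)  # 95.0: mean of the two most similar projects (120 and 70)
```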