142 results for Meta heuristics


Relevance:

20.00%

Publisher:

Abstract:

The effect of unions on profits continues to be an unresolved theoretical and empirical issue. In this paper, clustered data analysis and hierarchical linear meta-regression models are applied to the population of forty-five econometric studies that report 532 estimates of the direct effect of unions on profits. Unions have a significant negative effect on profits in the United States, and this effect is larger when market-based measures of profits are used. Separate meta-regression analyses are used to identify the effects of market power and long-lived assets on profits, as well as the sources of union-profit effects. The accumulated evidence rejects market power as a source of union-profit effects. While the case is not yet proven, there is some evidence in support of the appropriation of quasi-rent hypothesis. There is a clear need for further American and non-American primary research in this area.
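As a rough illustration of the design this abstract describes, the sketch below fits a meta-regression of simulated union-profit effect estimates with cluster-robust standard errors by study; the data, variable names, and moderators are invented for illustration and are not taken from the forty-five studies.

```python
# Hypothetical sketch of a meta-regression with study-level clustering,
# loosely mirroring the design described above (variable names are invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_studies, est_per_study = 45, 12                   # ~45 studies, several estimates each
study_id = np.repeat(np.arange(n_studies), est_per_study)
market_based = rng.integers(0, 2, study_id.size)    # 1 = market-based profit measure
us_sample = rng.integers(0, 2, study_id.size)       # 1 = United States data

# Simulated union-profit effect sizes: negative overall, larger when market-based
effect = -0.10 - 0.05 * market_based - 0.03 * us_sample + rng.normal(0, 0.05, study_id.size)
df = pd.DataFrame(dict(effect=effect, market_based=market_based,
                       us_sample=us_sample, study_id=study_id))

# OLS meta-regression with cluster-robust standard errors by study
fit = smf.ols("effect ~ market_based + us_sample", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["study_id"]})
print(fit.summary())
```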

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Fall risk screening tools are frequently used as a part of falls prevention programs in hospitals. Design-related bias in evaluations of tool predictive accuracy could lead to overoptimistic results, which would then contribute to program failure in practice.

Methods:
A systematic review was undertaken. Two blinded reviewers classified the methodology of relevant publications using a four-point classification system adapted from multiple sources. The association between study design classification and reported results was examined using linear regression with clustering by screening tool and robust variance estimates, with point estimates of the Youden Index (sensitivity + specificity - 1) as the dependent variable. A meta-analysis was then performed, pooling data from prospective studies.

Results: Thirty-five publications met inclusion criteria, containing 51 evaluations of fall risk screening tools. Twenty evaluations were classified as retrospective validation evaluations, 11 as prospective (temporal) validation evaluations, and 20 as prospective (external) validation evaluations. Retrospective evaluations had significantly higher Youden Indices (point estimate [95% confidence interval]: 0.22 [0.11, 0.33]). Pooled Youden Indices from prospective evaluations demonstrated the STRATIFY, Morse Falls Scale, and nursing staff clinical judgment to have comparable accuracy.

Discussion: Practitioners should exercise caution when comparing the validity of fall risk assessment tools where the evaluations have been limited to retrospective methodology. Heterogeneity between studies indicates that the Morse Falls Scale and STRATIFY may still be useful in particular settings, but that widespread adoption of either is unlikely to generate benefits significantly greater than those of nursing staff clinical judgment.
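To make the accuracy metric concrete, here is a minimal Python sketch of the Youden Index defined above, together with a simple inverse-variance pooling of per-evaluation estimates; the numbers are invented and this is not the authors' exact meta-analytic model.

```python
# Illustrative only: the Youden Index as defined above, plus a simple
# inverse-variance pooled estimate (not the authors' exact meta-analytic model).
import numpy as np

def youden_index(sensitivity: float, specificity: float) -> float:
    """Youden Index J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1.0

# Hypothetical per-evaluation results: (sensitivity, specificity, variance of J)
evaluations = [(0.72, 0.65, 0.004), (0.68, 0.70, 0.006), (0.75, 0.58, 0.005)]

j_values = np.array([youden_index(se, sp) for se, sp, _ in evaluations])
weights = 1.0 / np.array([var for _, _, var in evaluations])  # inverse-variance weights

pooled_j = np.sum(weights * j_values) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"Pooled Youden Index: {pooled_j:.2f} "
      f"(95% CI {pooled_j - 1.96 * pooled_se:.2f} to {pooled_j + 1.96 * pooled_se:.2f})")
```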

Relevance:

20.00%

Publisher:

Abstract:

In a previous column, Fineout-Overholt et al. (2008) discussed the use of systematic reviews in decision making for clinical practice, focusing primarily on quantitative studies such as randomised controlled trials. Narrative reviews were included but in less detail; this was intentional, because the synthesis of qualitative evidence is a complex process that has evoked significant discussion over the past five years. Consequently, this column addresses some of the process issues surrounding qualitative evidence synthesis, or meta-synthesis as it is more commonly known, and offers ideas for how the evidence arising from such syntheses can be used to inform education, teaching, and practice.

Relevance:

20.00%

Publisher:

Abstract:

I begin my third reply by answering some of the criticisms raised by Tierno against theodical attempts to account for the pervasiveness of moral evil. I then take the discussion to a meta-philosophical level, where I question the very way of thinking about God and evil implicit in Tierno’s critique and in much contemporary philosophy of religion.

Relevance:

20.00%

Publisher:

Abstract:

How is the philosophical study of religion best pursued? Responses to this meta-philosophical question tend to recapitulate the analytic-Continental divide in philosophy in general. My aim is to examine the nature of this divide, particularly as it has manifested itself in the philosophy of religion. I begin with a comparison of the stylistic differences in the language of the two traditions, taking the work of Alvin Plantinga and John Caputo as exemplars of the analytic and Continental schools respectively. In order to account for these stylistic divergences, however, it is necessary to delve further into meta-philosophy. I go on to show how each philosophical school models itself on different theoretical practices, the analytic school mimicking the scientific style of inquiry, while in Continental philosophy it is the arts and humanities rather than the sciences that provide the model for philosophical discourse. By situating themselves in such different genres, analytic and Continental philosophers have developed contrasting, if not mutually exclusive, methods for pursuing the philosophy of religion.

Relevance:

20.00%

Publisher:

Abstract:

Meta-regression analysis (MRA) provides an empirical framework through which to integrate disparate economics research results, filter out likely publication selection bias, and explain their wide variation using socio-economic and econometric explanatory variables. In dozens of applications, MRA has found excess variation among reported research findings, some of which is explained by socio-economic variables (e.g., researchers' gender). MRA can empirically model and test socio-economic theories about economics research. Here we make two strong claims: socio-economic MRAs, broadly conceived, explain much of the excess variation routinely found in empirical economics research, whereas any other type of literature review (or summary) is biased.
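One common MRA device for filtering publication selection bias is to regress reported effects on their standard errors (a FAT-PET-style specification). The sketch below is purely illustrative, with simulated data, and is not drawn from any particular application in the paper.

```python
# Illustrative FAT-PET-style meta-regression (effect ~ standard error), a common
# way to model publication selection bias in MRA; data and names are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
se = rng.uniform(0.02, 0.30, n)                  # reported standard errors
true_effect = 0.05                               # "genuine" underlying effect
selection_bias = 1.5 * se                        # selection inflates small-sample estimates
effect = true_effect + selection_bias + rng.normal(0, se)

X = sm.add_constant(se)                          # intercept = bias-corrected effect (PET)
fit = sm.OLS(effect, X).fit(cov_type="HC1")
print(fit.params)   # const ~ genuine effect; slope on se ~ publication selection
```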

Relevance:

20.00%

Publisher:

Abstract:

The overarching goal of this dissertation was to evaluate the contextual components of instructional strategies for the acquisition of complex programming concepts. A meta-knowledge processing model is proposed, on the basis of the research findings, thereby facilitating the selection of media treatment for electronic courseware. When implemented, this model extends the work of Smith (1998), as a front-end methodology, for his glass-box interpreter called Bradman, for teaching novice programmers.

Technology now provides the means to produce individualized instructional packages with relative ease. Multimedia and Web courseware development accentuate a highly graphical (or visual) approach to instructional formats. Typically, little consideration is given to the effectiveness of screen-based visual stimuli and, curiously, students are expected to be visually literate despite the complexity of human-computer interaction; visual literacy is much harder for some people to acquire than for others (see Chapter Four: Conditions-of-the-Learner).

An innovative research programme was devised to investigate the interactive effect of instructional strategies, enhanced with text-plus-textual metaphors or text-plus-graphical metaphors, and cognitive style, on the acquisition of a special category of abstract (process) programming concept. This type of concept was chosen to focus on the role of analogic knowledge involved in computer programming. The results are discussed within the context of the internal/external exchange process, drawing on Ritchey's (1980) concepts of within-item and between-item encoding elaborations.

The methodology developed for the doctoral project integrates earlier research knowledge in a novel, interdisciplinary, conceptual framework, drawing on: instructional science in the USA, for the concept learning models; British cognitive psychology and human memory research, for defining the cognitive style construct; and Australian educational research, to provide the measurement tools for instructional outcomes.

The experimental design consisted of a screening test to determine cognitive style, a pretest to determine prior domain knowledge in abstract programming knowledge elements, the instruction period, and a post-test to measure improved performance. This research design provides a three-level discovery process to articulate:
1) the fusion of strategic knowledge required by the novice learner for dealing with contexts within instructional strategies;
2) acquisition of knowledge using measurable instructional outcomes and learner characteristics;
3) knowledge of the innate environmental factors which influence the instructional outcomes.

This research has successfully identified the interactive effect of instructional strategy and an individual's cognitive style construct on the acquisition of complex programming concepts. However, the significance of the three-level discovery process lies in the scope of the methodology to inform the design of a meta-knowledge processing model for instructional science. Firstly, the British cognitive style testing procedure is a low-cost, user-friendly computer application that effectively measures an individual's position on the two cognitive style continua (Riding & Cheema, 1991). Secondly, the QUEST Interactive Test Analysis System (Izard, 1995) allows for a probabilistic determination of an individual's knowledge level, relative to other participants and to test-item difficulties. Test-items can be related to skill levels and, consequently, can be used by instructional scientists to measure knowledge acquisition. Finally, an Effect Size Analysis (Cohen, 1977) allows for a direct comparison between treatment groups, giving a statistical measurement of how large an effect the independent variables have on the dependent outcomes. Combined with QUEST's hierarchical positioning of participants, this tool can assist in identifying preferred learning conditions for the evaluation of treatment groups.

By combining these three assessment analysis tools in instructional research, a computerized learning shell, customised for individuals' cognitive constructs, can be created (McKay & Garner, 1999). While this approach has widespread application, individual researchers/trainers would nonetheless need to validate the interactive effects within their specific learning domain through an extensive pilot study programme (McKay, 1999a; McKay, 1999b). Furthermore, the instructional material does not need to be limited to a textual/graphical comparison, but could be applied to any two or more instructional treatments of any kind: for instance, a structured versus an exploratory strategy. The possibilities and combinations are believed to be endless, provided the focus is maintained on linking the front-end identification of cognitive style with an improved performance outcome.

My in-depth analysis provides a better understanding of the interactive effects of the cognitive style construct and instructional format on the acquisition of abstract concepts involving spatial relations and logical reasoning. In providing the basis for a meta-knowledge processing model, this research is expected to be of interest to educators, cognitive psychologists, communications engineers and computer scientists specialising in computer-human interactions.
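As a small illustration of the effect-size comparison mentioned above, the sketch below computes Cohen's d between two hypothetical treatment groups; the post-test scores and group labels are invented.

```python
# Minimal sketch of an effect-size comparison between two treatment groups
# (Cohen's d with a pooled standard deviation); the scores below are invented.
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation of the two groups."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * group_a.var(ddof=1) +
                  (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(2)
textual_metaphor = rng.normal(62, 10, 30)     # hypothetical post-test scores
graphical_metaphor = rng.normal(55, 10, 30)
print(f"Cohen's d: {cohens_d(textual_metaphor, graphical_metaphor):.2f}")
```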

Relevance:

20.00%

Publisher:

Abstract:

A novel agent-driven heuristic approach was developed to control operational scheduling for a local manufacturer. This approach outperformed the traditional kanban control mechanism in numerous simulated benchmarking tests: individual machine loading was reduced by 28% on average, and the loading spread was reduced by 85%.

Relevance:

20.00%

Publisher:

Abstract:

Background
Efforts to prevent the development of overweight and obesity have increasingly focused on the earliest stages of the life course, as we recognise that both metabolic and behavioural patterns are often established within the first few years of life. Randomised controlled trials (RCTs) of interventions are even more powerful when, with forethought, they are synthesised into an individual patient data (IPD) prospective meta-analysis (PMA). An IPD PMA is a unique research design in which several trials are identified for inclusion in an analysis before any of the individual trial results become known, and the data are provided for each randomised patient. This methodology minimises the publication and selection bias often associated with a retrospective meta-analysis by allowing hypotheses, analysis methods and selection criteria to be specified a priori.

Methods/Design
The Early Prevention of Obesity in CHildren (EPOCH) Collaboration was formed in 2009. The main objective of the EPOCH Collaboration is to determine if early intervention for childhood obesity impacts on body mass index (BMI) z scores at age 18-24 months. Additional research questions will focus on whether early intervention has an impact on children's dietary quality, TV viewing time, duration of breastfeeding and parenting styles. This protocol includes the hypotheses, inclusion criteria and outcome measures to be used in the IPD PMA. The sample size of the combined dataset at final outcome assessment (approximately 1800 infants) will allow greater precision when exploring differences in the effect of early intervention with respect to pre-specified participant- and intervention-level characteristics.

Discussion
Finalisation of the data collection procedures and analysis plans will be complete by the end of 2010. Data collection and analysis will occur during 2011-2012 and results should be available by 2013.
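For readers unfamiliar with IPD meta-analysis, the sketch below shows one generic way to fit a one-stage IPD model: a mixed model of BMI z-score on treatment, with trial as a random effect. The data, effect sizes, and model are invented for illustration and do not represent the EPOCH Collaboration's pre-specified analysis plan.

```python
# Hedged sketch of a one-stage individual patient data (IPD) meta-analysis:
# a mixed model of BMI z-score on treatment, with trial as a random effect.
# Variable names and data are invented; this is not the EPOCH analysis plan.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_trials, n_per_trial = 4, 450                       # ~1800 infants in total
trial = np.repeat(np.arange(n_trials), n_per_trial)
treatment = rng.integers(0, 2, trial.size)           # 1 = early intervention
trial_effect = rng.normal(0, 0.1, n_trials)[trial]   # between-trial heterogeneity
bmi_z = 0.4 - 0.15 * treatment + trial_effect + rng.normal(0, 1.0, trial.size)

df = pd.DataFrame(dict(bmi_z=bmi_z, treatment=treatment, trial=trial))
fit = smf.mixedlm("bmi_z ~ treatment", data=df, groups=df["trial"]).fit()
print(fit.summary())
```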

Relevance:

20.00%

Publisher:

Abstract:

Despite the existence of prescribed frameworks, valuation remains a cause of much controversy and variety of opinion. Even when procedures are undertaken in exactly the same way, the conclusion of ‘value’ will vary from valuer to valuer, sometimes considerably. This uncertainty in valuation is founded on property’s heterogeneous nature, the imperfections of the property market, and the unpredictability of the human behaviour involved in making judgements (French and Gabrielli 2004). Uncertainty in valuation arises from the amalgam of locational, physical and legal characteristics and the innumerable other forces which control and energise the property market (Whipple 1995). Irregular occurrences or drastic changes in property markets, whether arising from market evolution or from external forces such as the creation of global financial markets, cause further uncertainty for valuers and create challenges in identifying ‘market value’ in valuation practice. The praxis of valuation in a commercial sense navigates this complexity using a combination of algorithms and heuristics to identify the value of a property. The application of theoretical mathematical algorithms based on economic theory (Brown 1995) is augmented by valuers’ ability to apply appropriate adjustments based on their knowledge of the market, their ability to analyse, assess and compare the attributes of a property against its market, and their practical experience (Sliogeriene 2008). Despite the necessity of algorithms, the application of appropriate adjustments and assumptions remains important in arriving at a value. This paper is a critical reflection on the basis of valuation practice as guided by standards, methods and ethics (algorithms), and on the use of heuristics in practice. This is important because changes within property markets challenge the inter-relationship between these two aspects of valuation practice. Through the authors’ industry experience and a review of previous research and statements of practice norms, this paper provides an analysis of the ability of valuers to address market change in their valuation practices.
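As a simple illustration of the algorithm-plus-heuristic interplay discussed above, the sketch below combines a standard direct-capitalisation calculation with percentage adjustments of the kind a valuer might apply; the figures and adjustment factors are invented.

```python
# Illustrative only: a standard direct-capitalisation algorithm (value =
# net operating income / capitalisation rate) combined with heuristic
# market adjustments; the figures and adjustment factors are invented.
def capitalised_value(net_operating_income: float, cap_rate: float) -> float:
    """Algorithmic core: income capitalised in perpetuity."""
    return net_operating_income / cap_rate

def heuristic_adjustment(base_value: float, adjustments: dict[str, float]) -> float:
    """Valuer's judgement applied as percentage adjustments (e.g. location, condition)."""
    for reason, pct in adjustments.items():
        base_value *= (1 + pct)
    return base_value

base = capitalised_value(net_operating_income=250_000, cap_rate=0.065)
adjusted = heuristic_adjustment(base, {"secondary location": -0.05,
                                       "strong tenant covenant": +0.03})
print(f"Algorithmic value: ${base:,.0f}; after heuristic adjustments: ${adjusted:,.0f}")
```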

Relevance:

20.00%

Publisher:

Abstract:

The purpose of instance selection is to identify which instances (examples, patterns) in a large dataset should be selected as representatives of the entire dataset, without significant loss of information. When a machine learning method is applied to the reduced dataset, the accuracy of the model should not be significantly worse than if the same method were applied to the entire dataset. The reducibility of any dataset, and hence the success of instance selection methods, surely depends on the characteristics of the dataset as well as on the machine learning method. This paper adopts a meta-learning approach, via an empirical study of 112 classification datasets from the UCI Repository [1], to explore the relationship between data characteristics, machine learning methods, and the success of instance selection methods.
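The core question in this abstract, how much accuracy is lost when a learner is trained on a reduced dataset, can be illustrated with the short Python sketch below; random sampling stands in for an instance selection method, and the dataset and classifier are arbitrary choices rather than those studied in the paper.

```python
# Sketch of the basic instance-selection question: how much accuracy is lost when a
# model is trained on a reduced sample? Random sampling stands in for an instance
# selection method here; dataset and classifier choices are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_fraction(fraction: float) -> float:
    """Train on a random subset of the training instances and test on held-out data."""
    rng = np.random.default_rng(0)
    idx = rng.choice(len(X_train), size=int(fraction * len(X_train)), replace=False)
    clf = DecisionTreeClassifier(random_state=0).fit(X_train[idx], y_train[idx])
    return accuracy_score(y_test, clf.predict(X_test))

for frac in (1.0, 0.5, 0.1):
    print(f"{frac:>4.0%} of instances -> accuracy {accuracy_with_fraction(frac):.3f}")
```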