30 results for Multiple-use forestry


Relevance:

30.00%

Publisher:

Abstract:

This paper considers the use of general performance measures in evaluating specific planning and design decisions in higher education, and reflects on the students' learning process. Specifically, it concerns the use of the MENTOR multimedia computer-aided learning package for helping students learn about OR as part of a general business degree. The case examined includes the transfer of responsibility for a learning module to a new staff member and a change from a single tutor to a system involving multiple tutors. Student satisfaction measures, learning outcome measures and MENTOR usage patterns are examined in monitoring the effects of the changes in course delivery. The results raise some questions about the effectiveness of general performance measures in supporting specific decisions relating to course design and planning.

Relevance:

30.00%

Publisher:

Abstract:

Ant colony optimisation algorithms model the way ants use pheromones for marking paths to important locations in their environment. Pheromone traces are picked up, followed, and reinforced by other ants, but also evaporate over time. Optimal paths attract more pheromone, while less useful paths fade away. The main innovation of the proposed Multiple Pheromone Ant Clustering Algorithm (MPACA) is to mark objects using many pheromones, one for each value of each attribute describing the objects in multidimensional space. Every object has one or more ants assigned to each attribute value, and the ants then try to find other objects with matching values, depositing pheromone traces that link them. Encounters between ants are used to determine when ants should combine their features to look for conjunctions and whether they should belong to the same colony. This paper explains the algorithm and explores its potential effectiveness for cluster analysis.
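The bookkeeping behind the multi-pheromone idea can be sketched as follows; the data structure, deposit amount, and evaporation rate are illustrative assumptions, not values from the paper.

```python
# Sketch of per-(attribute, value) pheromone traces between objects.
# DEPOSIT and EVAPORATION are assumed illustrative constants.
from collections import defaultdict

DEPOSIT = 1.0       # reinforcement when an ant links two matching objects
EVAPORATION = 0.1   # fraction of each trace lost per time step

# pheromone[edge][(attribute, value)] -> trace strength: one pheromone
# per attribute value rather than a single generic scent
pheromone = defaultdict(lambda: defaultdict(float))

def deposit(obj_a, obj_b, attribute, value):
    """An ant seeking `attribute == value` found a match: mark the link."""
    edge = tuple(sorted((obj_a, obj_b)))
    pheromone[edge][(attribute, value)] += DEPOSIT

def evaporate():
    """Decay every trace; links between unmatched objects fade away."""
    for traces in pheromone.values():
        for key in traces:
            traces[key] *= 1.0 - EVAPORATION
```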

Relevance:

30.00%

Publisher:

Abstract:

The goal of this project was to investigate the neural correlates of reading impairment in dyslexia as hypothesised by the main theories (the phonological deficit, visual magnocellular deficit and cerebellar deficit theories), with emphasis on individual differences. This research took a novel approach by: 1) contrasting the predictions in one sample of participants with dyslexia (DPs); 2) using a multiple-case study (and between-group comparisons) to investigate differences in BOLD between each DP and the controls (CPs); 3) demonstrating a possible relationship between reading impairment and its hypothesised neural correlates by using fMRI and a reading task. The multiple-case study revealed that the neural correlates of reading in dyslexia are not in all cases in agreement with the predictions of a single theory. The results show striking individual differences: even where the neural correlates of reading in two DPs are consistent with the same theory, the areas involved can differ. A DP can exhibit under-engagement of an area in word reading but not in pseudoword reading, and vice versa, demonstrating that underactivation in that area cannot be interpreted as a 'developmental lesion'. Additional analyses revealed complex results. Within-group analyses between behavioural measures and BOLD showed correlations in the predicted regions, correlations in areas outside the ROIs, and a lack of correlations in some predicted areas. Comparisons of subgroups which differed on the Orthography Composite supported the MDT, but only for Words. The results suggest that phonological scores are not a sufficient predictor of the under-engagement of phonological areas during reading. DPs and CPs exhibited correlations between the Purdue Pegboard Composite and BOLD in cerebellar areas only for Pseudowords. Future research into reading in dyslexia should take a more holistic approach, involving genetic and environmental factors, gene-by-environment interaction, and comorbidity with other disorders. It is argued that multidisciplinary research within the multiple-deficit model holds significant promise here.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To independently evaluate the impact of the second phase of the Health Foundation's Safer Patients Initiative (SPI2) on a range of patient safety measures. Design: A controlled before-and-after design with five substudies: a survey of staff attitudes; a review of case notes from high-risk (respiratory) patients in medical wards; a review of case notes from surgical patients; an indirect evaluation of hand hygiene by measuring hospital use of handwashing materials; and measurement of outcomes (adverse events, mortality among high-risk patients admitted to medical wards, patients' satisfaction, mortality in intensive care, and rates of hospital-acquired infection). Setting: NHS hospitals in England. Participants: Nine hospitals participating in SPI2 and nine matched control hospitals. Intervention: The SPI2 intervention was similar to that of SPI1, with somewhat modified goals, a slightly longer intervention period, and a smaller budget per hospital. Results: One of the scores (organisational climate) showed a significant (P=0.009) difference in rate of change over time, which favoured the control hospitals, though the difference was only 0.07 points on a five-point scale. Results of the explicit case note reviews of high-risk medical patients showed that certain practices improved over time in both control and SPI2 hospitals (and none deteriorated), but there were no significant differences between control and SPI2 hospitals. Monitoring of vital signs improved across control and SPI2 sites. This temporal effect was significant for monitoring the respiratory rate at both the six hour (adjusted odds ratio 2.1, 99% confidence interval 1.0 to 4.3; P=0.010) and 12 hour (2.4, 1.1 to 5.0; P=0.002) periods after admission. There was no significant effect of SPI2 for any of the measures of vital signs. Use of a recommended system for scoring the severity of pneumonia improved from 1.9% (1/52) to 21.4% (12/56) of control patients and from 2.0% (1/50) to 41.7% (25/60) of SPI2 patients. This temporal change was significant (7.3, 1.4 to 37.7; P=0.002), but the difference in difference was not significant (2.1, 0.4 to 11.1; P=0.236). There were no notable or significant changes in the pattern of prescribing errors, either over time or between control and SPI2 hospitals. Two items of medical history taking (exercise tolerance and occupation) showed significant improvement over time across both control and SPI2 hospitals, but no additional SPI2 effect. The holistic review showed no significant changes in error rates, either over time or between control and SPI2 hospitals. The explicit case note review of perioperative care showed that adherence rates for two of the four perioperative standards targeted by SPI2 were already good at baseline, exceeding 94% for antibiotic prophylaxis and 98% for deep vein thrombosis prophylaxis. Intraoperative monitoring of temperature improved over time in both groups, but this was not significant (1.8, 0.4 to 7.6; P=0.279), and there were no additional effects of SPI2. A dramatic rise in consumption of soap and alcohol hand rub was similar in control and SPI2 hospitals (P=0.760 and P=0.889, respectively), as was the corresponding decrease in rates of Clostridium difficile and meticillin-resistant Staphylococcus aureus infection (P=0.652 and P=0.693, respectively). Mortality rates of medical patients included in the case note reviews in control hospitals increased from 17.3% (42/243) to 21.4% (24/112), while in SPI2 hospitals they fell from 10.3% (24/233) to 6.1% (7/114) (P=0.043). Fewer than 8% of deaths were classed as avoidable; changes in these proportions could not explain the divergence of overall death rates between control and SPI2 hospitals. There was no significant difference in the rate of change in mortality in intensive care. Patients' satisfaction improved in both control and SPI2 hospitals on all dimensions, but again there were no significant differences between the two groups of hospitals. Conclusions: Many aspects of care are already good or improving across the NHS in England, suggesting considerable improvements in quality across the board. These improvements are probably due to contemporaneous policy activities relating to patient safety, including those with features similar to the SPI, and to the emergence of professional consensus on some clinical processes. This phenomenon might have attenuated the incremental effect of the SPI, making it difficult to detect. Alternatively, the full impact of the SPI might be observable only in the longer term. The conclusion of this study could have been different had concurrent controls not been used.
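The mortality contrast reported above is a difference-in-differences comparison; as a minimal illustration using the raw proportions quoted in the abstract (the study's own analysis was model-based and adjusted):

```python
# Difference-in-differences on the reported death rates: the change over
# time in SPI2 hospitals minus the change in control hospitals.
rates = {
    ("control", "before"): 42 / 243,   # 17.3%
    ("control", "after"):  24 / 112,   # 21.4%
    ("spi2", "before"):    24 / 233,   # 10.3%
    ("spi2", "after"):      7 / 114,   #  6.1%
}

did = ((rates["spi2", "after"] - rates["spi2", "before"])
       - (rates["control", "after"] - rates["control", "before"]))
print(f"difference in differences: {did:+.3f}")  # about -0.083 (8.3 points)
```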

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: The Bonferroni correction adjusts probability (p) values to account for the increased risk of a type I error when making multiple statistical tests. The routine use of this test has been criticised as deleterious to sound statistical judgment, as testing the wrong hypothesis, and as reducing the chance of a type I error only at the expense of increasing the chance of a type II error; yet it remains popular in ophthalmic research. The purpose of this article was to survey the use of the Bonferroni correction in research articles published in three optometric journals, viz. Ophthalmic & Physiological Optics, Optometry & Vision Science, and Clinical & Experimental Optometry, and to provide advice to authors contemplating multiple testing. RECENT FINDINGS: Some authors ignored the problem of multiple testing, while others used the method uncritically, with no rationale or discussion. A variety of methods of correcting p values were employed, the Bonferroni method being the single most popular. Bonferroni was used in a variety of circumstances, most commonly to correct the experiment-wise error rate when using multiple 't' tests or as a post hoc procedure to correct the family-wise error rate following analysis of variance (ANOVA). Some studies quoted adjusted p values incorrectly or gave an erroneous rationale. SUMMARY: Whether or not to use the Bonferroni correction depends on the circumstances of the study. It should not be used routinely, and should be considered only if: (1) a single test of the 'universal null hypothesis' (H0) that all tests are not significant is required; (2) it is imperative to avoid a type I error; or (3) a large number of tests are carried out without preplanned hypotheses.
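For authors weighing this advice, the adjustment itself is simple: divide the significance level by the number of tests m, or equivalently multiply each p value by m. A minimal sketch with invented p values, using statsmodels:

```python
# Bonferroni correction for m = 4 tests; the p values are invented examples.
from statsmodels.stats.multitest import multipletests

pvals = [0.012, 0.030, 0.001, 0.450]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")

# Hand equivalent: compare each raw p against 0.05 / 4 = 0.0125,
# or report each p multiplied by 4 (capped at 1).
print(p_adj)    # [0.048 0.12  0.004 1.   ]
print(reject)   # [ True False  True False]
```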

Relevance:

30.00%

Publisher:

Abstract:

Ant Colony Optimisation algorithms mimic the way ants use pheromones for marking paths to important locations. Pheromone traces are followed and reinforced by other ants, but also evaporate over time. As a consequence, optimal paths attract more pheromone, whilst the less useful paths fade away. In the Multiple Pheromone Ant Clustering Algorithm (MPACA), ants detect features of objects represented as nodes within graph space. Each node has one or more ants assigned to each feature. Ants attempt to locate nodes with matching feature values, depositing pheromone traces on the way. This use of multiple pheromone values is a key innovation. Ants record other ant encounters, keeping a record of the features and colony membership of ants. The recorded values determine when ants should combine their features to look for conjunctions and whether they should merge into colonies. This ability to detect and deposit pheromone representative of feature combinations, and the resulting colony formation, renders the algorithm a powerful clustering tool. The MPACA operates as follows: (i) initially each node has ants assigned to each feature; (ii) ants roam the graph space searching for nodes with matching features; (iii) when departing matching nodes, ants deposit pheromones to inform other ants that the path goes to a node with the associated feature values; (iv) ant feature encounters are counted each time an ant arrives at a node; (v) if the feature encounters exceed a threshold value, feature combination occurs; (vi) a similar mechanism is used for colony merging. The model varies from traditional ACO in that: (i) a modified pheromone-driven movement mechanism is used; (ii) ants learn feature combinations and deposit multiple pheromone scents accordingly; (iii) ants merge into colonies, the basis of cluster formation. The MPACA is evaluated over synthetic and real-world datasets and its performance compares favourably with alternative approaches.
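Steps (i) to (vi) can be rendered schematically as below; the data structures, movement rule, and encounter threshold are illustrative assumptions rather than the published implementation.

```python
# Schematic MPACA step: pheromone-biased movement, trail deposition on
# departure, and threshold-triggered feature combination / colony merging.
import random

ENCOUNTER_THRESHOLD = 5   # assumed value for steps (v) and (vi)

class Ant:
    def __init__(self, node, features, colony):
        self.node = node               # current node in graph space
        self.features = set(features)  # feature values this ant seeks
        self.colony = colony
        self.encounters = {}           # foreign feature -> encounter count

def move(ant, graph, pheromone):
    # (ii)/(iii): prefer edges whose traces match the ant's features,
    # then reinforce those traces when leaving the current node
    neighbours = graph[ant.node]
    weights = [1 + sum(pheromone.get((ant.node, n, f), 0.0)
                       for f in ant.features) for n in neighbours]
    nxt = random.choices(neighbours, weights=weights)[0]
    for f in ant.features:
        key = (ant.node, nxt, f)
        pheromone[key] = pheromone.get(key, 0.0) + 1.0
    ant.node = nxt

def meet(ant, other):
    # (iv)-(vi): count feature encounters; past the threshold, combine
    # features (conjunction search) and merge the two colonies
    for f in other.features - ant.features:
        ant.encounters[f] = ant.encounters.get(f, 0) + 1
        if ant.encounters[f] >= ENCOUNTER_THRESHOLD:
            ant.features.add(f)
            ant.colony = other.colony = min(ant.colony, other.colony)
```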

Relevance:

30.00%

Publisher:

Abstract:

Service innovations in retailing have the potential to benefit consumers as well as retailers: incorporating technology within physical stores affords opportunities for the retailer to reduce costs while enhancing the service provided to consumers. This research models key factors associated with the trial and continuous use of a specific self-service technology (SST) in the retail context, the personal shopping assistant (PSA), and estimates retailer benefits from implementing that innovation; in so doing, the study contributes to the nascent area of research on SSTs in the retail sector. Based on theoretical insights from prior SST studies, the diffusion of innovation literature, and the technology acceptance model (TAM), this study develops specific hypotheses regarding (1) the antecedent effects of technological anxiety, novelty seeking, market mavenism, and trust in the retailer on trial of the service innovation; (2) the effects of ease of use, perceived waiting time, and need for interaction on continuous use of the innovation; and (3) the effect of use of the innovation on consumer spending at the store. The hypotheses were tested on a sample of 104 actual users of the PSA and 345 nonusers who shopped at the retail store offering the PSA device, one of the early adopters of the PSA in Germany. Data were analyzed using logistic regression (antecedents of trial), multiple regression (antecedents of continuous use), and propensity score matching (assessing retailer benefits). Results indicate that the factors affecting initial trial are different from those affecting continuous use. More specifically, consumers' trust toward the retailer, novelty seeking, and market mavenism are positively related to trial, while technology anxiety hinders the likelihood of trying the PSA. Perceived ease of use of the device positively impacts continuous use, while consumers' need for interaction in shopping environments reduces the likelihood of continuous use. Importantly, there is evidence of retailer benefits from introducing the innovation, since consumers using the PSA tend to spend more during each shopping trip. However, given the high costs of the technology, the payback period for recovery of the investment depends largely upon continued use of the innovation by consumers. Important implications are provided for retailers considering investments in new in-store service innovations. The study contributes to the literature through its (1) simultaneous examination of the antecedents of trial and continuous usage of a specific SST, (2) demonstration of the economic benefits of SST introduction for the retailer, and (3) contribution to the stream of research on service innovation, as against product innovation.
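As a sketch of the analysis pipeline named above (logistic regression for trial antecedents, nearest-neighbour propensity score matching for the spending effect), with hypothetical file and column names standing in for the study's variables:

```python
# Hypothetical reconstruction of the two analysis steps; the published
# study's matching procedure was more careful than this sketch.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("psa_shoppers.csv")  # hypothetical data file

# Antecedents of trial: PSA users (1) versus nonusers (0)
trial = smf.logit(
    "used_psa ~ trust + novelty_seeking + mavenism + tech_anxiety",
    data=df).fit()

# Retailer benefit: match each user to the nonuser with the closest
# propensity score, then compare average spend per shopping trip
df["pscore"] = trial.predict(df)
users = df[df.used_psa == 1]
nonusers = df[df.used_psa == 0]
matches = [(nonusers.pscore - p).abs().idxmin() for p in users.pscore]
effect = users.spend.mean() - nonusers.loc[matches].spend.mean()
print(f"estimated extra spend per trip: {effect:.2f}")
```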

Relevance:

30.00%

Publisher:

Abstract:

Cell-based therapies have the potential to contribute to global healthcare, whereby living cells and tissues are used as medicinal therapies. Despite this potential, many challenges remain before the full value of this emerging field can be realized. The characterization of input material for cell-based therapy bioprocesses from multiple donors is necessary to identify and understand the potential implications of input variation on process development. In this work, we have characterized bone marrow-derived human mesenchymal stem cells (BM-hMSCs) from multiple donors and discussed the implications of the measurable input variation for the development of autologous and allogeneic cell-based therapy manufacturing processes. The range of cumulative population doublings across the five BM-hMSC lines over 30 days of culture was 5.93, with an 18.2% range in colony-forming efficiency at the end of the culture process and a 55.1% difference in the production of interleukin-6 between these cell lines. It has been demonstrated that this variation results in a range of over 13 days in process time between these donor hMSC lines for a hypothetical product, creating potential batch-timing issues when manufacturing products from multiple patients. All BM-hMSC donor lines demonstrated conformity to the ISCT criteria but showed differences in cell morphology. Metabolite analysis showed that hMSCs from the different donors have a range in glucose consumption of 26.98 pmol cell⁻¹ day⁻¹, in lactate production of 29.45 pmol cell⁻¹ day⁻¹, and in ammonium production of 1.35 pmol cell⁻¹ day⁻¹, demonstrating the extent of donor variability throughout the expansion process. Measuring informative product attributes during process development will facilitate progress towards consistent manufacturing processes, a critical step in the translation of cell-based therapies.
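Cumulative population doublings, the headline variability metric here, are computed per passage from seeded and harvested cell counts; a minimal sketch with invented counts:

```python
# cPD = sum over passages of log2(cells harvested / cells seeded);
# the counts below are invented for illustration.
import math

passages = [(1.0e5, 4.2e5), (1.0e5, 3.6e5), (1.0e5, 3.9e5)]

cpd = sum(math.log2(harvested / seeded) for seeded, harvested in passages)
print(f"cumulative population doublings: {cpd:.2f}")  # about 5.9 for these counts
```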

Relevance:

30.00%

Publisher:

Abstract:

A new 3D implementation of a hybrid model based on the analogy with two-phase hydrodynamics has been developed for the simulation of liquids at microscale. The idea of the method is to smoothly combine the atomistic description in the molecular dynamics zone with the Landau-Lifshitz fluctuating hydrodynamics representation in the rest of the system in the framework of macroscopic conservation laws through the use of a single "zoom-in" user-defined function s that has the meaning of a partial concentration in the two-phase analogy model. In comparison with our previous works, the implementation has been extended to full 3D simulations for a range of atomistic models in GROMACS from argon to water in equilibrium conditions with a constant or a spatially variable function s. Preliminary results of simulating the diffusion of a small peptide in water are also reported.
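A toy one-dimensional illustration of the role of s, assuming a Gaussian zoom-in profile and simple stand-in fields; in the actual method the coupling enters through the conservation laws, not a direct pointwise blend:

```python
# s ~ 1 in the molecular dynamics core, s ~ 0 in the pure fluctuating-
# hydrodynamics region; fields and profile are illustrative only.
import numpy as np

x = np.linspace(-1.0, 1.0, 201)             # 1D cut through the domain
s = np.exp(-(x / 0.3) ** 2)                 # user-defined zoom-in function

u_md = np.random.normal(0.0, 0.05, x.size)  # stand-in atomistic velocities
u_fh = np.zeros_like(x)                     # stand-in hydrodynamic field

u = s * u_md + (1.0 - s) * u_fh             # concentration-weighted mixture
```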

Relevance:

30.00%

Publisher:

Abstract:

While storytelling in conversation has been extensively investigated, much less is known about storytelling in the English language classroom, particularly teachers telling their personal experience stories, termed teacher personal narratives in this study. Teacher personal narratives, a combination of the ancient art of human storytelling and the current practices of teaching, offer an innovative approach to language teaching and learning. This thesis examines teacher personal narrative use in Japanese university English language classrooms and is of relevance to both practicing classroom teachers and teacher educators because it explores the role, significance, and effectiveness of personal stories told by teachers. The pedagogical implications which the findings may have for language teaching and learning, as well as for teacher education programs, are also discussed. Four research questions were posed: 1. What are the characteristics of teacher personal narratives? 2. When, how, and why do language teachers use personal narratives in the classroom? 3. What is the reaction of learners to teacher personal narratives? 4. How do teacher personal narratives provide opportunities for student learning? A mixed methods approach using the tradition of multiple case studies provided an in-depth exploration of the personal narratives of four teachers. Data collection consisted of classroom observations and audio recordings, teacher and student semi-structured interviews, student diaries, and Japan-wide teacher questionnaires. Ninety-seven teacher personal narratives were analyzed for their structural and linguistic features. The findings showed that the narrative elements of orientation, complication, and evaluation are almost always present in these stories, and that discourse and tense markers may aid student noticing of the input, which can lead to eventual student output. The data also demonstrated that reasons for telling narratives mainly fall into two categories: affective-oriented and pedagogical-oriented purposes. This study has shown that there are significant differences between conversational storytelling and educational storytelling.

Relevance:

30.00%

Publisher:

Abstract:

In our recent work in different bioreactors up to 2.5 L in scale, we have successfully cultured hMSCs using the minimum agitator speed required for complete microcarrier suspension, N_JS. In addition, we also reported a scalable protocol for the detachment from microcarriers in spinner flasks of hMSCs from two donors. The essence of the protocol is the use of a short period of intense agitation in the presence of enzymes such that the cells are detached; but once detachment is achieved, the cells are smaller than the Kolmogorov scale of turbulence and hence not damaged. Here, the same approach has been effective for culture at N_JS and detachment in situ in 15 mL ambr™ bioreactors, 100 mL spinner flasks and 250 mL Dasgip bioreactors. In these experiments, cells from four different donors were used along with two types of microcarrier with and without surface coatings (two types), four different enzymes and three different growth media (with and without serum), a total of 22 different combinations. In all cases after detachment, the cells were shown to retain their desired quality attributes and were able to proliferate. This agitation strategy with respect to culture and harvest therefore offers a sound basis for a wide range of scales of operation.
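The physical argument for why detached cells survive the brief intense agitation is the Kolmogorov length scale, eta = (nu^3 / epsilon)^(1/4); an order-of-magnitude check with an assumed dissipation rate:

```python
# Kolmogorov scale for a water-like medium; epsilon is an assumed
# illustrative value, not a measurement from the paper.
nu = 1.0e-6      # kinematic viscosity, m^2/s (approximately water)
epsilon = 0.1    # local energy dissipation rate, W/kg (assumed)

eta = (nu ** 3 / epsilon) ** 0.25
print(f"Kolmogorov scale: {eta * 1e6:.0f} micrometres")  # ~56 here

# Suspended hMSCs (roughly 15-20 micrometres) sit below this scale and
# so escape damage, whereas the much larger cell-covered microcarriers
# are comparable to it.
```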

Relevance:

30.00%

Publisher:

Abstract:

The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis (MVS) methodology. The somewhat small size and variability of these data sets, however, limit their scope and the conclusions that can be derived from them. To facilitate further development within MVS, we here present a new and varied data set consisting of 80 scenes, seen from 49 or 64 accurate camera positions. This is accompanied by accurate structured light scans for reference and evaluation. In addition, all images are taken under seven different lighting conditions. As a benchmark, and to validate the use of our data set for obtaining reasonable and statistically significant findings about MVS, we have applied the three state-of-the-art MVS algorithms by Campbell et al., Furukawa et al., and Tola et al. to the data set. To do this we have extended the evaluation protocol from the Middlebury evaluation, necessitated by the more complex geometry of some of our scenes. The data set and accompanying evaluation framework are made freely available online. Based on this evaluation, we are able to observe several characteristics of state-of-the-art MVS, e.g. that there is a tradeoff between the quality of the reconstructed 3D points (accuracy) and how much of an object's surface is captured (completeness). Also, several issues that we hypothesized would challenge MVS, such as specularities and changing lighting conditions, did not pose serious problems. Our study finds that the two most pressing issues for MVS are lack of texture and meshing (forming 3D points into closed triangulated surfaces).
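The accuracy and completeness measures behind the trade-off described above are commonly computed as directed point-set distances between the reconstruction and the reference scan; a minimal sketch (the thresholding and mesh handling of the full protocol are omitted):

```python
# Directed distances between point clouds via nearest neighbours.
import numpy as np
from scipy.spatial import cKDTree

def accuracy(reconstructed, reference):
    """Median distance from reconstructed points to the reference scan."""
    distances, _ = cKDTree(reference).query(reconstructed)
    return np.median(distances)

def completeness(reconstructed, reference):
    """Median distance from reference points to the reconstruction."""
    distances, _ = cKDTree(reconstructed).query(reference)
    return np.median(distances)
```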

Relevance:

30.00%

Publisher:

Abstract:

Belief-desire reasoning is a core component of 'Theory of Mind' (ToM), which can be used to explain and predict the behaviour of agents. Neuroimaging studies reliably identify a network of brain regions comprising a 'standard' network for ToM, including the temporoparietal junction and medial prefrontal cortex. Whilst considerable experimental evidence suggests that executive control (EC) may support a functioning ToM, the co-ordination of neural systems for ToM and EC is poorly understood. We report here the use of a novel task in which psychologically relevant ToM parameters (true versus false belief; approach versus avoidance desire) were manipulated orthogonally. The valence of these parameters modulated brain activity not only in the 'standard' ToM network but also in EC regions. Varying the valence of both beliefs and desires recruits the anterior cingulate cortex, suggesting a shared inhibitory component associated with negatively valenced mental state concepts. Varying the valence of beliefs additionally draws on the ventrolateral prefrontal cortex, reflecting the need to inhibit self-perspective. These data provide the first evidence that separate functional and neural systems for EC may be recruited in the service of different aspects of ToM.

Relevance:

30.00%

Publisher:

Abstract:

From 1992 to 2012, 4.4 billion people were affected by disasters, with almost 2 trillion USD in damages and 1.3 million people killed worldwide. The increasing threat of disasters stresses the need to provide solutions for the challenges faced by disaster managers, such as the logistical deployment of the resources required to provide relief to victims. The location of emergency facilities, stock prepositioning, evacuation, inventory management, resource allocation, and relief distribution have been identified as directly impacting the relief provided to victims during a disaster. Managing these factors appropriately is critical to reducing suffering. Disaster management commonly attracts several organisations working alongside each other and sharing resources to cope with the emergency. Coordinating these agencies is a complex task, but there is little research considering multiple organisations, and none actually optimising the number of actors required to avoid shortages and convergence. The aim of this research is to develop a system for disaster management based on a combination of optimisation techniques and geographical information systems (GIS) to aid multi-organisational decision-making. An integrated decision system was created comprising a cartographic model implemented in GIS to discard floodable facilities, combined with two models focused on optimising the decisions regarding the location of emergency facilities, stock prepositioning, the allocation of resources and relief distribution, along with the number of actors required to perform these activities. Three in-depth case studies in Mexico were studied, gathering information from different organisations. The cartographic model proved to reduce the risk of selecting unsuitable facilities. The preparedness and response models showed the capacity to optimise the decisions and the number of organisations required for logistical activities, pointing towards an excess of actors involved in all cases. The system as a whole demonstrated its capacity to provide integrated support for disaster preparedness and response, along with the existence of room for improvement for Mexican organisations in flood management.
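The flavour of the facility-location component can be sketched as a small mixed-integer programme, here with PuLP and invented sites, distances, and costs:

```python
# Toy shelter-location model: open facilities and assign every town,
# minimising opening cost plus travel distance. All data are invented.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

shelters = ["S1", "S2", "S3"]
towns = ["T1", "T2"]
dist = {("S1", "T1"): 4, ("S1", "T2"): 9, ("S2", "T1"): 7,
        ("S2", "T2"): 3, ("S3", "T1"): 6, ("S3", "T2"): 6}
open_cost = {"S1": 10, "S2": 12, "S3": 8}

prob = LpProblem("shelter_location", LpMinimize)
is_open = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in shelters}
assign = {(s, t): LpVariable(f"assign_{s}_{t}", cat=LpBinary)
          for s in shelters for t in towns}

prob += (lpSum(open_cost[s] * is_open[s] for s in shelters)
         + lpSum(dist[s, t] * assign[s, t] for s in shelters for t in towns))

for t in towns:                       # each town served exactly once
    prob += lpSum(assign[s, t] for s in shelters) == 1
for s in shelters:                    # only open shelters may serve
    for t in towns:
        prob += assign[s, t] <= is_open[s]

prob.solve()
print([s for s in shelters if is_open[s].value() == 1])
```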

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the use of web-based textbook supplementary teaching and learning materials, which include multiple-choice test banks, animated demonstrations, simulations, quizzes and electronic versions of the text. To gauge their experience of the web-based material, students were asked to score the main elements of the material in terms of usefulness. In general, it was found that while the electronic text provides a flexible platform for the presentation of material, there is a need for continued monitoring of student use of this material, as the literature suggests that digital viewing habits may mean little time is spent evaluating information for relevance, accuracy or authority. From a lecturer's perspective, these materials may provide an effective and efficient way of presenting teaching and learning materials to students in a variety of multimedia formats, but at this stage they do not overcome the need for a VLE such as Blackboard™.