952 results for Strategy formulation
Abstract:
Developing learning, teaching and assessment strategies that foster ongoing engagement and provide inspiration to academic staff is a particular challenge. This paper demonstrates how an institutional learning, teaching and assessment strategy was developed and a ‘dynamic’ strategy created in order to achieve ongoing enhancement of the quality of the student learning experience. The authors use the discussion of the evolution, development and launch of the Strategy and its underpinning Resource Bank to reflect on the hopes and intentions behind the approach: firstly, the paper discusses the collaborative and iterative approach taken to developing an institutional learning, teaching and assessment strategy; secondly, it discusses the development of open-access educational resources to underpin the strategy. The paper then outlines staff engagement with the Resource Bank and the positive outcomes identified to date, identifies the next steps in achieving the ambition behind the strategy, and outlines the action research and fuller evaluation that will be used to monitor progress and ensure responsive learning at institutional level.
Abstract:
This paper reports on the use of benchmarking to improve the links between business and operations strategies. The use of benchmarking as a tool to facilitate improvements in these crucial links is examined. The existing literature on process benchmarking is used to form a structured questionnaire applied to six case studies of major manufacturing companies. Four of these case studies are presented in this paper to highlight the use of benchmarking in this application. Initial research results are presented, drawing upon the critical success factors identified both in the literature and in the case results. Recommendations for further work are outlined.
Abstract:
This paper reports on the use of benchmarking to improve the links between business and operations strategies. The use of benchmarking as a tool to facilitate improvement in these crucial links is examined. The existing literature on process benchmarking is used to form a structured questionnaire applied to six case studies of major manufacturing companies. Four of these case studies are presented, drawing upon the critical success factors identified both in the literature and in the case results. Recommendations for further work are outlined.
Abstract:
Thomas, R., Spink, S., Durbin, J. & Urquhart, C. (2005). NHS Wales user needs study including knowledgebase tools report. Report for Informing Healthcare Strategy implementation programme. Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Sponsorship: Informing Healthcare, NHS Wales
Abstract:
Thomas, R. & Urquhart, C. NHS Wales e-library portal evaluation. (For Informing Healthcare Strategy implementation programme). Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Follow-on to NHS Wales User Needs study. Sponsorship: Informing Healthcare, NHS Wales
Abstract:
Depression is a common but frequently undiagnosed feature in individuals with HIV infection. To find a strategy to detect depression in a non-specialized clinical setting, the overall performance of the Hospital Anxiety and Depression Scale (HADS) and the depression identification questions proposed by the European AIDS Clinical Society (EACS) guidelines were assessed in a descriptive cross-sectional study of 113 patients with HIV infection. The clinician asked the two screening questions proposed under the EACS guidelines and requested patients to complete the HADS. A psychiatrist or psychologist administered semi-structured clinical interviews to yield psychiatric diagnoses of depression (gold standard). A receiver operating characteristic (ROC) analysis for the HADS-Depression (HADS-D) subscale indicated that the best sensitivity and specificity were obtained between the cut-off points of 5 and 8, and the ROC curve for the HADS-Total (HADS-T) indicated that the best cut-off points were between 12 and 14. There were no statistically significant differences in the correlations of the EACS (considering positive responses to one [A] or both questions [B]), the HADS-D ≥ 8 or the HADS-T ≥ 12 with the gold standard. The study concludes that both approaches (the two EACS questions and the HADS-D subscale) are appropriate depression-screening methods in the HIV population. We believe that using the EACS-B and the HADS-D subscale in a two-step approach allows for a rapid, feasible and accurate clinical diagnosis in non-psychiatric hospital settings.
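As a hedged illustration of the ROC analysis described above, the sketch below (assuming scikit-learn, with placeholder HADS-D scores and gold-standard diagnoses rather than the study's data) shows how candidate cut-off points can be compared by sensitivity and specificity:

```python
# Illustrative sketch only: choosing a HADS-D cut-off by comparing sensitivity
# and specificity against a gold-standard diagnosis. The arrays are placeholders.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

hads_d_scores = np.array([2, 5, 7, 9, 11, 4, 8, 13, 6, 10])  # HADS-D subscale totals (0-21)
gold_standard = np.array([0, 0, 1, 1, 1, 0, 1, 1, 0, 1])     # 1 = depression per clinical interview

fpr, tpr, thresholds = roc_curve(gold_standard, hads_d_scores)
print("AUC:", roc_auc_score(gold_standard, hads_d_scores))

for thr, sens, fp in zip(thresholds, tpr, fpr):
    spec = 1.0 - fp
    print(f"cut-off >= {thr:.0f}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```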
Abstract:
Jackson, R. (2007). Language, Policy and the Construction of a Torture Culture in the War on Terrorism. Review of International Studies. 33(3), pp.353-371 RAE2008
Abstract:
Morgan, R.; Strong, C.; and McGuinness, T. (2003). Product-market positioning and prospector strategy: An analysis of strategic patterns from the resource-based perspective. European Journal of Marketing. 37(10), pp.1409-1439 RAE2008
Abstract:
McGuinness, T. and Morgan, R. (2005). The effect of market and learning orientation on strategy dynamics: The contributing effect of organisational change capability. European Journal of Marketing. 39(11-12), pp.1306-1326 RAE2008
Abstract:
Concentrating solar power is an important way of providing renewable energy. Model simulation approaches play a fundamental role in the development of this technology and, for this, accurate validation of the models is crucial. This work presents the validation of the heat loss model of the absorber tube of a parabolic trough plant by comparing the model heat loss estimates with real measurements in a specialized testing laboratory. The study focuses on the implementation in the model of a physically meaningful and widely valid formulation of the absorber total emissivity as a function of the surface temperature. For this purpose, the spectral emissivities of several absorber samples are measured and, with these data, the absorber total emissivity curve is obtained according to the Planck function. This physically meaningful formulation is used as an input parameter in the heat loss model and a successful validation of the model is performed. Since measuring the spectral emissivity of the absorber surface may be complex and is sample-destructive, a new methodology for characterizing the absorber emissivity is proposed. This methodology provides an estimation of the absorber total emissivity, retaining its physical meaning and widely valid formulation according to the Planck function, with no need for direct spectral measurements. This proposed method is also successfully validated and the results are shown in the present paper.
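The Planck-weighted total emissivity referred to above can be written as eps_tot(T) = ∫ eps(λ) B(λ, T) dλ / ∫ B(λ, T) dλ, with B the blackbody spectral radiance. The sketch below is a minimal numerical illustration of that weighting (NumPy assumed; the spectral emissivity curve is a made-up placeholder, not measured absorber data):

```python
# Sketch of a Planck-weighted total emissivity. The spectral emissivity curve
# below is a placeholder; in the paper it comes from measurements of absorber samples.
import numpy as np

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def total_emissivity(lam, eps_spectral, T):
    """Total emissivity: spectral emissivity weighted by the Planck function."""
    b = planck(lam, T)
    return np.trapz(eps_spectral * b, lam) / np.trapz(b, lam)

# Placeholder spectral emissivity: low in the solar range, rising in the infrared.
lam = np.linspace(0.3e-6, 50e-6, 2000)                       # wavelength grid, m
eps_spectral = 0.05 + 0.30 / (1 + np.exp(-(lam - 8e-6) / 1e-6))

for T in (500.0, 600.0, 700.0):                              # example absorber temperatures, K
    print(f"T = {T:.0f} K -> total emissivity = {total_emissivity(lam, eps_spectral, T):.3f}")
```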
Abstract:
The aim of this study is to analyse the assumptions of the cyberspace protection policy of the Republic of Poland as presented in the document entitled Polityka ochrony cyberprzestrzeni Rzeczypospolitej Polskiej (Cyberspace Protection Policy of the Republic of Poland), published in 2013 by the Ministry of Administration and Digitization and the Internal Security Agency. The article analyses the postulates and guidelines set out there and confronts these assumptions with the elements of the cyberspace protection system of the Republic of Poland. One must agree with the authors of this strategy that guaranteeing a state of complete ICT security is impossible; only a certain acceptable level of security can be achieved. It seems that attaining this level should be substantially aided by implementing the priorities of the cyberspace protection policy of the Republic of Poland, in particular: defining the competences of the entities responsible for cyberspace security; creating and operating a cyberspace security management system that is coherent across all government administration bodies, together with guidelines in this respect for non-public entities; establishing a durable system of coordination and information exchange between the entities responsible for cyberspace security and cyberspace users; and raising cyberspace users' awareness of security methods and measures.
Abstract:
Object detection can be challenging when the object class exhibits large variations. One commonly used strategy is to first partition the space of possible object variations and then train separate classifiers for each portion. However, with continuous spaces the partitions tend to be arbitrary since there are no natural boundaries (for example, consider the continuous range of human body poses). In this paper, a new formulation is proposed, where the detectors themselves are associated with continuous parameters and reside in a parameterized function space. There are two advantages of this strategy. First, a priori partitioning of the parameter space is not needed; the detectors themselves are in a parameterized space. Second, the underlying parameters for object variations can be learned from training data in an unsupervised manner. In profile face detection experiments, at a fixed false alarm number of 90, our method attains a detection rate of 75% vs. 70% for the method of Viola-Jones. In hand shape detection, at a false positive rate of 0.1%, our method achieves a detection rate of 99.5% vs. 98% for partition-based methods. In pedestrian detection, our method reduces the missed detection rate by a factor of three at a false positive rate of 1%, compared with the method of Dalal-Triggs.
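As a hedged sketch, and not the paper's actual model: the idea of detectors living in a parameterized function space can be illustrated by a linear detector whose weights are a smooth function of a continuous variation parameter, scored by searching over that parameter rather than over fixed partitions (NumPy assumed; the basis, dimensions and data are illustrative):

```python
# Illustrative sketch: a family of linear detectors indexed by a continuous
# parameter theta, instead of one detector per discrete partition.
import numpy as np

def basis(theta):
    """Simple polynomial basis over the continuous variation parameter."""
    return np.array([1.0, theta, theta**2])

class ParameterizedDetector:
    def __init__(self, W):
        # W maps basis coefficients to feature weights: w(theta) = basis(theta) @ W
        self.W = W

    def score(self, x, theta):
        """Detection score of feature vector x under variation parameter theta."""
        return basis(theta) @ self.W @ x

    def detect(self, x, thetas=np.linspace(-1.0, 1.0, 21)):
        """Search over the continuous parameter instead of fixed partitions."""
        scores = [self.score(x, t) for t in thetas]
        best = int(np.argmax(scores))
        return scores[best], thetas[best]

# Toy usage with random weights and a random feature vector.
rng = np.random.default_rng(0)
detector = ParameterizedDetector(W=rng.normal(size=(3, 16)))
x = rng.normal(size=16)
print(detector.detect(x))
```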
Abstract:
Object detection and recognition are important problems in computer vision. The challenges of these problems come from the presence of noise, background clutter, large within-class variations of the object class and limited training data. In addition, the computational complexity of the recognition process is also a concern in practice. In this thesis, we propose one approach to handle the problem of detecting an object class that exhibits large within-class variations, and a second approach to speed up the classification process. In the first approach, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly solved using a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. For applications where explicit parameterization of the within-class states is unavailable, a nonparametric formulation of the kernel can be constructed with a proper foreground distance/similarity measure. Detector training is accomplished via standard Support Vector Machine learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When image masks for foreground objects are provided in training, the detectors can also produce object segmentation. Methods for generating a representative sample set of detectors are proposed that can enable efficient detection and tracking. In addition, because individual detectors verify hypotheses of the foreground state, they can also be incorporated in a tracking-by-detection framework to recover the foreground state in image sequences. To run the detectors efficiently at the online stage, an input-sensitive speedup strategy is proposed to select the most relevant detectors quickly. The proposed approach is tested on data sets of human hands, vehicles and human faces. On all data sets, the proposed approach achieves improved detection accuracy over the best competing approaches.

In the second part of the thesis, we formulate a filter-and-refine scheme to speed up recognition processes. The binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the Face Recognition Grand Challenge version 2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and view-angle estimation on a multi-pose vehicle data set. On all data sets, our approach is at least five times faster than simply evaluating all foreground state hypotheses, with virtually no loss in classification accuracy.
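A minimal sketch of the multiplicative-kernel idea described above, assuming scikit-learn with a precomputed kernel; the RBF components, toy data and parameters are illustrative assumptions rather than the thesis implementation:

```python
# Hedged sketch of a product-of-two-kernels SVM: one kernel compares appearance
# features, the other compares within-class state; their product couples
# detection and state (pose) estimation in a single classifier.
import numpy as np
from sklearn.svm import SVC

def rbf(A, B, gamma):
    """Pairwise RBF kernel between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def product_kernel(feats_a, states_a, feats_b, states_b):
    """Multiplicative form of an appearance kernel and a within-class-state kernel."""
    return rbf(feats_a, feats_b, gamma=0.5) * rbf(states_a, states_b, gamma=2.0)

# Toy training data: appearance features X, within-class state theta, fg/bg label y.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
theta = rng.uniform(-1, 1, size=(60, 1))
y = (X[:, 0] + 0.5 * theta[:, 0] > 0).astype(int)

K_train = product_kernel(X, theta, X, theta)
clf = SVC(kernel="precomputed").fit(K_train, y)

# Scoring a new sample under a hypothesized state value (hypothesis verification).
x_new = rng.normal(size=(1, 8))
theta_hyp = np.array([[0.3]])
K_test = product_kernel(x_new, theta_hyp, X, theta)
print("decision value:", clf.decision_function(K_test))
```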
Abstract:
In a typical overlay network for routing or content sharing, each node must select a fixed number of immediate overlay neighbors for routing traffic or content queries. A selfish node entering such a network would select neighbors so as to minimize the weighted sum of expected access costs to all its destinations. Previous work on selfish neighbor selection has built intuition with simple models where edges are undirected, access costs are modeled by hop counts, and nodes have potentially unbounded degrees. However, in practice, important constraints not captured by these models lead to richer games with substantively and fundamentally different outcomes. Our work models neighbor selection as a game involving directed links, constraints on the number of allowed neighbors, and costs reflecting both network latency and node preference. We express a node's "best response" wiring strategy as a k-median problem on asymmetric distances, and use this formulation to obtain pure Nash equilibria. We experimentally examine the properties of such stable wirings on synthetic topologies, as well as on real topologies and maps constructed from PlanetLab and AS-level Internet measurements. Our results indicate that selfish nodes can reap substantial performance benefits when connecting to overlay networks composed of non-selfish nodes. On the other hand, in overlays dominated by selfish nodes, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naive wiring strategies.
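A hedged sketch of the "best response" wiring described above, treating neighbor selection as a small k-median-style search over an asymmetric distance matrix; the brute-force enumeration, toy graph and access weights are illustrative assumptions, not the paper's algorithm:

```python
# Illustrative best-response wiring: pick at most k overlay neighbors minimizing
# the weighted sum of access costs, where reaching destination j through
# neighbor i costs dist[u][i] + dist[i][j].
from itertools import combinations

def best_response(u, candidates, dist, weights, k):
    """Return the k-neighbor set minimizing sum_j w_j * min_i (dist[u][i] + dist[i][j])."""
    destinations = list(weights)
    best_cost, best_set = float("inf"), None
    for neigh in combinations(candidates, k):
        cost = sum(
            weights[j] * min(dist[u][i] + dist[i][j] for i in neigh)
            for j in destinations if j != u
        )
        if cost < best_cost:
            best_cost, best_set = cost, neigh
    return best_set, best_cost

# Toy asymmetric distance matrix among nodes 0..3 (dist[i][j] need not equal dist[j][i]).
dist = {
    0: {0: 0, 1: 2, 2: 5, 3: 9},
    1: {0: 3, 1: 0, 2: 1, 3: 4},
    2: {0: 6, 1: 2, 2: 0, 3: 2},
    3: {0: 8, 1: 5, 2: 3, 3: 0},
}
weights = {1: 1.0, 2: 2.0, 3: 0.5}   # how often node 0 accesses each destination
print(best_response(u=0, candidates=[1, 2, 3], dist=dist, weights=weights, k=2))
```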