886 results for Optimal test set
Abstract:
In Australian universities, journalism educators usually come to the academy from the journalism profession and consequently place a high priority on leading students to develop a career-focussed skill set. The changing nature of the technological, political and economic environments and the professional destinations of journalism graduates place demands on journalism curricula and educators alike. The profession is diverse, such that the better description is of many ‘journalisms’ rather than one ‘journalism’, with consequent pressure on curricula to extend beyond the traditional skill set, where practical ‘writing’ and ‘editing’ skills dominate, to incorporate critical theory and the social construction of knowledge. A parallel set of challenges faces academic staff operating in a higher education environment where change is the only constant and research takes precedence over curriculum development. In this paper, three educators at separate universities report on their attempts to implement curriculum change to imbue graduates with skills and attributes such as enhanced teamwork, problem solving and critical thinking, to operate in the divergent environment of 21st century journalism. The paper uses narrative case study to illustrate the different approaches. Data collected from formal university student evaluations inform the narratives, along with rich but less formal qualitative data including anecdotal student comments and student reflective assessment presentations. Comparison of the three approaches illustrates the dilemmas academic staff face when teaching in disciplines that are impacted by rapid changes in technology requiring new pedagogical approaches. Recommendations for future directions are considered against the background of learning purpose.
Abstract:
The learning experiences of student nurses undertaking clinical placement are reported widely; however, little is known about the learning experiences of health professionals undertaking continuing professional development (CPD) in a clinical setting, especially in palliative care. The aim of this study, which was conducted as part of the national evaluation of a professional development program involving clinical attachments with palliative care services (The Program of Experience in the Palliative Approach [PEPA]), was to explore factors influencing the learning experiences of participants over time. Thirteen semi-structured, one-to-one telephone interviews were conducted with five participants throughout their PEPA experience. The analysis was informed by the traditions of adult, social and psychological learning theories and relevant literature. The participants' learning was enhanced by engaging interactively with host site staff and patients, and by the validation of their personal and professional life experiences together with the reciprocation of their knowledge with host site staff. Self-directed learning strategies maximised the participants' learning outcomes. Inclusion in team activities helped the participants feel accepted within the host site. Personal interactions with host site staff and patients shaped the social/cultural environment of the host site. Optimal learning was promoted when participants were actively engaged, felt accepted and supported by, and experienced positive interpersonal interactions with, the host site staff.
Abstract:
As the problems involving infrastructure delivery have become more complex and contentious, there has been an acknowledgement that these problems cannot be resolved by any one body working alone. This understanding has driven multi-sectoral collaboration and has led to an expansion of the set of actors, including stakeholders, who are now involved in delivery of infrastructure projects and services. However, more needs to be understood about how to include stakeholders in these processes and ways of developing the requisite combination of stakeholders to achieve effective outcomes. This thesis draws on stakeholder theory and governance network theory to obtain insights into how three multi-level networks within the Roads Alliance in Queensland engage with stakeholders in the delivery of complex and sensitive infrastructure services and projects. New knowledge about stakeholders will be obtained by testing a model of Stakeholder Salience and Engagement which combines and extends the stakeholder identification and salience theory, ladder of stakeholder management and engagement and the model of stakeholder engagement and moral treatment of stakeholders. By applying this model, the broad research question: “Who or what decides how stakeholders are engaged by governance networks delivering public outcomes?” will be addressed. The case studies will test a theoretical model of stakeholder salience and engagement which links strategic decisions about stakeholder salience with the quality and quantity of engagement strategies for engaging different types of stakeholders. A multiple embedded case study design has been selected as the overall approach to explore, describe, explain and evaluate how stakeholder engagement occurs in three governance networks delivering road infrastructure in Queensland. 
The research design also incorporates a four-stage approach to data collection: observations, stakeholder analysis, a telephone survey questionnaire and semi-structured interviews. The outcomes of this research will contribute to and extend stakeholder theory by showing how stakeholder salience impacts decisions about the types of engagement processes implemented. Governance network theory will be extended by showing how governance networks interact with stakeholders through the concepts of stakeholder salience and engagement. From a practical perspective, this research will provide governance networks with an indication of how to optimise engagement with different types of stakeholders.
Abstract:
There are about 82 million immigrants in the OECD area; worldwide, there are about 191 million immigrants and displaced persons, and some 30-40 million unauthorised immigrants. Also according to a recent OECD report, little in-depth research has been carried out to date to help decision makers in government, business, and society at large to better understand the complexities and wider consequences of future migration flows. The literature has also indicated that the lack of a skilled population in much-needed occupations in countries of destination has contributed to the need to bring in skilled foreign workers. Furthermore, despite the current global financial crisis, some areas of occupation are in need of skilled workers, such that in a job-scarce environment jobs become fewer and employers are more likely to demand skills from both natives and immigrants. Global competition for labour is expected to intensify, especially for top talent, highly qualified and semi-skilled individuals. This exacerbates the problems faced by current skilled immigrants and skilled refugees, particularly those from non-main English speaking countries who are not employed at an optimal skill level in countries of destination. The research study investigates whether skilled immigrants are being effectively utilised in their countries of destination, in the context of employment. In addition to skilled immigrants, data sampling will also include skilled refugees who, although arriving under the humanitarian program, possess formal qualifications from their country of origin. Underlying variables will be explored, such as the strength of social capital or interpersonal ties, and human capital in terms of educational attainment and proficiency in the English language. The aim of the study is to explain the relationship between the variables and whether the variables influence employment outcomes.
A broad-ranging preliminary literature review has been undertaken to explore the substantial bodies of knowledge on skilled immigrants worldwide, including skilled refugees, and to investigate whether the utilisation issues are universal or specific to a country. In addition, preliminary empirical research and analysis has been undertaken to set the research focus and to identify the problems beyond the literature. Preliminary findings have indicated that immigrants and refugees from non-main English speaking countries are particularly impacted by employment issues, regardless of the skills and qualifications acquired in their countries of origin, compared with immigrants from main English speaking countries. Preliminary findings from the literature review also indicate that gaps in knowledge still exist. Although the past two decades have witnessed a virtual explosion of theory and research on international migration, no in-depth research has been located that specifically links immigrants' and refugees' social and human capital to employment outcomes. This research study aims to fill these gaps and subsequently contribute to the contemporary body of knowledge on the utilisation of skilled immigrants and skilled refugees, specifically those from non-main English speaking countries. A mixed methods design will be used, incorporating techniques from both quantitative and qualitative research traditions, which will be triangulated at the end of the data collection stage.
Abstract:
The selection of projects and programs of work is a key function of both public and private sector organisations. Ideally, projects and programs that are selected to be undertaken are consistent with strategic objectives for the organisation; will provide value for money and return on investment; will be adequately resourced and prioritised; will not compete with general operations for resources and not restrict the ability of operations to provide income to the organisation; will match the capacity and capability of the organisation to deliver; and will produce outputs that are willingly accepted by end users and customers. Unfortunately, this is not always the case. Possible inhibitors to optimal project portfolio selection include: processes that are inconsistent with the needs of the organisation; reluctance to use an approach that may not produce predetermined preferences; loss of control and perceived decision making power; reliance on quantitative methods rather than qualitative methods for justification; ineffective project and program sponsorship; unclear project governance, processes and linkage to business strategies; ignorance, taboos and perceived effectiveness; inadequate education and training about the processes and their importance.
Abstract:
This paper presents early results from a pilot project which aims to investigate the relationship between the proprietary structure of small and medium-sized Italian family firms and their owners’ orientation towards a “business evaluation process”. Evidence from many studies points to the importance of family business in the worldwide economic environment: in Italy 93% of businesses are family firms, and 98% of them have fewer than 50 employees (Italian Association of Family Firms, 2004), so we judged family SMEs a relevant field of investigation. In this study we assume a broad definition of family business as “a firm whose control (50% of shares or voting rights) is closely held by the members of the same family” (Corbetta, 1995). “Business evaluation process” is intended here either as a “continuous evaluation process” (the expression of a well developed managerial attitude) or as an “immediate valuation” (i.e. in the case of a new shareholder’s entrance, share exchange among siblings, etc.). We set two hypotheses to be tested in this paper: the first is “quantitative” and aims to verify whether the number of owners (independent variable) in a family firm is positively correlated with the business evaluation process. If a family firm is led by only one subject, it is more likely that personal values, culture and feelings may affect his choices more than “purely economic opportunities”; so there is less concern about monitoring economic performance or the economic value of the firm. As the number of shareholders increases, economic aspects in managing the firm grow in importance over personal values, and "value orientation" acquires a central role. The second hypothesis investigates if and to what extent the presence of “non-family members” among the owners affects their orientation to the business evaluation process.
Cramér's V has been used to test the hypotheses; neither was confirmed by these early results. The next steps will involve inferential analysis on a representative sample of the population.
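For illustration, Cramér's V for a two-way contingency table can be computed from the chi-squared statistic. The sketch below uses hypothetical counts (not the study's data), cross-tabulating an ownership-structure category against adoption of a business evaluation process:

```python
import math

def cramers_v(table):
    """Cramér's V for an r x c contingency table of observed counts."""
    rows = [sum(r) for r in table]                 # row totals
    cols = [sum(c) for c in zip(*table)]           # column totals
    n = sum(rows)                                  # grand total
    # Pearson chi-squared statistic against independence
    chi2 = sum(
        (table[i][j] - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
        for i in range(len(rows)) for j in range(len(cols))
    )
    k = min(len(rows), len(cols)) - 1
    return math.sqrt(chi2 / (n * k))

# Hypothetical 2x2 table: single owner vs. multiple owners (rows)
# against evaluation process absent vs. present (columns)
v = cramers_v([[20, 10],
               [15, 25]])
```

V ranges from 0 (no association) to 1 (perfect association), which is why it suits a hypothesis of the form "the number of owners is correlated with the evaluation process".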
Abstract:
A SNP genotyping method was developed for E. faecalis and E. faecium using the 'Minimum SNPs' program. SNP sets were interrogated using allele-specific real-time PCR. SNP-typing sub-divided clonal complexes 2 and 9 of E. faecalis and 17 of E. faecium, members of which cause the majority of nosocomial infections globally.
Abstract:
The European Community in 1990 was taken by surprise by the urgency of demands from the newly elected Eastern European governments to become member countries. Those governments were honouring the mass social movement of the streets the year before, which demanded free elections and a liberal economic system associated with “Europe”. The mass movement had actually been accompanied by much activity within institutional politics, in Western Europe, the former “satellite” states, the Soviet Union and the United States, to set up new structures, with German reunification and an expanded EC as the centrepiece. This paper draws on the writer’s doctoral dissertation on mass media in the collapse of the Eastern bloc, focused on the Berlin Wall, documenting both public protests and institutional negotiations. For example, the writer, a correspondent in Europe at that time, recounts interventions of the German Chancellor, Helmut Kohl, at a European summit in Paris nine days after the fall of the Wall, and separate negotiations with the French President, Francois Mitterrand, on reunification and EU monetary union after 1992. Through such processes, the “European idea” would receive fresh impetus, though the EU which eventuated came with many altered expectations. It is argued here that as a result of the shock of 1989, a “social” Europe can be seen emerging as a shared experience of daily life, especially among people born during the last two decades of European consolidation. The paper draws on the author’s major research, in four parts: (1) Field observation from the strategic vantage point of a news correspondent. This includes a treatment of evidence at the time of the wishes and intentions of the mass public (including the unexpected drive to join the European Community) and those of governments (e.g. thoughts of a “Tiananmen Square solution” in East Berlin, versus the non-intervention policies of the Soviet leader, Mikhail Gorbachev).
(2) A review of coverage of the crisis of 1989 by major news media outlets, treated as a history of the process. (3) As a comparison, and a test of accuracy and analysis, a review of conventional histories of the crisis appearing a decade later. (4) A further review, and test, provided by journalists responsible for the coverage of the time, as reflection on practice, obtained from semi-structured interviews.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available.
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium.
This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered.
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible since the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths.
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
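The oscillation-counting thermometry described in this abstract (each full intensity oscillation corresponds to a fixed, calibrated temperature change) can be sketched in a few lines. The trace and the per-oscillation calibration below are synthetic stand-ins, not measured data:

```python
import math

def count_oscillations(intensity):
    """Count full oscillations as pairs of crossings of the mean level."""
    mean = sum(intensity) / len(intensity)
    crossings = sum(1 for a, b in zip(intensity, intensity[1:])
                    if (a - mean) * (b - mean) < 0)
    return crossings // 2  # two mean-level crossings per full oscillation

def temperature_change(intensity, dT_per_oscillation):
    """Infer temperature change from an intensity trace, given the
    calibrated temperature change corresponding to one oscillation."""
    return count_oscillations(intensity) * dT_per_oscillation

# Synthetic transmitted-intensity trace with five full oscillations;
# the 2.0 degC-per-oscillation calibration is a hypothetical value
trace = [math.cos(2 * math.pi * 5 * i / 1000 + 0.3) for i in range(1000)]
dT = temperature_change(trace, dT_per_oscillation=2.0)
```

Counting crossings of the mean (rather than of zero) makes the sketch tolerant of a constant background offset in the detected intensity.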
Abstract:
Human mesenchymal stem cells (hMSCs) possess great therapeutic potential for the treatment of bone disease and fracture non-union. Too often, however, in vitro evidence alone of the interaction between hMSCs and the biomaterial of choice is used as justification for continued development of the material into the clinic. Clearly for hMSC-based regenerative medicine to be successful for the treatment of orthopaedic trauma, it is crucial to transplant hMSCs with a suitable carrier that facilitates their survival, optimal proliferation and osteogenic differentiation in vitro and in vivo. This motivated us to evaluate the use of polycaprolactone-20% tricalcium phosphate (PCL-TCP) scaffolds produced by fused deposition modeling for the delivery of hMSCs. When hMSCs were cultured on the PCL-TCP scaffolds and imaged by a combination of phase contrast, scanning electron and confocal laser microscopy, we observed five distinct stages of colonization over a 21-day period that were characterized by cell attachment, spreading, cellular bridging, the formation of a dense cellular mass and the accumulation of a mineralized extracellular matrix when induced with osteogenic stimulants. Having established that PCL-TCP scaffolds are able to support hMSC proliferation and osteogenic differentiation, we next tested the in vivo efficacy of hMSC-loaded PCL-TCP scaffolds in nude rat critical-sized femoral defects. We found that fluorescently labeled hMSCs survived in the defect site for up to 3 weeks post-transplantation. However, only 50% of the femoral defects treated with hMSCs responded favorably as determined by new bone volume. As such, we show that verification of hMSC viability and differentiation in vitro is not sufficient to predict the efficacy of transplanted stem cells to consistently promote bone formation in orthotopic defects in vivo.
Abstract:
OBJECTIVE: The accurate quantification of human diabetic neuropathy is important to define at-risk patients, anticipate deterioration, and assess new therapies. ---------- RESEARCH DESIGN AND METHODS: A total of 101 diabetic patients and 17 age-matched control subjects underwent neurological evaluation, neurophysiology tests, quantitative sensory testing, and evaluation of corneal sensation and corneal nerve morphology using corneal confocal microscopy (CCM). ---------- RESULTS: Corneal sensation decreased significantly (P = 0.0001) with increasing neuropathic severity and correlated with the neuropathy disability score (NDS) (r = 0.441, P < 0.0001). Corneal nerve fiber density (NFD) (P < 0.0001), nerve fiber length (NFL) (P < 0.0001), and nerve branch density (NBD) (P < 0.0001) decreased significantly with increasing neuropathic severity and correlated with NDS (NFD r = −0.475, P < 0.0001; NBD r = −0.511, P < 0.0001; and NFL r = −0.581, P < 0.0001). NBD and NFL demonstrated a significant and progressive reduction with worsening heat pain thresholds (P = 0.01). Receiver operating characteristic curve analysis for the diagnosis of neuropathy (NDS >3) defined an NFD of <27.8/mm2 with a sensitivity of 0.82 (95% CI 0.68–0.92) and specificity of 0.52 (0.40–0.64) and for detecting patients at risk of foot ulceration (NDS >6) defined an NFD cutoff of <20.8/mm2 with a sensitivity of 0.71 (0.42–0.92) and specificity of 0.64 (0.54–0.74). ---------- CONCLUSIONS: CCM is a noninvasive clinical technique that may be used to detect early nerve damage and stratify diabetic patients with increasing neuropathic severity. Established diabetic neuropathy leads to pain and foot ulceration. Detecting neuropathy early may allow intervention with treatments to slow or reverse this condition (1).
Recent studies suggested that small unmyelinated C-fibers are damaged early in diabetic neuropathy (2–4) but can only be detected using invasive procedures such as sural nerve biopsy (4,5) or skin-punch biopsy (6–8). Our studies have shown that corneal confocal microscopy (CCM) can identify early small nerve fiber damage and accurately quantify the severity of diabetic neuropathy (9–11). We have also shown that CCM relates to intraepidermal nerve fiber loss (12) and a reduction in corneal sensitivity (13) and detects early nerve fiber regeneration after pancreas transplantation (14). Recently we have also shown that CCM detects nerve fiber damage in patients with Fabry disease (15) and idiopathic small fiber neuropathy (16) when results of electrophysiology tests and quantitative sensory testing (QST) are normal. In this study we assessed corneal sensitivity and corneal nerve morphology using CCM in diabetic patients stratified for the severity of diabetic neuropathy using neurological evaluation, electrophysiology tests, and QST. This enabled us to compare CCM and corneal esthesiometry with established tests of diabetic neuropathy and define their sensitivity and specificity to detect diabetic patients with early neuropathy and those at risk of foot ulceration.
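The cutoff-selection step behind sensitivity/specificity pairs like those reported above can be illustrated with a small sketch. The NFD values, the labels, and the use of Youden's J to pick the cutoff are illustrative assumptions here, not the study's actual data or method:

```python
def sens_spec(values, labels, cutoff):
    """Sensitivity and specificity for a 'positive if value < cutoff' rule
    (lower nerve fiber density indicating neuropathy)."""
    tp = sum(1 for v, y in zip(values, labels) if y and v < cutoff)
    fn = sum(1 for v, y in zip(values, labels) if y and v >= cutoff)
    tn = sum(1 for v, y in zip(values, labels) if not y and v >= cutoff)
    fp = sum(1 for v, y in zip(values, labels) if not y and v < cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(values, labels):
    """Pick the cutoff maximising Youden's J = sensitivity + specificity - 1."""
    return max(sorted(set(values)),
               key=lambda c: sum(sens_spec(values, labels, c)) - 1)

# Hypothetical NFD values (fibers/mm^2) and neuropathy labels (1 = neuropathy)
nfd = [12, 18, 22, 25, 28, 31, 35, 40]
neuropathy = [1, 1, 1, 0, 1, 0, 0, 0]
cutoff = best_cutoff(nfd, neuropathy)
sens, spec = sens_spec(nfd, neuropathy, cutoff)
```

Sweeping every observed value as a candidate cutoff and scoring each point is exactly what an ROC analysis does; Youden's J is one common way to collapse the curve to a single operating point.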
Abstract:
The chapter investigates Shock Control Bumps (SCB) on a Natural Laminar Flow (NLF) aerofoil, RAE 5243, for Active Flow Control (AFC). An SCB is used to decelerate supersonic flow on the suction/pressure side of a transonic aerofoil, which delays shock occurrence or weakens shock strength. Such an AFC technique significantly reduces total drag at transonic speeds. This chapter considers SCB shape design optimisation at two boundary layer transition positions (0 and 45%) using Euler software coupled with viscous boundary layer effects and robust Evolutionary Algorithms (EAs). The optimisation method is based on a canonical Evolution Strategy (ES) algorithm and incorporates the concepts of hierarchical topology and parallel asynchronous evaluation of candidate solutions. Two test cases are considered in the numerical experiments: in the first test the transition point occurs at the leading edge, and in the second test the transition point is fixed at 45% of wing chord. Numerical results are presented, and it is demonstrated that an optimal SCB design can be found that significantly reduces transonic wave drag and improves the lift-to-drag (L/D) ratio compared with the baseline aerofoil design.
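A canonical Evolution Strategy of the kind named in this abstract can be sketched minimally. The (mu, lambda) scheme with a simple geometric step-size decay (a simplification of self-adaptation), the toy objective, and all parameter values below are illustrative assumptions; the study itself evaluates candidates with an Euler flow solver, not a closed-form function:

```python
import random

def es_minimise(f, x0, sigma=0.3, mu=5, lam=20, gens=60, seed=1):
    """A minimal (mu, lambda) Evolution Strategy sketch."""
    rng = random.Random(seed)
    parents = [list(x0) for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            p = rng.choice(parents)
            # Gaussian mutation of a randomly chosen parent
            offspring.append([xi + rng.gauss(0, sigma) for xi in p])
        offspring.sort(key=f)      # comma selection: parents are discarded
        parents = offspring[:mu]
        sigma *= 0.95              # simple step-size decay (a simplification)
    return min(parents, key=f)

# Hypothetical stand-in objective over two bump design variables
# (height, position), with its optimum placed at (0.5, 0.7)
def toy_drag(x):
    h, pos = x
    return (h - 0.5) ** 2 + (pos - 0.7) ** 2

best = es_minimise(toy_drag, [0.0, 0.0])
```

In the hierarchical, asynchronous variant the abstract mentions, several such populations run concurrently at different model fidelities and exchange their best candidates, but the inner loop has this same mutate-evaluate-select shape.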
Abstract:
These cards are designed as a resource for implementing participatory action research (PAR) in social programs. Each card covers one of the five key stages of PAR as outlined in the manual 'On PAR: Using Participatory Action Research to Improve Early Intervention' (Crane and O'Regan, 2010).
Abstract:
The traditional searching method for model-order selection in linear regression is a nested full-parameters-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracy than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information theoretic based and bootstrap based). Also, we show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.

Index Terms: Model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information theoretic-based procedures include Akaike’s information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model.
This makes the comparison between model-order selection algorithms difficult, as within the same model with a given order one could find an example for which one of the methods performs favourably or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
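The contrast between the full-model and partial-model searches can be sketched with a toy example. The Fourier-cosine design (chosen because its regressors are orthogonal on a regular grid), the synthetic data, and the use of AIC alone are simplifying assumptions here, not the paper's simulation setup:

```python
import itertools
import math
import random

def make_data(n=64, noise=0.3, seed=0):
    """Synthetic series whose true signal uses only the k = 3 regressor."""
    rng = random.Random(seed)
    t = [2 * math.pi * i / n for i in range(n)]
    y = [2.0 * math.cos(3 * ti) + rng.gauss(0, noise) for ti in t]
    return t, y

def rss(t, y, subset):
    """Residual sum of squares after projecting y onto cos(k*t) regressors.
    On this grid the regressors are orthogonal, so projections are removed
    independently, each column having squared norm n/2."""
    n = len(y)
    total = sum(yi * yi for yi in y)
    for k in subset:
        coef = sum(yi * math.cos(k * ti) for yi, ti in zip(y, t)) / (n / 2)
        total -= coef * coef * (n / 2)
    return total

def aic(t, y, subset):
    n = len(y)
    return n * math.log(rss(t, y, subset) / n) + 2 * len(subset)

t, y = make_data()
candidates = [1, 2, 3, 4]

# Full-model order search: only the nested sets {1}, {1,2}, {1,2,3}, {1,2,3,4}
full = min((tuple(candidates[:m]) for m in range(1, 5)),
           key=lambda s: aic(t, y, s))

# Partial-model order search: the best sub-model of every size
partial = min((s for m in range(1, 5)
               for s in itertools.combinations(candidates, m)),
              key=lambda s: aic(t, y, s))
```

Because the true signal uses only the k = 3 term, the partial search can reach it without also carrying the k = 1 and k = 2 regressors that the nested full-model search is forced to include, which is the mechanism behind the accuracy gains claimed above.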
Abstract:
Process modeling is an emergent area of Information Systems research that is characterized by an abundance of conceptual work with little empirical research. To fill this gap, this paper reports on the development and validation of an instrument to measure user acceptance of process modeling grammars. We advance an extended model for a multi-stage measurement instrument development procedure, which incorporates feedback from both expert and user panels. We identify two main contributions: First, we provide a validated measurement instrument for the study of user acceptance of process modeling grammars, which can be used to assist in further empirical studies that investigate phenomena associated with the business process modeling domain. Second, in doing so, we describe in detail a procedural model for developing measurement instruments that ensures high levels of reliability and validity, which may assist fellow scholars in executing their empirical research.