884 results for test data generation


Relevance: 80.00%

Abstract:

Aim. Our aim in this paper is to explain a methodological/methods package devised to incorporate situational and social world mapping with frame analysis, based on a grounded theory study of Australian rural nurses' experiences of mentoring. Background. Situational analysis, as conceived by Adele Clarke, shifts the research methodology of grounded theory from being located within a postpositivist paradigm to a postmodern paradigm. Clarke uses three types of maps during this process: situational, social world and positional, in combination with discourse analysis. Method. During our grounded theory study, the process of concurrent interview data generation and analysis incorporated situational and social world mapping techniques. An outcome of this was our increased awareness of how outside actors influenced participants in their constructions of mentoring. In our attempts to use Clarke's methodological package, however, it became apparent that our constructivist beliefs about human agency could not be reconciled with the postmodern project of discourse analysis. We then turned to the literature on symbolic interactionism and adopted frame analysis as a method to examine the literature on rural nursing and mentoring as a secondary form of data. Findings. While we found situational and social world mapping very useful, we were less successful in using positional maps. In retrospect, we would argue that collective action framing provides an alternative to analysing such positions in the literature. This is particularly so for researchers who locate themselves within a constructivist paradigm, and who are therefore unwilling to reject the notion of human agency and the ability of individuals to shape their world in some way. Conclusion. 
Our example of using this package of situational and social worlds mapping with frame analysis is intended to assist other researchers to locate participants more transparently in the social worlds that they negotiate in their everyday practice. © 2007 Blackwell Publishing Ltd.

Relevance: 80.00%

Abstract:

Aim. This paper is a report of a study to explore rural nurses' experiences of mentoring. Background. Mentoring has recently been proposed by governments, advocates and academics as a solution to the problem of retaining rural nurses in the Australian workforce. Action in the form of mentor development workshops has changed the way that some rural nurses now construct supportive relationships as mentoring. Method. A grounded theory design was used with nine rural nurses. Eleven semi-structured interviews were conducted in various states of Australia during 2004-2005. Situational analysis mapping techniques and frame analysis were used in combination with concurrent data generation and analysis and theoretical sampling. Findings. Experienced rural nurses cultivate novices through supportive mentoring relationships. The impetus for such relationships comes from their own histories of living and working in the same community, and this was termed 'live my work'. Rural nurses use multiple perspectives of self in order to manage their interactions with others in their roles as community members, consumers of healthcare services and nurses. Personal strategies adapted to local context constitute the skills that experienced rural nurses pass on to neophyte rural nurses through mentoring, while at the same time protecting them through troubleshooting and translating local cultural norms. Conclusion. Living and working in the same community creates a set of complex challenges for novice rural nurses that are better faced with a mentor in place. Thus, mentoring has become an integral part of experienced rural nurses' practice to promote staff retention. © 2007 The Authors.

Relevance: 80.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the error on the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
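The label-flipping equivalence noted above can be made concrete. The sketch below is a toy construction (not from the paper): for a class of one-dimensional threshold classifiers, it computes the maximal discrepancy both directly, as the largest gap between the errors on the two halves of the data, and via empirical risk minimization with the labels of the first half flipped.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: 1-D features, binary labels with 10% label noise.
n = 200
X = rng.uniform(-1.0, 1.0, size=n)
y = (X > 0.1).astype(int)
y[rng.random(n) < 0.1] ^= 1

# Hypothesis class: threshold classifiers h_t(x) = 1[x > t].
thresholds = np.linspace(-1.0, 1.0, 101)

def error(t, X, y):
    """Empirical 0-1 error of the threshold classifier h_t."""
    return float(np.mean((X > t).astype(int) != y))

half = n // 2

# Maximal discrepancy: largest difference, over the class, between
# the error on the first half of the data and on the second half.
disc = max(error(t, X[:half], y[:half]) - error(t, X[half:], y[half:])
           for t in thresholds)

# Equivalent computation via empirical risk minimization on the full
# data with the labels of the first half flipped: disc = 1 - 2*min_risk.
y_flip = y.copy()
y_flip[:half] ^= 1
min_risk = min(error(t, X, y_flip) for t in thresholds)
```

Since the identity err1(h) - err2(h) = 1 - 2·risk_flipped(h) holds for every hypothesis under the 0-1 loss, the two quantities agree, which is what makes this penalty computable with any empirical risk minimizer.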

Relevance: 80.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.

Relevance: 80.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
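To make the kernel-matrix viewpoint concrete, here is a minimal numpy sketch. It uses a fixed RBF kernel chosen purely for illustration (the SDP learning step described in the paper is not reproduced here), checks the two defining properties of a kernel matrix, and recovers the embedded geometry directly from the matrix entries.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))  # six points in a 2-D input space

def rbf_kernel(X, gamma=0.5):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

K = rbf_kernel(X)

# A valid kernel matrix is symmetric and positive semidefinite.
symmetric = np.allclose(K, K.T)
min_eig = np.linalg.eigvalsh(K).min()

# The entries encode the geometry of the implicit embedding: the
# squared distance between the embedded points phi(x_0) and phi(x_1)
# is K(x_0, x_0) + K(x_1, x_1) - 2 K(x_0, x_1).
d01_sq = K[0, 0] + K[1, 1] - 2.0 * K[0, 1]
```

The distance identity is what "specifying the geometry of the embedding space" means in practice: all pairwise distances and angles between embedded points are functions of the kernel matrix alone.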

Relevance: 80.00%

Abstract:

Many ageing road bridges, particularly timber bridges, require urgent improvement due to the demand imposed by the recent version of the Australian bridge loading code, AS 5100. As traffic volume plays a key role in the decision of budget allocations for bridge refurbishment/replacement, many bridges in the low volume traffic network remain in poor condition with axle load and/or speed restrictions, thus disadvantaging many rural communities. This thesis examines an economical and environmentally sensible option of incorporating disused flat rail wagons (FRW) in the construction of bridges in the low volume, high axle load road network. The constructability, economy and structural adequacy of the FRW road bridge are reported in the thesis, with particular focus on a demonstration bridge commissioned in regional Queensland. The demonstration bridge comprises a reinforced concrete slab (RCS) pavement resting on two FRWs with custom designed connection brackets at regular intervals along the span of the bridge. The FRW-RC bridge deck assembly is supported on elastomeric rubber pads resting on the abutment. As this type of bridge replacement technology is new and its structural design is not covered in the design standards, the in-service structural performance of the FRW bridge subjected to the high axle loadings prescribed in AS 5100 is examined through performance load testing. Both the static and the moving load tests are carried out using a fully laden, commonly available three-axle tandem truck. The bridge deck is extensively strain gauged and displacement at several key locations is measured using linear variable displacement transducers (LVDTs). A high speed camera is used in the performance test and the digital image data are analysed using proprietary software to capture the locations of the wheel positions on the bridge span accurately. 
The wheel location is thus synchronised with the displacement and strain time series to infer the structural response of the FRW bridge. Field test data are used to calibrate a grillage model, developed for further analysis of the FRW bridge under the various sets of high axle loads stipulated in the bridge design standard. The bridge behaviour predicted by the grillage model shows that the live load stresses of the FRW bridge are significantly lower than the yield strength of steel and that the deflections are well below the serviceability limit state set out in AS 5100. Based on the results reported in this thesis, it is concluded that the disused FRWs are capable of resisting the high axle loading prescribed in AS 5100 and are a viable alternative structural solution for bridge decks in the context of low volume road networks.

Relevance: 80.00%

Abstract:

A road bridge containing disused flatbed rail wagons as the primary deck superstructure was performance tested on a low volume, high axle load traffic road in Queensland, Australia; some key results are presented in this paper. A fully laden truck with a total weight of 28.88% of the serviceability design load prescribed in the Australian bridge code was used; its wheel positions were accurately captured using a high speed camera and synchronised with the real-time deflections and strains measured at the critical members of the flatbed rail wagons. The strains remained well below yield and indicated the existence of composite action between the reinforced concrete slab pavement and the wagon deck. A three dimensional grillage model was developed and calibrated using the test data, which established the structural adequacy of the rail wagons and the positive contribution of the reinforced concrete slab pavement to resisting high axle traffic loads on a single lane bridge in the low volume roads network.

Relevance: 80.00%

Abstract:

This paper presents a strategy for delayed research method selection in qualitative interpretivist research. An exemplary case details how explorative interviews were designed and conducted in accordance with a paradigm prior to deciding whether to adopt grounded theory or phenomenology for data analysis. The focus here is to determine the most appropriate research strategy, in this case the methodological framing, to conduct research and represent findings, both of which are detailed. Research addressing current management issues requires both a flexible framework and the capability to consider the research problem from various angles, to derive tangible results for academia with immediate application to business demands. Researchers, and in particular novices, often struggle to decide on an appropriate research method suitable to address their research problem. This often applies to interpretative qualitative research, where it is not always immediately clear which method is the most appropriate to use, as the research objectives shift and crystallize over time. This paper uses an exemplary case to reveal how the strategy for delayed research method selection contributes to deciding whether to adopt grounded theory or phenomenology in the initial phase of a PhD research project. In this case, semi-structured interviews were used for data generation, framed in an interpretivist approach and situated in a business context. Research questions for this study were thoroughly defined and carefully framed in accordance with the research paradigm's principles, while at the same time ensuring that the requirements of both potential research methods were met. The grounded theory and phenomenology methods were compared and contrasted to determine their suitability and whether they met the research objectives, based on a pilot study. 
The strategy proposed in this paper is an alternative to the more 'traditional' approach, which initially selects the methodological formulation, followed by data generation. In conclusion, the suggested strategy for delayed research method selection intends to help researchers identify and apply the most appropriate method to their research. This strategy is based on explorations of data generation and analysis in order to derive faithful results from the data generated.

Relevance: 80.00%

Abstract:

In this paper, three metaheuristics are proposed for solving a class of job shop, open shop, and mixed shop scheduling problems. We evaluate the performance of the proposed algorithms by means of a set of Lawrence's benchmark instances for the job shop problem, a set of randomly generated instances for the open shop problem, and combined job shop and open shop test data for the mixed shop problem. The computational results show that the proposed algorithms perform extremely well on all three types of shop scheduling problems. The results also reveal that the mixed shop problem is relatively easier to solve than the job shop problem, due to the fact that the scheduling procedure becomes more flexible with the inclusion of more open shop jobs in the mixed shop.
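The abstract does not spell out the three metaheuristics, so as a generic illustration of this style of approach, here is a simulated-annealing sketch for a small permutation flow-shop instance (a deliberately simplified relative of the job-shop problems tested in the paper); the instance data and parameters below are invented.

```python
import math
import random

random.seed(42)

# Invented instance: p[j][m] = processing time of job j on machine m.
p = [[5, 3, 6], [2, 7, 4], [6, 2, 3], [4, 5, 2], [3, 6, 5]]
n_jobs, n_machines = len(p), len(p[0])

def makespan(perm):
    """Completion time of the last job on the last machine, with jobs
    processed in the order perm on every machine (flow-shop recurrence)."""
    c = [0] * n_machines
    for j in perm:
        for m in range(n_machines):
            prev = c[m - 1] if m > 0 else 0
            c[m] = max(c[m], prev) + p[j][m]
    return c[-1]

def anneal(iters=5000, t0=10.0, cooling=0.999):
    """Simulated annealing over job permutations using swap moves."""
    perm = list(range(n_jobs))
    cur = best = makespan(perm)
    best_perm = perm[:]
    t = t0
    for _ in range(iters):
        i, k = random.sample(range(n_jobs), 2)
        perm[i], perm[k] = perm[k], perm[i]        # propose a swap
        cand = makespan(perm)
        if cand <= cur or random.random() < math.exp(-(cand - cur) / t):
            cur = cand                             # accept (maybe uphill)
            if cur < best:
                best, best_perm = cur, perm[:]
        else:
            perm[i], perm[k] = perm[k], perm[i]    # reject: undo the swap
        t *= cooling
    return best, best_perm

best, best_perm = anneal()
```

The makespan recurrence and the accept/reject loop are the two building blocks that real job-shop and open-shop metaheuristics elaborate on: richer neighbourhood moves and schedule representations replace the simple swap on a single permutation.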

Relevance: 80.00%

Abstract:

The interaction and relationship between global warming and the thermal performance of buildings are dynamic in nature. In order to model and understand this behavior, different approaches, including keeping weather variables unchanged, the morphing approach and the diurnal modelling method, have been used to project and generate future weather data. Among these approaches, various assumptions on the change of solar radiation, air humidity and/or wind characteristics may be adopted. In this paper, an example to illustrate the generation of future weather data for different global warming scenarios in Australia is presented. The sensitivity of building cooling loads to possible changes in the assumed values used in the future weather data generation is investigated. It is shown that with a ±10% change in the proposed future values for solar radiation, air humidity or wind characteristics, the corresponding change in the cooling load of the modeled sample office building at different Australian capital cities would not exceed 6%, 4% and 1.5% respectively. It is also found that with ±10% changes in the proposed weather variables for both the 2070-high future scenario and the current weather scenario, the corresponding change in the cooling loads at different locations may be weaker (up to 2% difference in Hobart for ±10% change in global solar radiation), similar (less than 0.6% difference in Hobart for ±10% change in wind speed), or stronger (up to 1.6% difference in Hobart for ±10% change in relative humidity) in the 2070-high future scenario than in the current weather scenario.
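The morphing style of future-weather generation mentioned above can be sketched in a few lines: shift an hourly series by the projected change in the monthly mean and stretch it about that mean to adjust variability. The synthetic series and scenario numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# One month of synthetic hourly temperatures (deg C): a diurnal cycle
# plus noise, standing in for observed weather-file data.
hours = np.arange(24 * 31)
current = (18.0 + 6.0 * np.sin(2.0 * np.pi * hours / 24.0)
           + rng.normal(0.0, 1.0, hours.size))

d_mean = 2.0   # assumed scenario change in the monthly mean (deg C)
alpha = 1.1    # assumed stretch factor for diurnal variability

# Morph: shift by d_mean and stretch about the current monthly mean.
m = current.mean()
future = current + d_mean + (alpha - 1.0) * (current - m)
```

By construction the morphed series keeps the hour-by-hour weather pattern while its mean rises by d_mean and its standard deviation scales by alpha; sensitivity studies like the ±10% perturbations described above amount to varying d_mean or alpha.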

Relevance: 80.00%

Abstract:

This paper discusses and summarises a recent systematic study on the implication of global warming on air conditioned office buildings in Australia. Four areas are covered, including analysis of historical weather data, generation of future weather data for the impact study of global warming, projection of building performance under various global warming scenarios, and evaluation of various adaptation strategies under 2070 high global warming conditions. Overall, it is found that depending on the assumed future climate scenarios and the location considered, the increase of total building energy use for the sample Australian office building may range from 0.4 to 15.1%. When the increase of annual average outdoor temperature exceeds 2 °C, the risk of overheating will increase significantly. However, the potential overheating problem could be completely eliminated if internal load density is significantly reduced.

Relevance: 80.00%

Abstract:

Several track-before-detection approaches for image-based aircraft detection have recently been examined in an important automated aircraft collision detection application. A particularly popular approach is a two-stage processing paradigm which involves: a morphological spatial filter stage (which aims to emphasize the visual characteristics of targets) followed by a temporal or track filter stage (which aims to emphasize the temporal characteristics of targets). In this paper, we propose new spot detection techniques for this two-stage processing paradigm that fuse together raw and morphological images or fuse together various different morphological images (we call these approaches morphological reinforcement). On the basis of flight test data, the proposed morphological reinforcement operations are shown to offer superior signal-to-noise characteristics when compared to standard spatial filter options (such as the close-minus-open and adaptive contour morphological operations). However, system operating characteristic curves, which examine detection versus false alarm characteristics after both processing stages, illustrate that system performance is very data dependent.
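For readers unfamiliar with the close-minus-open baseline mentioned above, here is a minimal 1-D numpy sketch (the paper works on images; one dimension keeps the idea visible). Grey-scale opening removes small bright features narrower than the structuring element while closing retains them (and conversely for dark features), so closing minus opening responds strongly to small "spot" targets.

```python
import numpy as np

def erode(x, w):
    """Grey-scale erosion of a 1-D signal: sliding-window minimum."""
    pad = w // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([xp[i:i + w].min() for i in range(x.size)])

def dilate(x, w):
    """Grey-scale dilation of a 1-D signal: sliding-window maximum."""
    pad = w // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([xp[i:i + w].max() for i in range(x.size)])

def close_minus_open(x, w=5):
    """CMO spot filter: closing minus opening emphasises features
    narrower than the structuring element, of either polarity."""
    opening = dilate(erode(x, w), w)
    closing = erode(dilate(x, w), w)
    return closing - opening

# Smooth ramp background with a 2-sample-wide "target" spike at index 50.
x = np.linspace(0.0, 1.0, 100)
x[50:52] += 1.0
response = close_minus_open(x)
```

On the smooth background the opening and closing both reproduce the signal, so the response is near zero, while at the narrow spike the response is close to the spike amplitude; that flat-background rejection is what makes this family of filters attractive as the spatial stage of the two-stage paradigm.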

Relevance: 80.00%

Abstract:

Cold-formed steel stud walls are an important component of Light Steel Framing (LSF) building systems used in commercial, industrial and residential buildings. In conventional LSF stud wall systems, thin-walled steel studs are protected from fire by placing one or two layers of plasterboard on both sides, with or without cavity insulation. However, there is very limited data on the structural and thermal performance of these wall systems, and past research showed contradictory results about the benefits of cavity insulation. This research proposed a new LSF stud wall system in which a composite panel made of two plasterboards with insulation between them was used to improve the fire rating of walls. Full scale fire tests were conducted using both conventional steel stud walls, with and without cavity insulation, and the new composite panel system. Eleven full scale load bearing wall specimens were tested to study the thermal and structural performance of the load bearing wall assemblies under standard fire conditions. These tests showed that the use of cavity insulation led to inferior fire performance of walls, while also providing good explanations and supporting test data to overcome the incorrect industry assumptions about cavity insulation. Tests demonstrated that the use of external insulation in a composite panel form enhanced the thermal and structural performance of stud walls and increased their fire resistance rating significantly. This paper presents the details of the full scale fire tests of load-bearing wall assemblies lined with plasterboards and different types of insulation under varying load ratios. Test results, including the temperature and deflection profiles of walls measured during the fire tests, are presented along with their failure modes and failure times.

Relevance: 80.00%

Abstract:

Background: Nurse practitioner education and practice have been guided by generic competency standards in Australia since 2006. Development of specialist competencies has been less structured and there are no formal standards to guide education and continuing professional development for specialty fields. There is limited international research and no Australian research into the development of specialist nurse practitioner competencies. This pilot study aimed to test data collection methods, tools and processes in preparation for a larger national study to investigate specialist competency standards for emergency nurse practitioners. Research into specialist emergency nurse practitioner competencies has not been conducted in Australia. Methods: Mixed methods research was conducted with a sample of experienced emergency nurse practitioners. Deductive analysis of data from a focus group workshop informed the development of a draft specialty competency framework. The framework was subsequently subjected to systematic scrutiny for consensus validation through a two-round Delphi study. Results: The first round of the Delphi study had a 100% response rate; the second round a 75% response rate. The scoring for all items in both rounds was above the 80% cut-off mark, with the lowest mean score being 4.1 (82%) in the first round. Conclusion: The authors collaborated with emergency nurse practitioners to produce preliminary data on the formation of specialty competencies as a first step in developing an Australian framework.
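The 80% cut-off used in the Delphi rounds is straightforward to operationalise. The sketch below, with invented ratings on a 5-point scale (the exact scale and item wording are assumptions, not taken from the study), shows how an item's mean score maps to the percentage consensus reported above: a mean of 4.1 on a 5-point scale is 82%.

```python
# Invented ratings for three draft competency items on a 5-point scale.
ratings = {
    "item_a": [5, 4, 4, 5, 4],
    "item_b": [4, 4, 4, 4, 4],
    "item_c": [5, 5, 4, 4, 3],
}

MAX_SCORE = 5
CUTOFF = 0.80  # 80% consensus threshold, as in the study

def consensus(scores):
    """Mean rating expressed as a fraction of the maximum score."""
    return sum(scores) / (len(scores) * MAX_SCORE)

# Retain only items whose consensus meets the cut-off.
retained = {item: round(consensus(s), 2)
            for item, s in ratings.items() if consensus(s) >= CUTOFF}
```

With these invented numbers all three items clear the threshold; items falling below the cut-off would be revised or dropped before the next Delphi round.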

Relevance: 80.00%

Abstract:

Creative productivity emerges from human interactions (Hartley, 2009, p. 214). In an era when life is lived in rather than with media (Deuze, this issue), this productivity is widely distributed among ephemeral social networks mediated through the internet. Understanding the underlying dynamics of these networks of human interaction is an exciting and challenging task that requires us to come up with new ways of thinking and theorizing. For example, inducting theory from case studies that are designed to show the exceptional dynamics present within single settings can be augmented today by large-scale data generation and collections that provide new analytic opportunities to research the diversity and complexity of human interaction. Large-scale data generation and collection is occurring across a wide range of individuals and organisations. This offers a massive field of analysis which internet companies and research labs in particular are keen to explore. Lazer et al. (2009: 721) argue that such analytic potential is transformational for many if not most research fields, but that the use of such valuable data must neither remain confined to private companies and government agencies nor to a privileged set of academic researchers whose studies cannot be replicated or critiqued. In fact, the analytic capacity to have data of such unprecedented scope and scale available not only requires us to analyse what is and could be done with it and by whom (1), but also what it is doing to us, our cultures and societies (2). Part (1) of such analysis is interested in dependencies and their implications. Part (2) of the enquiry embeds part (1) in a larger context that analyses the long-term, complex dynamics of networked human interaction. From the latter perspective we can treat specific phenomena and the methods used to analyse them as moments of evolution.