733 results for work sampling
Abstract:
Numerous studies have pointed out that journalism in most industrialised societies is undergoing a particularly intensive period of transformation. Yet, while many scholars have studied how news organisations are changing, comparatively few studies have inquired into how journalists themselves are experiencing the changes in their work brought on by technological, economic and cultural transformations. Based on a representative study of Australian journalists, this paper reports on their perceptions of changes in a variety of influences on and aspects of their work over the past five years. It finds that journalists perceive change as most notable in audience interactions and technological innovation, with economic changes somewhat less pronounced. Importantly, they are also very concerned about an increase in sensationalism and a drop in journalistic standards and the credibility of journalism. Results are also compared across different organisational contexts.
Abstract:
The news increasingly provides help, advice, guidance, and information about the management of self and everyday life, in addition to its traditional role in political communication. Yet such forms of journalism are still regularly denigrated in scholarly discussions because they often deviate from normative ideals. This is particularly true of lifestyle journalism, where few studies have examined the impact of commercial influences. Through in-depth interviews with 89 Australian and German lifestyle journalists, this paper explores how journalists experience the lifestyle industries' attempts to shape their daily work, and how they deal with these influences. We find that lifestyle journalists are in a constant struggle over control of editorial content, and that their responses to increasing commercial pressures range from resistance to resignation. This has implications for our understanding of journalism as a whole, broadening it beyond traditional conceptualizations associated with political journalism.
Abstract:
We consider estimating the total load of a constituent from frequent flow data combined with less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and renders the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method can also incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One data set, from the Burdekin River, consists of total suspended sediment (TSS), oxides of nitrogen (NOx) and gauged flow for 1997. The other, from the Tully River, covers the period July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. For the Tully data set, however, incorporating the additional predictive variables, namely the discounted flow and the flow phase (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
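As a concrete illustration of the rating-curve idea described above, the sketch below regresses log-concentration on log-flow plus two extra predictors of the kind the abstract mentions (a rising-limb indicator and an exponentially discounted flow), then integrates predicted concentration against flow to obtain a load. The function names, discounting constant, and daily time step are assumptions of the sketch, not details taken from the paper.

```python
import numpy as np

def discounted_flow(q, alpha=0.95):
    """Exponentially discounted past flow: a crude proxy for constituent
    exhaustion during flood events (alpha is an assumed constant)."""
    d = np.zeros_like(q)
    for t in range(1, len(q)):
        d[t] = alpha * d[t - 1] + (1 - alpha) * q[t - 1]
    return d

def rising_limb(q):
    """1 while the hydrograph is rising, 0 while it is falling."""
    return (np.diff(q, prepend=q[0]) > 0).astype(float)

def design_matrix(q):
    """Predictors computed on the full (frequent) flow record."""
    return np.column_stack([np.ones_like(q), np.log(q),
                            np.log1p(discounted_flow(q)), rising_limb(q)])

def fit_rating_curve(q, c_sampled, sampled_idx):
    """Fit log C against the predictors at the (sparser) times when
    concentration was actually sampled."""
    X = design_matrix(q)[sampled_idx]
    beta, *_ = np.linalg.lstsq(X, np.log(c_sampled), rcond=None)
    return beta

def estimate_load(q, beta, dt=86400.0):
    """Predict concentration at every flow observation and integrate
    C * Q * dt; ignores the back-transform bias correction a real
    implementation would apply."""
    return float(np.sum(np.exp(design_matrix(q) @ beta) * q * dt))
```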
Abstract:
Sampling strategies based on the idea of ranked set sampling (RSS) are developed to increase efficiency and therefore reduce the cost of sampling in fishery research. RSS incorporates information on concomitant variables that are correlated with the variable of interest into the selection of samples. For example, estimating a monitoring survey abundance index would be more efficient if the sampling sites were selected based on information from previous surveys or catch rates of the fishery. We use two practical fishery examples to demonstrate the approach: site selection for a fishery-independent monitoring survey in the Australian northern prawn fishery (NPF), and fish age prediction by simple linear regression modelling for a short-lived tropical clupeoid. The relative efficiencies of the new designs were derived analytically and compared with traditional simple random sampling (SRS). Optimal sampling schemes were assessed under different optimality criteria. For the NPF monitoring survey, the efficiency, in terms of the variance or mean squared error of the estimated mean abundance index, ranged from 114 to 199% relative to SRS. In the case of a fish ageing study of Tenualosa ilisha in Bangladesh, the efficiency of age prediction from fish body weight reached 140%.
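A minimal sketch of the balanced RSS idea, assuming a cheap concomitant x correlated with an expensive-to-measure y; the toy population, correlation, set size, and cycle count below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_mean(y, x, set_size=3, cycles=10):
    """Balanced ranked set sampling: rank each randomly drawn set by the
    cheap concomitant x, but measure y on only one unit per set, cycling
    through the rank positions so every rank is used equally often."""
    measured = []
    for _ in range(cycles):
        for rank in range(set_size):
            idx = rng.choice(len(y), size=set_size, replace=False)
            order = idx[np.argsort(x[idx])]   # rank the set by x
            measured.append(y[order[rank]])   # measure only this unit
    return np.mean(measured)

# Toy population where x and y are correlated, as in the survey example
x = rng.normal(size=50_000)
y = 2.0 * x + rng.normal(scale=0.5, size=50_000)
n = 30                                        # same measurement budget
print(rss_mean(y, x))                         # RSS estimate (30 measurements)
print(np.mean(rng.choice(y, size=n, replace=False)))  # SRS estimate
```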
Abstract:
In treatment comparison experiments, the treatment responses are often correlated with some concomitant variables which can be measured before or at the beginning of the experiments. In this article, we propose schemes for the assignment of experimental units that may greatly improve the efficiency of the comparison in such situations. The proposed schemes are based on general ranked set sampling. The relative efficiency and cost-effectiveness of the proposed schemes are studied and compared. It is found that some proposed schemes are always more efficient than the traditional simple random assignment scheme when the total cost is the same. Numerical studies show promising results using the proposed schemes.
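One simple flavour of such an assignment scheme, sketched under the assumption of two treatments and a single concomitant (this is illustrative, not the paper's exact proposal): every arm receives units drawn from the same ranked positions, so the arms are balanced on the concomitant by design.

```python
import numpy as np

rng = np.random.default_rng(1)

def rss_assignment(x, set_size=3, cycles=5, n_treat=2):
    """Illustrative RSS-based assignment: for each rank position, every
    treatment arm gets the unit holding that rank in its own freshly
    drawn ranked set, so all arms see the same mix of ranked units."""
    arms = [[] for _ in range(n_treat)]
    for _ in range(cycles):
        for rank in range(set_size):
            for arm in arms:
                idx = rng.choice(len(x), size=set_size, replace=False)
                order = idx[np.argsort(x[idx])]   # rank the set by x
                arm.append(order[rank])
    return arms

x = rng.normal(size=10_000)            # concomitant measured on every unit
a, b = rss_assignment(x)
print(np.mean(x[a]), np.mean(x[b]))    # arms closely balanced on x
```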
Abstract:
Mapping and evaluating a student's progress on placement is a core element of social work education, but scant attention has been paid to how student learning and performance can be effectively developed and assessed. This paper outlines a project undertaken by the Combined Schools of Social Work to develop a common learning and assessment tool that is being used by all social work schools in Victoria. The paper describes how the Common Assessment Tool (CAT) was developed, drawing on the Australian Association of Social Work Practice Standards and leading to seven key learning areas that form the basis of the assessment of a student's readiness for practice. An evaluation of the usefulness of the CAT was completed by field educators, liaison staff, and students, and confirmed that the CAT was a useful framework for evaluating students' learning goals. The feedback also identified a number of problematic features, which were addressed in a revised CAT and rating scale.
Abstract:
Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971) considered the optimal set size for ranked set sampling (RSS) with fixed operational costs. This framework can be very useful in practice for determining whether RSS is beneficial and for obtaining the optimal set size that minimizes the variance of the population estimator for a fixed total cost. In this article, we propose a scheme of general RSS in which more than one observation can be taken from each ranked set. This is shown to be more cost-effective in some cases when the cost of ranking is not so small. Using the example in Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971), we demonstrate that taking two or more observations from each set can be more beneficial, even at the optimal set size of the RSS design.
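A rough Monte Carlo sketch of the cost trade-off, with assumed ranking and measurement costs (none of the numbers below come from the article): when ranking a set is expensive, measuring two units per ranked set buys more measurements for the same budget.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
y = x + rng.normal(scale=0.5, size=100_000)   # toy population

def general_rss_mean(n_per_set, set_size=4, budget=60.0,
                     c_rank=3.0, c_meas=1.0):
    """Spend a fixed budget ranking sets and measuring n_per_set units
    per set; ranks are cycled so the design stays roughly balanced.
    The costs are assumptions for the sketch."""
    n_sets = int(budget // (c_rank + c_meas * n_per_set))
    vals, next_rank = [], 0
    for _ in range(n_sets):
        idx = rng.integers(0, len(y), size=set_size)  # with replacement, for speed
        order = idx[np.argsort(x[idx])]
        for _ in range(n_per_set):
            vals.append(y[order[next_rank]])
            next_rank = (next_rank + 1) % set_size
    return np.mean(vals)

for m in (1, 2):   # variance of the estimator at equal total cost
    print(m, np.var([general_rss_mean(m) for _ in range(1000)]))
```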
Abstract:
A new technique called the reef resource inventory (RRI) was developed to map the distribution and abundance of benthos and substratum on reefs. The rapid field sampling technique uses divers to visually estimate the percentage cover of categories of benthos and substratum along 2 x 20 m plotless strip-transects positioned randomly over the tops, and systematically along the edges, of reefs. The purpose of this study was to compare the relative sampling accuracy of the RRI against the line intercept transect technique (LIT), an international standard for sampling reef benthos and substratum. Analysis of paired sampling with LIT and RRI at 51 sites indicated that sampling accuracy was not different (P > 0.05) for 8 of the 12 benthos and substratum categories used in the study. Significant differences were attributed to small-scale patchiness and cryptic coloration of some benthos; effects associated with sampling a sparsely distributed animal along a line versus an area; difficulties in discriminating some of the benthos and substratum categories; and differences in visual acuity, since LIT measurements were taken by divers close to the seabed whereas RRI measurements were taken by divers higher in the water column. The relative cost efficiency of the RRI technique was at least three times that of LIT for all benthos and substratum categories, and as much as 10 times higher for two categories. These results suggest that the RRI can be used to obtain reliable and accurate estimates of the relative abundance of broad categories of reef benthos and substratum.
Abstract:
This article is motivated by a lung cancer study involving a regression model in which the response variable is too expensive to measure but the predictor variable can be measured easily at relatively negligible cost. This situation occurs quite often in medical studies, quantitative genetics, and ecological and environmental studies. Using the idea of ranked-set sampling (RSS), we develop sampling strategies that can reduce cost and increase the efficiency of regression analysis in this situation. The developed method is applied retrospectively to a lung cancer study whose interest is the association between smoking status and three biomarkers: polyphenol DNA adducts, micronuclei, and sister chromatid exchanges. Optimal sampling schemes with different optimality criteria, such as A-, D-, and integrated mean square error (IMSE)-optimality, are considered in the application. With set size 10 in RSS, the improvement of the optimal schemes over simple random sampling (SRS) is substantial. For instance, under the optimal scheme with IMSE-optimality, the IMSEs of the estimated regression functions for the three biomarkers are reduced to about half of those incurred using SRS.
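A toy version of the design question, with made-up data and a deliberately simple rank-based rule standing in for the paper's optimal schemes (selecting extreme ranks has a D-optimal flavour for estimating a slope; every name and number below is an assumption of the sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

x_all = rng.normal(size=500)        # cheap predictor, known for every unit

def measure_y(x):
    """Stand-in for the expensive response assay (toy linear model)."""
    return 1.0 + 0.8 * x + rng.normal(scale=0.6, size=np.shape(x))

def slope_se(x, y):
    """Standard error of the fitted slope from simple linear regression."""
    resid = y - np.polyval(np.polyfit(x, y, 1), x)
    return np.sqrt(np.var(resid, ddof=2) / np.sum((x - x.mean()) ** 2))

n = 20                               # budget: measure y on 20 units only
srs = rng.choice(len(x_all), size=n, replace=False)      # simple random sample
order = np.argsort(x_all)            # rank units by the cheap predictor
ext = np.concatenate([order[:n // 2], order[-n // 2:]])  # extreme ranks

print(slope_se(x_all[srs], measure_y(x_all[srs])))  # SRS
print(slope_se(x_all[ext], measure_y(x_all[ext])))  # rank-based, smaller SE
```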
Abstract:
The efficiency with which a small beam trawl (1 x 0.5 m mouth) sampled postlarvae and juveniles of the tiger prawns Penaeus esculentus and P. semisulcatus at night was estimated in 3 tropical seagrass communities (dominated by Thalassia hemprichii, Syringodium isoetifolium and Enhalus acoroides, respectively) in the shallow waters of the Gulf of Carpentaria in northern Australia. An area of seagrass (40 x 3 m) was enclosed by a net and the beam trawl was repeatedly hand-hauled over the substrate. Net efficiency (q) was calculated using 4 methods: the unweighted Leslie, weighted Leslie, DeLury and maximum-likelihood (ML) methods. Maximum-likelihood is the preferred method for estimating efficiency because it makes the fewest assumptions and is not affected by zero catches. The major difference in net efficiencies was between postlarvae (mean ML q +/- 95% confidence limits = 0.66 +/- 0.16) and juveniles of both species (mean q for juveniles in water less than or equal to 1.0 m deep = 0.47 +/- 0.05); i.e. the beam trawl was more efficient at capturing postlarvae than juveniles. There was little difference in net efficiency for P. esculentus between seagrass types (T. hemprichii versus S. isoetifolium), even though the biomass and morphologies of seagrass in these communities differed greatly (biomasses were 54 and 204 g m(-2), respectively). The efficiency of the net appeared to be the same for juveniles of the 2 species in shallow water, but was lower for juvenile P. semisulcatus at high tide when the water was deeper (1.6 to 1.9 m) (0.35 +/- 0.08). The lower efficiency near the time of high tide is possibly because the prawns are more active at high tide than at low tide, and can also escape above the net. Factors affecting net efficiency and alternative methods of estimating net efficiency are discussed.
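Of the four estimators named, the unweighted Leslie method reduces to a straight-line fit of catch against prior cumulative catch; a minimal sketch with made-up catches (not the paper's data) is below. The ML method the authors prefer requires a likelihood maximisation and is not shown.

```python
import numpy as np

# Made-up depletion series: catch from each repeated haul over the enclosure
catches = np.array([31.0, 19.0, 14.0, 9.0, 7.0])
K = np.concatenate([[0.0], np.cumsum(catches)[:-1]])  # cumulative prior catch

# Leslie model: C_i = q*N0 - q*K_i, so a straight line gives both parameters
slope, intercept = np.polyfit(K, catches, 1)
q = -slope            # net efficiency: fraction of prawns caught per haul
N0 = intercept / q    # estimated initial abundance in the enclosure
print(q, N0)
```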
Abstract:
Traditional comparisons between the capture efficiency of sampling devices have generally looked at the absolute differences between devices. We recommend that the signal-to-noise ratio be used when comparing the capture efficiency of benthic sampling devices. Using the signal-to-noise ratio rather than the absolute difference has several advantages: the variance is taken into account when determining how important the difference is; the hypothesis and minimum detectable difference can be made identical for all taxa; it is independent of the units used for measurement; and the sample-size calculation is independent of the variance. This new technique is illustrated by comparing the capture efficiency of a 0.05 m(2) van Veen grab and an airlift suction device, using samples taken from Heron and One Tree lagoons, Australia.
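The two quantities involved are standard; a short sketch, using the usual two-sample pooled-SD definition of the signal-to-noise ratio and a normal-approximation sample-size formula in which the raw variance has indeed cancelled (the numbers are illustrative):

```python
import numpy as np
from statistics import NormalDist

def snr(a, b):
    """Signal-to-noise ratio: mean difference over the pooled SD."""
    sp = np.sqrt(((len(a) - 1) * np.var(a, ddof=1) +
                  (len(b) - 1) * np.var(b, ddof=1)) / (len(a) + len(b) - 2))
    return (np.mean(a) - np.mean(b)) / sp

def n_per_group(snr_min, alpha=0.05, power=0.80):
    """Per-group sample size to detect a minimum SNR with a two-sided
    z-test; the variance does not appear, only the SNR."""
    z = NormalDist().inv_cdf
    return int(np.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / snr_min ** 2))

print(n_per_group(0.5))   # about 63 samples per device to detect SNR = 0.5
```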
Abstract:
This thesis evaluates a chronic condition self-management program for Aboriginal and Torres Strait Islander people in urban south-east Queensland who have, or are at risk of, cardiovascular disease. Outcomes showed short-term improvements in some anthropometric measures, which may signal a longer-term trend toward improvement in other anthropometric indicators. The program was of particular benefit for participants who had several social and emotional wellbeing conditions. The use of an Aboriginal and Torres Strait Islander conceptual framework was critical to undertaking culturally competent quantitative research in this project.
Abstract:
The current study explored the perceptions of direct care staff working in Australian residential aged care facilities (RACFs) regarding the organizational barriers that they believe prevent them from facilitating decision making for individuals with dementia. Normalization process theory (NPT) was used to interpret the findings to understand these barriers in a broader context. The qualitative study involved semi-structured interviews (N = 41) and focus groups (N = 8) with 80 direct care staff members of all levels working in Australian RACFs. Data collection and analysis were conducted in parallel and followed a systematic, inductive approach in line with grounded theory. The perceptions of participants regarding the organizational barriers to facilitating decision making for individuals with dementia can be described by the core category, Working Within the System, and three sub-themes: (a) finding time, (b) competing rights, and (c) not knowing. Examining the views of direct care staff through the lens of NPT allows possible areas for improvement to be identified at an organizational level and the perceived barriers to be understood in the context of promoting normalization of decision making for individuals with dementia.
Abstract:
Between-subject and within-subject variability is ubiquitous in biology and physiology, and understanding and dealing with it is one of the biggest challenges in medicine. At the same time, it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model with multiple parameter sets calibrated against experimental data. However, finding such sets within the high-dimensional parameter space of a complex electrophysiological model is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block, through an in-depth investigation using the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC, and that it produces responses similar to LHS when making out-of-sample predictions in the presence of a simulated drug block.
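For reference, the LHS baseline is easy to sketch: below, an LHS design over a 2-parameter box feeds a toy model, and parameter sets whose output falls inside an acceptable range are kept as the population of models. The toy model, bounds, and acceptance window are assumptions of the sketch, not the Beeler-Reuter model or the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(4)

def latin_hypercube(n, d):
    """One LHS design in [0, 1)^d: each axis is cut into n equal strata
    and every stratum is hit exactly once."""
    strata = (np.arange(n) + rng.uniform(size=(d, n))) / n
    return np.column_stack([rng.permutation(row) for row in strata])

lo, hi = np.array([0.5, 0.1]), np.array([2.0, 1.0])   # assumed parameter box
theta = lo + latin_hypercube(2000, 2) * (hi - lo)

output = theta[:, 0] * np.exp(-theta[:, 1])   # stand-in for a model biomarker
keep = (output > 0.4) & (output < 1.2)        # calibration window (assumed)
pom = theta[keep]                             # the "population of models"
print(len(pom), "parameter sets accepted")
```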