592 results for stylistique comparée
Abstract:
How is contemporary culture 'framed' - understood, promoted, dissected and defended - in the new approaches being employed in university education today? How do these approaches compare with those seen in the public policy process? What are the implications of these differences for future directions in theory, education, activism and policy? Framing Culture looks at cultural and media studies, which are rapidly growing fields through which students are introduced to contemporary cultural industries such as television, film and video. It compares these approaches with those used to frame public policy and finds a striking lack of correspondence between them. Issues such as Australian content on commercial television and in advertising, new technologies and new media, and violence in the media all highlight the gap between contemporary cultural theories and the way culture and communications are debated in public policy. The reasons for this gap must be investigated before closer relations can be established. Framing Culture brings together cultural studies and policy studies in a lively and innovative way. It suggests avenues for cultural activism that have been neglected in cultural theory and practice, and it will provoke debates which are long overdue.
Abstract:
Aims: Influenza is commonly spread by infectious aerosols; however, detection of viruses in aerosols is not sensitive enough to confirm the characteristics of virus aerosols. The aim of this study was to develop an assay for respiratory viruses sufficiently sensitive to be used in epidemiological studies. Method: A two-step, nested real-time PCR assay was developed for MS2 bacteriophage, and for influenza A and B, parainfluenza 1 and human respiratory syncytial virus. Outer primer pairs were designed to nest each existing real-time PCR assay. The sensitivities of the nested real-time PCR assays were compared to those of existing real-time PCR assays. Both assays were applied in an aerosol study to compare their detection limits in air samples. Conclusions: The nested real-time PCR assays were found to be several logs more sensitive than the real-time PCR assays, with lower levels of virus detected at lower Ct values. The nested real-time PCR assay successfully detected MS2 in air samples, whereas the real-time assay did not. Significance and Impact of the Study: The sensitive assays for respiratory viruses will permit further research using air samples from naturally generated virus aerosols. This will inform current knowledge regarding the risks associated with the spread of viruses through aerosol transmission.
Abstract:
The purpose of this study was to characterise the functional outcome of 12 transfemoral amputees fitted with osseointegrated fixation using temporal gait characteristics. The objectives were (A) to present the cadence and the durations of the gait cycle, support and swing phases, with an emphasis on the stride-to-stride and participant-to-participant variability, and (B) to compare these temporal variables with normative data extracted from the literature focusing on transfemoral amputees fitted with a socket and on able-bodied participants. The temporal variables were extracted from the load applied on the residuum during straight level walking, which was collected at 200 Hz by a transducer. A total of 613 strides were assessed. The cadence (46±4 strides/min) and the durations of the gait cycle (1.29±0.11 s), support phase (0.73±0.07 s, 57±3% of GC) and swing phase (0.56±0.07 s, 43±3% of GC) of the participants were, respectively, 2% quicker, 3% shorter, 6% shorter and 1% longer than those of transfemoral amputees using a socket, and 11% slower, 9% longer, 6% longer and 13% longer than those of able-bodied participants. All combined, the results indicated that the fitting of an osseointegrated fixation has enabled this group of amputees to restore their locomotion to a highly functional level. Further longitudinal and cross-sectional studies would be required to confirm these outcomes. Nonetheless, the data presented can be used as a benchmark for future comparisons. They can also be used as input to generic algorithms that use templates of loading patterns to recognise activities of daily living and to detect falls.
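To illustrate how such temporal variables can be derived from a load trace, here is a minimal Python sketch, not the study's implementation: it assumes a 200 Hz vertical load signal and a simple, hypothetical load threshold to separate support from swing phases.

```python
import numpy as np

def gait_temporal_variables(load, fs=200.0, threshold=50.0):
    """Estimate cadence and support/swing durations from a vertical load
    signal (N) sampled at fs Hz. Threshold-based detection: the limb is
    assumed to be in support whenever the load exceeds `threshold`."""
    in_support = load > threshold
    # Heel strikes: transitions from swing (False) to support (True).
    strikes = np.flatnonzero(~in_support[:-1] & in_support[1:]) + 1
    # Toe-offs: transitions from support (True) to swing (False).
    toe_offs = np.flatnonzero(in_support[:-1] & ~in_support[1:]) + 1

    cycles = np.diff(strikes) / fs                 # gait-cycle durations (s)
    cadence = 60.0 / cycles.mean()                 # strides per minute
    # Support phase: heel strike to the next toe-off in the same stride.
    support = np.array([(toe_offs[toe_offs > hs][0] - hs) / fs
                        for hs in strikes[:-1]])
    swing = cycles - support
    return cadence, cycles, support, swing

# Synthetic example: 10 s of a crude square-wave "loading" pattern with a
# 1.3 s cycle and a 0.73 s support phase (values chosen to mimic the abstract).
t = np.arange(0.0, 10.0, 1.0 / 200.0)
load = 400.0 * ((t % 1.3) < 0.73)
cadence, cycles, support, swing = gait_temporal_variables(load)
print(f"cadence {cadence:.1f} strides/min, "
      f"support {support.mean():.2f} s "
      f"({100 * support.mean() / cycles.mean():.0f}% of GC)")
```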
Abstract:
The broad definition of sustainable development at the early stage of its introduction has caused confusion and hesitation among local authorities and planning professionals. The main difficulties are experienced in employing loosely-defined principles of sustainable development when setting policies and goals. The question of how this theory/rhetoric-practice gap could be filled is the theme of this study. One of the sustainability accounting approaches widely employed by governmental organisations, the triple bottom line, and the applicability of this approach to sustainable urban development policies will be examined. Incorporating triple bottom line considerations with environmental impact assessment techniques, a framework for a GIS-based decision support system that helps decision-makers select policy options according to their economic, environmental and social impacts will be introduced. In order to embrace sustainable urban development policy considerations, the relationship between urban form, travel patterns and socio-economic attributes should be clarified. This clarification, associated with other input decision support systems, will picture the holistic state of the urban settings in terms of sustainability. In this study, a grid-based indexing methodology will be employed to visualise the degree of compatibility of selected scenarios with the designated sustainable urban future. In addition, this tool will provide valuable knowledge about the spatial dimension of sustainable development. It will also give fine details about the possible impacts of urban development proposals by employing disaggregated spatial data analysis (e.g. land-use, transportation, urban services, population density, pollution, etc.). The visualisation capacity of this tool will help decision-makers and other stakeholders compare and select alternatives for future urban development.
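As a toy illustration of grid-based indexing of this general kind (not the study's methodology; the grid size, indicators, weights and target below are all hypothetical), a composite sustainability index can be computed per grid cell and compared against a designated target:

```python
import numpy as np

# Hypothetical 50 x 50 grid over a study area, with three normalised
# indicators in [0, 1]: land-use mix, transport accessibility and
# inverse pollution load (random placeholders for real spatial data).
rng = np.random.default_rng(0)
land_use_mix = rng.random((50, 50))
transp_access = rng.random((50, 50))
low_pollution = rng.random((50, 50))

# Illustrative weights for a composite per-cell sustainability index.
index = 0.40 * land_use_mix + 0.35 * transp_access + 0.25 * low_pollution

# Degree of compatibility with a designated target level, cell by cell.
target = 0.7
compatibility = 1.0 - np.abs(index - target)   # 1 = fully compatible
print(f"share of cells within 0.1 of target: "
      f"{(np.abs(index - target) < 0.1).mean():.0%}")
```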
Abstract:
The anisotropic pore structure and elasticity of cancellous bone cause wave speeds and attenuation in cancellous bone to vary with angle. Previously published predictions of the variation in wave speed with angle are reviewed. Predictions that allow tortuosity to be angle dependent but assume isotropic elasticity compare well with available data on wave speeds at large angles, but less well for small angles near the normal to the trabeculae. Claims for predictions that only include angle-dependence in elasticity are found to be misleading. Data obtained at audio frequencies in air-filled bone replicas are used to derive an empirical expression for the angle- and porosity-dependence of tortuosity. Predictions that allow for angle-dependent tortuosity, angle-dependent elasticity, or both are compared with existing data for all angles and porosities.
Abstract:
Background: The incidence of obesity is increasing; this is of major concern, as obesity is associated with cardiovascular disease, stroke, type 2 diabetes, respiratory tract disease, and cancer. Objectives/methods: This evaluation is of a Phase II clinical trial with tesofensine in obese subjects. Results: After 26 weeks, tesofensine caused a significant weight loss, and may have a higher maximal ability to reduce weight than the presently available anti-obesity agents. However, tesofensine also increased blood pressure and heart rate, and may increase psychiatric disorders. Conclusions: It is encouraging that tesofensine 0.5 mg may cause almost double the weight loss observed with sibutramine or rimonabant. As tesofensine and sibutramine have similar pharmacological profiles, it would be of interest to compare the weight loss with tesofensine in a head-to-head clinical trial with sibutramine, to properly assess their comparative potency. Also, as tesofensine 0.5 mg increases heart rate, as well as increasing the incidence of adverse effects such as nausea, dry mouth, flatulence, insomnia, and depressed mood, its tolerability needs to be further evaluated in large Phase III clinical trials.
Abstract:
Background Patella resurfacing in total knee arthroplasty is a contentious issue. The literature suggests that resurfacing of the patella is based on surgeon preference, and little is known about the role and timing of resurfacing and how this affects outcomes. Methods We analyzed 134,799 total knee arthroplasties using data from the Australian Orthopaedic Association National Joint Replacement Registry. Hazards ratios (HRs) were used to compare rates of early revision between patella resurfacing at the primary procedure (the resurfacing group, R) and primary arthroplasty without resurfacing (no-resurfacing group, NR). We also analyzed the outcomes of NR that were revised for isolated patella addition. Results At 5 years, the R group showed a lower revision rate than the NR group: cumulative per cent revision (CPR) 3.1% and 4.0%, respectively (HR = 0.75, p < 0.001). Revisions for patellofemoral pain were more common in the NR group (17%) than in the R group (1%), and “patella only” revisions were more common in the NR group (29%) than in the R group (6%). Non-resurfaced knees revised for isolated patella addition had a higher revision rate than patella resurfacing at the primary procedure, with a 4-year CPR of 15% and 2.8%, respectively (HR = 4.1, p < 0.001). Interpretation Rates of early revision of primary total knees were higher when the patella was not resurfaced, and suggest that surgeons may be inclined to resurface later if there is patellofemoral pain. However, 15% of non-resurfaced knees revised for patella addition are re-revised by 4 years. Our results suggest an early beneficial outcome for patella resurfacing at primary arthroplasty based on revision rates up to 5 years.
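Registry cumulative per cent revision (CPR) figures of this kind are typically Kaplan-Meier-type survival estimates. The following is a minimal sketch on synthetic data, not the registry's actual analysis; the cohort size and revision rates are invented for illustration.

```python
import numpy as np

def cumulative_percent_revision(time, revised):
    """Kaplan-Meier cumulative revision estimate.
    time    : follow-up in years (to revision, or to censoring)
    revised : 1.0 if the knee was revised at `time`, 0.0 if censored."""
    order = np.argsort(time)
    time, revised = time[order], revised[order]
    at_risk = len(time) - np.arange(len(time))   # knees still at risk
    surv = np.cumprod(1.0 - revised / at_risk)   # survival (unrevised)
    return time, 100.0 * (1.0 - surv)            # CPR in per cent

# Synthetic cohort (illustrative only, not registry data).
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 5.0, 2000)                           # follow-up, years
rev = (rng.random(2000) < 0.04 * t / 5.0).astype(float)   # sparse revisions
times, cpr = cumulative_percent_revision(t, rev)
print(f"estimated CPR at {times[-1]:.1f} years: {cpr[-1]:.1f}%")
```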
Abstract:
The ability to forecast machinery failure is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models for forecasting machinery health based on condition data. Although these models have aided the advancement of the discipline, they have made only a limited contribution to developing an effective machinery health prognostic system. The literature review indicates that there is not yet a prognostic model that directly models and fully utilises suspended condition histories (which are very common in practice, since organisations rarely allow their assets to run to failure); that effectively integrates population characteristics into prognostics for longer-range prediction in a probabilistic sense; that deduces the non-linear relationship between measured condition data and actual asset health; and that involves minimal assumptions and requirements. This work presents a novel approach to addressing the above-mentioned challenges. The proposed model consists of a feed-forward neural network, the training targets of which are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density estimator. The adapted Kaplan-Meier estimator is able to model the actual survival status of individual failed units and estimate the survival probability of individual suspended units. The degradation-based failure probability density estimator, on the other hand, extracts population characteristics and computes conditional reliability from available condition histories instead of from reliability data. The estimated survival probability and the relevant condition histories are respectively presented as “training target” and “training input” to the neural network. The trained network is capable of estimating the future survival curve of a unit when a series of condition indices is input. Although the concept proposed may be applied to the prognosis of various machine components, rolling element bearings were chosen as the research object because rolling element bearing failure is one of the foremost causes of machinery breakdowns. Computer-simulated and industry case study data were used to compare the prognostic performance of the proposed model and four control models, namely: two feed-forward neural networks with the same training function and structure as the proposed model, but which neglect suspended histories; a time series prediction recurrent neural network; and a traditional Weibull distribution model. The results support the assertion that the proposed model performs better than the other four models and that it produces adaptive prediction outputs with a useful representation of survival probabilities. This work presents a compelling concept for non-parametric data-driven prognosis, and for utilising available asset condition information more fully and accurately. It demonstrates that machinery health can indeed be forecasted. The proposed prognostic technique, together with ongoing advances in sensors and data-fusion techniques, and increasingly comprehensive databases of asset condition data, holds the promise for increased asset availability, maintenance cost effectiveness, operational safety and – ultimately – organisation competitiveness.
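A minimal sketch of the core training-target idea, under simplifying assumptions: synthetic failed and suspended histories, a plain Kaplan-Meier estimator standing in for the thesis's adapted variant and degradation-based estimator, and scikit-learn's MLPRegressor as the feed-forward network. All data and parameter values are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic histories for 40 bearings: lifetime in hours, and whether the
# unit actually failed (True) or was suspended, i.e. censored (False).
lifetime = rng.uniform(100.0, 1000.0, 40)
failed = rng.random(40) < 0.6

# Kaplan-Meier survival estimate S(t) using failed AND suspended units:
# suspended units leave the risk set without contributing a failure.
order = np.argsort(lifetime)
t_sorted = lifetime[order]
f_sorted = failed[order].astype(float)
at_risk = len(t_sorted) - np.arange(len(t_sorted))
surv = np.cumprod(1.0 - f_sorted / at_risk)

def survival_at(t):
    """Evaluate the step-function Kaplan-Meier estimate at times t."""
    idx = np.searchsorted(t_sorted, t, side="right") - 1
    return np.where(idx < 0, 1.0, surv[np.clip(idx, 0, None)])

# Training pairs: a fake monotone degradation feature (standing in for a
# measured condition index) -> estimated survival probability at that age.
ages = rng.uniform(0.0, 1000.0, 500)
condition = 0.002 * ages + 0.1 * rng.standard_normal(500)
targets = survival_at(ages)

# Feed-forward network trained on the survival-probability targets.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(condition.reshape(-1, 1), targets)
print("predicted survival at condition index 1.2:",
      round(float(net.predict([[1.2]])[0]), 2))
```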
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new and novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
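For concreteness, here is a minimal numpy sketch of the plain Lanczos approximation to f(A)b with f(t) = t^(-1/2): build an orthonormal Krylov basis V_m with tridiagonal projection T_m, then take ||b|| V_m f(T_m) e_1. This is illustrative only; it omits the reorthogonalisation, stopping criteria and restarting considered in the thesis, and uses a small dense matrix so the result can be checked directly.

```python
import numpy as np

def lanczos_fAb(A, b, m, f):
    """m-step Lanczos approximation of f(A) b for symmetric A:
    f(A) b ~= ||b|| * V_m f(T_m) e_1 (no reorthogonalisation)."""
    n = len(b)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)        # f(T_m) via eigendecomposition
    fT_e1 = evecs @ (f(evals) * evecs[0])   # f(T_m) e_1
    return np.linalg.norm(b) * (V @ fT_e1)

# Example: approximate A^(-1/2) b for an SPD matrix (shifted 1D Laplacian).
n = 500
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.eye(n)
b = np.ones(n)
x = lanczos_fAb(A, b, m=40, f=lambda t: t ** -0.5)

# Check against the dense reference computation.
ev, Q = np.linalg.eigh(A)
exact = Q @ ((ev ** -0.5) * (Q.T @ b))
print("relative error:", np.linalg.norm(x - exact) / np.linalg.norm(exact))
```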
Abstract:
Information System (IS) success may be the most arguable and important dependent variable in the IS field. The purpose of the present study is to address IS success by empirically assessing and comparing DeLone and McLean’s (1992) and Gable et al.’s (2008) models of IS success in the context of Australian universities. The two models have some commonalities and several important distinctions. Both models integrate and interrelate multiple dimensions of IS success. Hence, it would be useful to compare the models to see which is superior, as it is not clear how IS researchers should respond to this controversy.
Abstract:
Economists rely heavily on self-reported measures to examine the relationship between income and health. We directly compare survey responses for a self-reported measure of health that is commonly used in nationally representative surveys with objective measures of the same health condition. We focus on hypertension. We find no evidence of an income/health gradient using self-reported hypertension, but a sizeable gradient when using objectively measured hypertension. We also find that the probability of false-negative reporting is significantly income-graded. Our results suggest that using commonly available self-reported chronic health measures might underestimate true income-related inequalities in health.
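The flavour of this comparison can be shown on synthetic data in which true hypertension prevalence falls with income while false-negative self-reporting rises with income (all numbers hypothetical), which flattens the self-reported gradient:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
income = rng.lognormal(10.0, 0.6, n)
quintile = np.digitize(income, np.quantile(income, [0.2, 0.4, 0.6, 0.8]))

# Objective hypertension: true prevalence falls with income (a gradient).
p_true = 0.35 - 0.04 * quintile
hyper_true = rng.random(n) < p_true

# Self-report: false negatives are more likely at higher incomes
# (illustrative assumption), flattening the measured gradient.
p_false_neg = 0.25 + 0.05 * quintile
hyper_self = hyper_true & (rng.random(n) > p_false_neg)

for q in range(5):
    m = quintile == q
    print(f"Q{q + 1}: objective {hyper_true[m].mean():.2%}  "
          f"self-reported {hyper_self[m].mean():.2%}")
```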
Abstract:
Introduction: Some types of antimicrobial-coated central venous catheters (A-CVC) have been shown to be cost-effective in preventing catheter-related bloodstream infection (CR-BSI). However, not all types have been evaluated, and there are concerns over the quality and usefulness of these earlier studies. There is uncertainty amongst clinicians over which, if any, antimicrobial-coated central venous catheters to use. We re-evaluated the cost-effectiveness of all commercially available antimicrobial-coated central venous catheters for prevention of catheter-related bloodstream infection in adult intensive care unit (ICU) patients. Methods: We used a Markov decision model to compare the cost-effectiveness of antimicrobial-coated central venous catheters relative to uncoated catheters. Four catheter types were evaluated: minocycline and rifampicin (MR)-coated catheters; silver, platinum and carbon (SPC)-impregnated catheters; and two chlorhexidine and silver sulfadiazine-coated catheters, one coated on the external surface (CH/SSD (ext)) and the other coated on both surfaces (CH/SSD (int/ext)). The incremental cost per quality-adjusted life-year gained and the expected net monetary benefits were estimated for each. Uncertainty arising from data estimates, data quality and heterogeneity was explored in sensitivity analyses. Results: The baseline analysis, with no consideration of uncertainty, indicated all four types of antimicrobial-coated central venous catheters were cost-saving relative to uncoated catheters. Minocycline and rifampicin-coated catheters prevented 15 infections per 1,000 catheters and generated the greatest health benefits, 1.6 quality-adjusted life-years, and cost-savings, AUD $130,289. After considering uncertainty in the current evidence, the minocycline and rifampicin-coated catheters returned the highest incremental monetary net benefits of $948 per catheter; but there was a 62% probability of error in this conclusion. Although the minocycline and rifampicin-coated catheters had the highest monetary net benefits across multiple scenarios, the decision was always associated with high uncertainty. Conclusions: Current evidence suggests that the cost-effectiveness of using antimicrobial-coated central venous catheters within the ICU is highly uncertain. Policies to prevent catheter-related bloodstream infection amongst ICU patients should consider the cost-effectiveness of competing interventions in the light of this uncertainty. Decision makers would do well to consider the current gaps in knowledge and the complexity of producing good quality evidence in this area.
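A minimal sketch of a Markov cohort model of this general kind, with entirely hypothetical transition probabilities, costs and utilities (not the study's parameters), computing the incremental net monetary benefit of a coated versus an uncoated catheter:

```python
import numpy as np

def markov_cohort(p_infection, cycles=20):
    """Tiny daily-cycle Markov cohort: states = (in ICU without CR-BSI,
    CR-BSI, discharged/dead). Returns expected cost and QALYs per patient.
    All parameter values below are hypothetical placeholders."""
    P = np.array([[0.85 - p_infection, p_infection, 0.15],
                  [0.00,               0.80,        0.20],
                  [0.00,               0.00,        1.00]])
    state = np.array([1.0, 0.0, 0.0])               # everyone starts uninfected
    cost_per_cycle = np.array([3000.0, 5500.0, 0.0])  # AUD per day in state
    qaly_per_cycle = np.array([0.0015, 0.0008, 0.0])
    cost = qalys = 0.0
    for _ in range(cycles):
        cost += state @ cost_per_cycle
        qalys += state @ qaly_per_cycle
        state = state @ P                           # advance one cycle
    return cost, qalys

# Uncoated vs antimicrobial-coated catheter (hypothetical infection risks).
c0, q0 = markov_cohort(p_infection=0.010)
c1, q1 = markov_cohort(p_infection=0.004)
c1 += 30.0                                   # hypothetical coating premium
wtp = 64_000.0                               # AUD per QALY threshold
inmb = wtp * (q1 - q0) - (c1 - c0)           # incremental net monetary benefit
print(f"incremental NMB per patient: AUD {inmb:,.0f}")
```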
Abstract:
The study reported here constitutes a full review of the major geological events that have influenced the morphological development of the southeast Queensland region. Most importantly, it provides evidence that the region’s physiography continues to be geologically ‘active’ and, although earthquakes are presently few and of low magnitude, many past events and tectonic regimes continue to be strongly influential over drainage, morphology and topography. Southeast Queensland is typified by highland terrain of metasedimentary and igneous rocks that are parallel and close to younger, lowland coastal terrain. The region is currently situated in a passive margin tectonic setting that is now under compressive stress, although in the past, the region was subject to alternating extensional and compressive regimes. As part of the investigation, the effects of many past geological events upon landscape morphology have been assessed at multiple scales using features such as the location and orientation of drainage channels, topography, faults, fractures, scarps, cleavage, volcanic centres and deposits, and recent earthquake activity. A number of hypotheses for local geological evolution are proposed and discussed. This study has also utilised a geographic information system (GIS) approach that successfully amalgamates the various types and scales of datasets used. A new method of stream ordination has been developed and is used to compare the orientation of channels of similar orders with rock fabric, in a topologically controlled approach that other ordering systems are unable to achieve. Stream pattern analysis has been performed and the results provide evidence that many drainage systems in southeast Queensland are controlled by known geological structures and by past geological events. The results conclude that drainage at a fine scale is controlled by cleavage, joints and faults, and at a broader scale, large river valleys, such as those of the Brisbane River and North Pine River, closely follow the location of faults. These rivers appear to have become entrenched by differential weathering along these planes of weakness. Significantly, stream pattern analysis has also identified some ‘anomalous’ drainage which suggests that the orientations of these watercourses are geologically controlled, but by unknown causes. To the north of Brisbane, a ‘coastal drainage divide’ has been recognised and is described here. The divide crosses several lithological units of different age, continues parallel to the coast and prevents drainage from the highlands flowing directly to the coast for its entire length. Diversion of low order streams away from the divide may be evidence that a more recent process is the driving force. Although there is no conclusive evidence for this at present, it is postulated that the divide may have been generated by uplift or doming associated with mid-Cenozoic volcanism or a blind thrust at depth. Also north of Brisbane, on the D’Aguilar Range, an elevated valley (the ‘Kilcoy Gap’) has been identified that may have once drained towards the coast and now displays reversed drainage that may have resulted from uplift along the coastal drainage divide and of the D’Aguilar blocks. An assessment of the distribution and intensity of recent earthquakes in the region indicates that activity may be associated with ancient faults. However, recent movement on these faults during these events would have been unlikely, given that earthquakes in the region are characteristically of low magnitude. There is, however, evidence that compressive stress is building and being released periodically, and ancient faults may be a likely place for this stress to be released. The relationship between ancient fault systems and the Tweed Shield Volcano has also been discussed, and it is suggested here that the volcanic activity was associated with renewed faulting on the Great Moreton Fault System during the Cenozoic. The geomorphology and drainage patterns of southeast Queensland have been compared with expected morphological characteristics found at passive and other tectonic settings, both in Australia and globally. Of note are the comparisons with the East Brazilian Highlands, the Gulf of Mexico and the Blue Ridge Escarpment, for example. In conclusion, the results of the study clearly show that, although the region is described as a passive margin, its complex past geological history and present compressive stress regime provide a more intricate and varied landscape than would be expected along typical passive continental margins. The literature review provides background to the subject and discusses previous work and methods, whilst the findings are presented in three peer-reviewed, published papers. The methods, hypotheses, suggestions and evidence are discussed at length in the final chapter.
Abstract:
Definition of disease phenotype is a necessary preliminary to research into genetic causes of a complex disease. Clinical diagnosis of migraine is currently based on diagnostic criteria developed by the International Headache Society. Previously, we examined the natural clustering of these diagnostic symptoms using latent class analysis (LCA) and found that a four-class model was preferred. However, the classes can be ordered such that all symptoms progressively intensify, suggesting that a single continuous variable representing disease severity may provide a better model. Here, we compare two models: item response theory and LCA, each constructed within a Bayesian context. A deviance information criterion is used to assess model fit. We phenotyped our population sample using these models, estimated heritability and conducted genome-wide linkage analysis using Merlin-qtl. LCA with four classes was again preferred. After transformation, phenotypic trait values derived from both models are highly correlated (correlation = 0.99) and consequently results from subsequent genetic analyses were similar. Heritability was estimated at 0.37, while multipoint linkage analysis produced genome-wide significant linkage to chromosome 7q31-q33 and suggestive linkage to chromosomes 1 and 2. We argue that such continuous measures are a powerful tool for identifying genes contributing to migraine susceptibility.
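For reference, the deviance information criterion used here is DIC = D̄ + p_D, where D̄ is the posterior mean deviance and p_D = D̄ − D(θ̄) is the effective number of parameters. A small helper, with hypothetical posterior deviance draws standing in for real MCMC output:

```python
import numpy as np

def dic(deviance_samples, deviance_at_posterior_mean):
    """Deviance information criterion: DIC = mean deviance + p_D, where
    p_D = mean deviance - deviance at the posterior mean parameters.
    Lower DIC indicates better fit, penalised for model complexity."""
    d_bar = np.mean(deviance_samples)
    p_d = d_bar - deviance_at_posterior_mean
    return d_bar + p_d

# Hypothetical posterior deviance draws for the LCA and IRT models.
rng = np.random.default_rng(4)
dic_lca = dic(rng.normal(5230.0, 12.0, 4000), 5205.0)
dic_irt = dic(rng.normal(5260.0, 10.0, 4000), 5240.0)
print(f"DIC(LCA) = {dic_lca:.0f}, DIC(IRT) = {dic_irt:.0f}")
```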
Abstract:
In this paper, we classify, review, and experimentally compare major methods that are exploited in the definition, adoption, and utilization of element similarity measures in the context of XML schema matching. We aim to present a unified view that is useful when developing a new element similarity measure, when implementing an XML schema matching component, when using an XML schema matching system, and when comparing XML schema matching systems.
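As a concrete example of one simple family of element similarity measures used in schema matching, here is a character-trigram (Dice) similarity between element names. Real matchers combine string measures like this with structural and semantic ones; the element names below are hypothetical.

```python
def ngrams(s, n=3):
    """Character n-grams of a normalised element name."""
    s = f"#{s.lower()}#"                 # pad so short names still overlap
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def trigram_similarity(name_a, name_b):
    """Dice coefficient over character trigrams, in [0, 1]."""
    a, b = ngrams(name_a), ngrams(name_b)
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

# Comparing element names from two hypothetical purchase-order schemas.
print(trigram_similarity("shipToAddress", "ShipAddress"))   # high similarity
print(trigram_similarity("shipToAddress", "unitPrice"))     # low similarity
```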