933 results for Packet Reservation Multiple Access (PRMA)
Abstract:
Mapping Multiple Literacies brings together the latest theory and research in the fields of literacy study and European philosophy, Multiple Literacies Theory (MLT) and the philosophical work of Gilles Deleuze. It frames becoming literate as a fluid process involving multiple modes of presentation, and explains these processes in terms of making maps of our social lives and ways of doing things together. For Deleuze, language acquisition is a social activity of which we are a part, but only one part amongst many others. Masny and Cole draw on Deleuze's thinking to expand the repertoires of literacy research and understanding. They outline how we can understand literacy as a social activity and map the ways in which becoming literate may take hold of and transform communities. The chapters in this book weave together theory, data and practice to open up a creative new area of literacy studies and to provoke vigorous debate about the sociology of literacy.
Abstract:
This thematic issue on education and the politics of becoming focuses on how a Multiple Literacies Theory (MLT) plugs into practice in education. MLT does this by creating an assemblage between discourse, text, resonance and sensations. What does this produce? Becoming AND how one might live are the product of an assemblage (May, 2005; Semetsky, 2003). In this paper, MLT is the approach that explores the connection between educational theory and practice through the lens of an empirical study of multilingual children acquiring multiple writing systems simultaneously. The introduction explicates discourse, text, resonance, sensation and becoming. The second section introduces certain Deleuzian concepts that plug into MLT. The third section serves as an introduction to MLT. The fourth section is devoted to the study by way of a rhizoanalysis. Finally, drawing on the concept of the rhizome, this article exits with potential lines of flight opened by MLT. These are becomings which highlight the significance of this work in terms of transforming not only how literacies are conceptualized, especially in minority language contexts, but also how one might live.
Abstract:
There is currently little information available about reasons for contraceptive use or non-use among young Australian women, or about the reasons for choosing specific types of contraceptive methods. A comprehensive life-course perspective on women's experiences of using and obtaining contraceptives is lacking, particularly in relation to women's perceived or physical barriers to access. This paper presents an analysis of qualitative data gathered from free-text comments provided by women born between 1973 and 1978 as part of their participation in the Australian Longitudinal Study on Women's Health. The Australian Longitudinal Study on Women's Health is a large cohort study involving over 40,000 women from three age groups (aged 18-23, aged 40-45 and aged 70-75) who were selected in 1995 from the database of Medicare, the Australian universal health insurance system. The women have been surveyed about their health every 3 years, by mailed self-report surveys and, more recently, online. Written comments from 690 women across five surveys from 1996 (when they were aged 18-23 years) to 2009 (aged 31-36 years) were examined. Factors relating to contraceptive use and barriers to access were identified and explored using thematic analysis. Side-effects, method satisfaction, family timing, and hormonal balance were relevant to young women using contraception. Most women who commented about a specific contraceptive method wrote about the oral contraceptive pill. While many women were positive or neutral about their method, noting its convenience or non-contraceptive benefits, many others were concerned about adverse effects, affordability, method failure, and lack of choice. Negative experiences with health services, lack of information, and cost were identified as barriers to access. As the cohort aged over time, method choice, changing patterns of use, side-effects, and negative experiences with health services remained important themes. Side-effects, convenience, and family timing play important roles in young Australian women's experiences of contraception and barriers to access. Contrary to assumptions, barriers to contraceptive access continue to be experienced by young women as they move into adulthood. Further research is needed on how to decrease barriers to contraceptive use and minimise negative experiences, in order to ensure optimal contraceptive access for Australian women.
Abstract:
Global Navigation Satellite System (GNSS)-based observation systems can provide high-precision positioning and navigation solutions in real time, at the sub-centimetre level if carrier phase measurements are used in differential mode and all the bias and noise terms are handled well. However, these carrier phase measurements are ambiguous by unknown integer numbers of cycles. One key challenge in the differential carrier phase mode is to fix the integer ambiguities correctly. On the other hand, in safety-of-life or liability-critical applications, such as vehicle safety positioning and aviation, not only is high accuracy required, but the reliability requirement is also important. This PhD research investigates how to achieve high reliability for ambiguity resolution (AR) in a multi-GNSS environment. GNSS ambiguity estimation and validation problems are the focus of the research effort. In particular, we study the case of multiple constellations, covering the initial to full operational stages of the foreseeable Galileo, GLONASS, Compass and QZSS navigation systems over the next few years to the end of the decade. Since real observation data are only available from the GPS and GLONASS systems, a simulation method named Virtual Galileo Constellation (VGC) is applied to generate observational data from another constellation in the data analysis. In addition, both full ambiguity resolution (FAR) and partial ambiguity resolution (PAR) algorithms are used in processing single- and dual-constellation data. Firstly, a brief overview of related work on AR methods and reliability theory is given. Next, a modified inverse integer Cholesky decorrelation method and its performance on AR are presented. Subsequently, a new measure of decorrelation performance called the orthogonality defect is introduced and compared with other measures. Furthermore, a new AR scheme that takes the ambiguity validation requirement into account in controlling the search space size is proposed to improve search efficiency. With respect to the reliability of AR, we also discuss the computation of the ambiguity success rate (ASR) and confirm that the success rate computed with the integer bootstrapping method is a sharp approximation to the success rate of the actual integer least-squares (ILS) method. The advantages of multi-GNSS constellations are examined in terms of the PAR technique involving a predefined ASR. Finally, a novel satellite selection algorithm for reliable ambiguity resolution, called SARA, is developed. In summary, the study demonstrates that when the ASR is close to one, the reliability of AR can be guaranteed and the ambiguity validation is effective. The work then focuses on new strategies to improve the ASR, including a partial ambiguity resolution procedure with a predefined success rate and a novel satellite selection strategy with a high success rate. The proposed strategies bring the significant benefits of multi-GNSS signals to real-time, high-precision and high-reliability positioning services.
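The integer bootstrapping success rate mentioned above has a well-known closed form due to Teunissen: the product over all ambiguities of (2*Phi(1/(2*sigma_i)) - 1), where sigma_i is the conditional standard deviation of the i-th decorrelated float ambiguity and Phi is the standard normal CDF. A minimal sketch, assuming those conditional standard deviations have already been obtained from the factorization of the ambiguity covariance matrix:

```python
from math import erf, sqrt

def bootstrap_success_rate(cond_stds):
    """Integer-bootstrapping ambiguity success rate (Teunissen):
    P = prod_i (2*Phi(1/(2*sigma_i)) - 1),
    where sigma_i are the conditional standard deviations of the
    (decorrelated) float ambiguities and Phi is the standard normal CDF."""
    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

# Example: well-decorrelated ambiguities with small conditional std devs
print(bootstrap_success_rate([0.10, 0.12, 0.15]))  # close to 1 => AR is reliable
```

With well-decorrelated ambiguities each per-ambiguity factor approaches one, which is why decorrelation quality (e.g. the orthogonality defect above) matters directly for the ASR.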
Abstract:
A procedure for the evaluation of multiple scattering contributions is described, for deep inelastic neutron scattering (DINS) studies using an inverse geometry time-of-flight spectrometer. The accuracy of a Monte Carlo code DINSMS, used to calculate the multiple scattering, is tested by comparison with analytic expressions and with experimental data collected from polythene, polycrystalline graphite and tin samples. It is shown that the Monte Carlo code gives an accurate representation of the measured data and can therefore be used to reliably correct DINS data.
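To illustrate the general idea behind such a Monte Carlo correction (this is not the DINSMS code itself), the sketch below estimates the fraction of detected events that scattered more than once in a slab sample; the geometry, the isotropic scattering law and all parameter values are placeholder assumptions:

```python
import math
import random

def multiple_scattering_fraction(slab_thickness_mfp=0.5, albedo=0.9,
                                 n_histories=100_000, seed=1):
    """Toy Monte Carlo estimate of the fraction of scattered neutrons that
    scattered more than once inside a slab sample.  Distances are in units
    of the mean free path; 'albedo' is the scattering (vs absorption)
    probability per collision.  Isotropic re-direction is assumed."""
    rng = random.Random(seed)
    single = multiple = 0
    for _ in range(n_histories):
        x, mu, n_scat = 0.0, 1.0, 0  # position, direction cosine, collisions
        while True:
            x += mu * -math.log(rng.random())    # sample free flight length
            if x < 0.0 or x > slab_thickness_mfp:
                break                            # neutron escaped the sample
            if rng.random() > albedo:
                n_scat = 0                       # absorbed: no contribution
                break
            n_scat += 1
            mu = 2.0 * rng.random() - 1.0        # isotropic new direction
        if n_scat == 1:
            single += 1
        elif n_scat >= 2:
            multiple += 1
    return multiple / (single + multiple)

print(multiple_scattering_fraction())
```

In a real correction the simulated multiple-scattering contribution is subtracted from the measured time-of-flight spectra, which is what the comparisons against analytic expressions and the polythene, graphite and tin data validate.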
Abstract:
Welcome to the Quality assessment matrix. This matrix is designed for highly qualified discipline experts to evaluate their course, major or unit in a systematic manner. The primary purpose of the Quality assessment matrix is to provide a tool with which a group of academic staff at a university can collaboratively review the assessment within a course, major or unit each year. The annual review will leave you ready for an external curriculum review at any point in time. This tool is designed for use in a workshop format with one, two or more academic staff, and will lead to an action plan for implementation.
Abstract:
Review of the book 'Access to East European and Eurasian culture: publishing, acquisitions, digitization, metadata', edited by Miranda Remnek, published by Haworth Information Press, 2009.
Abstract:
BACKGROUND: The treatment of deep surgical site infection (SSI) following primary total hip arthroplasty (THA) varies internationally, and it is at present unclear which treatment approaches are used in Australia. The aim of this study is to identify current treatment approaches in Queensland, Australia, report success rates and quantify the costs of different treatments. METHODS: Data for patients undergoing primary THA and treatment for infection between January 2006 and December 2009 in Queensland hospitals were extracted from routinely used hospital databases. Records were linked with pathology information to confirm positive organisms. Diagnosis and treatment of infection were determined using ICD-10-AM and ACHI codes, respectively. Treatment costs were estimated based on AR-DRG cost accounting codes assigned to each patient hospital episode. RESULTS: A total of n=114 patients with deep surgical site infection were identified. The majority of patients (74%) were first treated with debridement, antibiotics and implant retention (DAIR), which was successful in eradicating the infection in 60.3% of patients, with an average cost of $13,187. The remaining first treatments were 1-stage revision, successful in 89.7% of cases with average costs of $27,006, and 2-stage revision, successful in 92.9% of cases with average costs of $42,772. Multiple treatments following failed DAIR cost on average $29,560; following failed 1-stage revision, $24,357; following failed 2-stage revision, $70,381; and for excision arthroplasty, $23,805. CONCLUSIONS: As treatment costs in Australia are high, primary prevention is important, and the economics of competing treatment choices should be carefully considered. These choices currently vary greatly across international settings.
Abstract:
The recent Australian Convergence Review’s second principle states: “Australians should have access to and opportunities for participation in a diverse mix of services, voices, views and information”. However, in failing to define its own use and understanding of the terms ‘access’ and ‘participation’, the Convergence Review exposes itself to criticism. These terms would no doubt need to be made unambiguously clear should the Review’s recommendations move towards policy, and this paper contributes to that discussion by framing access and participation, from the perspective of the ‘produser’ (Bruns, 2008), around three separate but related issues: the failure to frame the discussion to be undertaken by the Australian Law Reform Commission’s 2012-2013 Copyright Inquiry; the prioritising of the market over and above media accountability and the health of the public sphere; and the missed opportunity to develop a national framework for digital literacy and advanced digital citizenry.
Abstract:
The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables, so pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely, which can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. The likelihood is estimated independently on each of the multiple CPUs, and the final estimate of the likelihood is the average of the estimates obtained from the multiple CPUs. The estimate remains unbiased, but its variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this idea over the standard approach is demonstrated on simulated data from a stochastic volatility model.
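A minimal sketch of the averaging idea, using Python's standard multiprocessing module. The importance-sampling estimator below is a toy stand-in for whatever unbiased likelihood estimator the model actually requires (e.g. a particle filter for the stochastic volatility model); the model and all parameter values are illustrative assumptions:

```python
import multiprocessing as mp
import numpy as np

def unbiased_likelihood_estimate(args):
    """Toy unbiased estimator of p(y | theta): importance sampling over
    latent x ~ N(theta, 1) with observation density y | x ~ N(x, 1)."""
    theta, y, n_particles, seed = args
    rng = np.random.default_rng(seed)
    latent = rng.normal(theta, 1.0, size=n_particles)               # x ~ p(x | theta)
    weights = np.exp(-0.5 * (y - latent) ** 2) / np.sqrt(2 * np.pi)  # p(y | x)
    return weights.mean()                                # unbiased for p(y | theta)

def averaged_estimate(theta, y, n_cpus=4, n_particles=1000):
    """Average independent estimates from n_cpus workers: still unbiased,
    with variance reduced by a factor of roughly n_cpus."""
    seeds = np.random.SeedSequence().spawn(n_cpus)
    jobs = [(theta, y, n_particles, int(s.generate_state(1)[0])) for s in seeds]
    with mp.Pool(n_cpus) as pool:
        estimates = pool.map(unbiased_likelihood_estimate, jobs)
    return float(np.mean(estimates))

if __name__ == "__main__":
    print(averaged_estimate(theta=0.3, y=0.5))
```

Because an average of independent unbiased estimates is itself unbiased, this averaged estimate can be plugged directly into the pseudo-marginal acceptance ratio; the roughly n-fold variance reduction is what improves mixing.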
Abstract:
Despite its potential multiple contributions to sustainable policy objectives, urban transit is generally not widely used by the public in terms of market share compared with automobiles, particularly in affluent societies with low-density urban forms such as Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvements lies in accurate evaluation of policy proposals that takes into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users’ travel decisions, with targets of customer satisfaction and broader community welfare. This significance motivates the research into the relationship between urban transit quality of service and user perception as well as behaviour. This research focused on two dimensions of transit users’ travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St. Lucia campus of The University of Queensland (UQ). This multi-system corridor provided a ‘natural experiment’ for transit users between the CBD and UQ, as they can choose between busway route 109 (with grade-separated exclusive right-of-way), ordinary on-street bus route 412, and the linear fast ferry CityCat on the Brisbane River. The population of interest was set as attendees at UQ who travelled from the CBD or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers’ perceptions of transit service quality and their behaviour in using public transit in the study area. The first wave survey collected behaviour and attitude data on respondents’ daily transit usage and their direct ratings of the importance of factors of route-level transit quality of service. A series of statistical analyses was conducted to examine the relationships between transit users’ travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents’ importance ratings on service quality variables regarding transit route preference, to explore users’ various perspectives on transit quality of service. Based on the perceptions of service quality collected from the second wave survey, a series of quality criteria of the transit routes under study was quantitatively measured, in particular travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users’ route choice were estimated using route-level service quality perceptions collected in the second wave survey. The relative importance of service quality factors was derived from the choice models’ significant parameter estimates, such as access and egress times, seat availability, and the busway system. Interpretations of the parameter estimates were conducted, particularly the equivalent in-vehicle time of access and egress times, and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted.
These importance ratios were applied back to the quality perceptions collected as revealed-preference (RP) data, to compare satisfaction levels between the service attributes and to generate an action relevance matrix that prioritises attributes for quality improvement. An empirical study of the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles in order to reduce waiting time. This drove further investigations and modelling innovations in passengers’ access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time; in the analysis of the risk-averse attitude towards missing the desired service run in passengers’ access arrival time choice; and in extensions of the utility function specification for modelling the passenger access arrival distribution, using more elaborate expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and the passenger’s risk-averse attitude. Discussions of this research’s contributions to knowledge, its limitations, and recommendations for future research are provided in the concluding section of this thesis.
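The expected waiting time formulation can be illustrated with a small Monte Carlo sketch: a passenger plans to arrive some minutes before the scheduled departure, the vehicle's actual departure deviates randomly from the schedule, and missing the run means waiting a full headway for the next one. The distributions and parameter values below are illustrative assumptions, not estimates from the thesis:

```python
import numpy as np

def expected_wait(arrive_before=3.0, headway=20.0,
                  delay_mean=0.0, delay_sd=2.0, n_sim=200_000, seed=0):
    """Monte Carlo expected waiting time for a passenger arriving
    'arrive_before' minutes ahead of the scheduled departure (time 0).
    The vehicle departs at 0 + delay, delay ~ N(delay_mean, delay_sd).
    If the vehicle has already left, the passenger waits for the next
    run one headway later (assumed on time, for simplicity)."""
    rng = np.random.default_rng(seed)
    delay = rng.normal(delay_mean, delay_sd, n_sim)
    arrival = -arrive_before                 # passenger arrival time
    wait = np.where(delay >= arrival,
                    delay - arrival,         # caught the intended run
                    headway - arrival)       # missed it: wait for next run
    return float(wait.mean())

# Arriving earlier lowers the risk of missing an early-departing vehicle
# but lengthens the typical wait; a risk-averse passenger trades these off.
for t in (0.0, 2.0, 5.0):
    print(f"arrive {t:.0f} min early -> E[wait] = {expected_wait(t):.2f} min")
```

The output exhibits the trade-off that motivates the risk-aversion analysis: a risk-averse passenger will accept a longer average wait in exchange for a lower probability of the headway-sized penalty.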
Abstract:
Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis of text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared with statistically independent decisions, the performance is not always optimal. Given a pool of instances, best performance with this architecture is obtained for a certain combination of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved with the 'best combination performance' rule. As the search complexity of this rule increases exponentially with the addition of classifiers, a measure, the sequential error ratio (SER), is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with those of other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances, with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as multiple-fingerprint and multiple-handwriting-sample based identity verification systems.
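A generic sketch of the n-instance, m-sample sequential fusion architecture described above. The acceptance rule here (OR across up to m samples within a stage, AND across the n stages) and the stage ordering are illustrative assumptions; the SER computation itself is the paper's contribution and is not reproduced here:

```python
from typing import Callable, List, Sequence

def sequential_fusion(classifiers: List[Callable[[object], bool]],
                      attempts: Sequence[Sequence[object]],
                      m: int) -> bool:
    """Illustrative n-instance, m-sample sequential decision fusion:
    OR across up to m samples within each classifier (a pass at any
    attempt clears that stage) and AND across the n classifier stages
    (every stage must be cleared to accept).  Stages are visited in the
    given order, so placing the classifier most likely to be correct
    first, e.g. ranked by a measure such as the sequential error ratio,
    reduces the expected number of attempts."""
    for clf, samples in zip(classifiers, attempts):
        if not any(clf(s) for s in samples[:m]):
            return False        # all m samples rejected at this stage
    return True                 # accepted by every stage

# Hypothetical demo: threshold "classifiers" applied to similarity scores
clfs = [lambda s: s > 0.6, lambda s: s > 0.5]
print(sequential_fusion(clfs, attempts=[[0.4, 0.7], [0.8]], m=2))  # True
```

Stage ordering matters because an early, error-prone stage can reject a genuine user before the stronger classifiers are ever consulted, which is the situation a per-stage quality measure like SER is meant to avoid.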
Abstract:
We introduce the use of Ingenuity Pathway Analysis for analysing global metabonomic data, in order to characterize phenotypic biochemical perturbations and the potential mechanisms of gentamicin-induced toxicity in multiple organs. A single dose of gentamicin was administered to Sprague Dawley rats (200 mg/kg, n = 6) and urine samples were collected at -24-0 h pre-dose and at 0-24, 24-48, 48-72 and 72-96 h post-dose. The urine metabonomic analysis was performed by UPLC/MS, and the mass spectral signals of the detected metabolites were systematically deconvoluted and analysed by pattern recognition analyses (heatmap, PCA and PLS-DA), revealing the time dependency of the biochemical perturbations induced by gentamicin toxicity. As a result, the holistic metabolome change induced by gentamicin toxicity in the animals was characterized. Several metabolites involved in amino acid metabolism were identified in urine, and it was confirmed that gentamicin's biochemical perturbations can be foreseen from these biomarkers. Notably, it was found that gentamicin induced toxicity in multiple organ systems in the laboratory rats. The knowledge-based Ingenuity Pathway Analysis revealed gentamicin-induced liver and heart toxicity, along with the previously known kidney toxicity. The metabolites creatine, nicotinic acid, prostaglandin E2, and cholic acid were identified and validated as phenotypic biomarkers of gentamicin-induced toxicity. Altogether, the significance of metabonomic analyses in the assessment of drug toxicity is highlighted once more; furthermore, this work demonstrates the powerful predictive potential of Ingenuity Pathway Analysis for the study of drug toxicity and its value as a complement to metabonomics-based assessment of drug toxicity.
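The unsupervised pattern-recognition step can be illustrated with a short sketch: a PCA of a (samples x metabolite features) intensity matrix, with the score coordinates grouped by collection window to expose the kind of time-dependent drift described above. The data below are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
time_points = ["pre", "0-24h", "24-48h", "48-72h", "72-96h"]
n_rats, n_features = 6, 50

# Synthetic UPLC/MS-style intensity matrix: a toxicity signal that grows
# and then partially recovers across the collection windows.
shift = np.array([0.0, 2.0, 3.0, 2.0, 1.0])
X = np.vstack([rng.normal(loc=s, scale=1.0, size=(n_rats, n_features))
               for s in shift])
labels = np.repeat(time_points, n_rats)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for tp in time_points:                      # mean PC1/PC2 per time window
    centre = scores[labels == tp].mean(axis=0)
    print(f"{tp:>7}: PC1={centre[0]:6.2f}  PC2={centre[1]:6.2f}")
```

A trajectory of the score centroids away from the pre-dose cluster and partially back by 72-96 h is the kind of time-dependent pattern that precedes the identification of individual biomarkers such as creatine or cholic acid.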