Abstract:
Measuring the business value that Internet technologies deliver for organisations has proven to be a difficult and elusive task, given their complexity and increased embeddedness within the value chain. Yet, despite the lack of empirical evidence that links the adoption of Information Technology (IT) with increased financial performance, many organisations continue to adopt new technologies at a rapid rate. This is evident in the widespread adoption of Web 2.0 online Social Networking Services (SNSs) such as Facebook, Twitter and YouTube. These new Internet-based technologies, widely used for social purposes, are being employed by organisations to enhance their business communication processes. However, their use is yet to be correlated with an increase in business performance. Owing to the conflicting empirical evidence that links prior IT applications with increased business performance, IT, Information Systems (IS), and E-Business Model (EBM) research has increasingly looked to broader social and environmental factors as a means for examining and understanding the broader influences shaping IT, IS and E-Business (EB) adoption behaviour. Findings from these studies suggest that organisations adopt new technologies as a result of strong external pressures, rather than a clear measure of enhanced business value. In order to ascertain if this is the case with the adoption of SNSs, this study explores how organisations are creating value (and measuring that value) with the use of SNSs for business purposes, and the external pressures influencing their adoption. In doing so, it seeks to address two research questions: 1. What are the external pressures influencing organisations to adopt SNSs for business communication purposes? 2. Are SNSs providing increased business value for organisations, and if so, how is that value being captured and measured? Informed by the background literature fields of IT, IS, EBM, and Web 2.0, a three-tiered theoretical framework is developed that combines macro-societal, social and technological perspectives as possible causal mechanisms influencing the SNS adoption event. The macro-societal view draws on Castells’ (1996) concept of the network society and the behaviour of crowds, herds and swarms, to formulate a new explanatory concept of the network vortex. The social perspective draws on key components of institutional theory (DiMaggio & Powell, 1983, 1991), and the technical view draws from the organising vision concept developed by Swanson and Ramiller (1997). The study takes a critical realist approach, and conducts four stages of data collection and one stage of data coding and analysis. Stage 1 consisted of content analysis of the websites and SNSs of many organisations, to identify the types of business purposes SNSs are being used for. Stage 2 also involved content analysis of organisational websites, in order to identify suitable sample organisations in which to conduct telephone interviews. Stage 3 consisted of conducting 18 in-depth, semi-structured telephone interviews within eight Australian organisations from the Media/Publishing and Galleries, Libraries, Archives and Museums (GLAM) industries. These sample organisations were considered leaders in the use of SNS technologies. Stage 4 involved an SNS activity count of the organisations interviewed in Stage 3, in order to rate them as either Advanced Innovator (AI) organisations or Learning Focussed (LF) organisations.
A fifth stage of data coding and analysis of all four data collection stages was conducted, based on the theoretical framework developed for the study, and using QSR NVivo 8 software. The findings from this study reveal that SNSs have been adopted by organisations for the purpose of increasing business value, and as a result of strong social and macro-societal pressures. SNSs offer organisations a wide range of value-enhancing opportunities that have broader benefits for customers and society. However, measuring the increased business value is difficult with traditional Return On Investment (ROI) mechanisms, highlighting the need for new value capture and measurement rationales to support the accountability of SNS adoption practices. The study also identified the presence of technical, social and macro-societal pressures, all of which influenced SNS adoption by organisations. These findings contribute important theoretical insight into the increased complexity of pressures influencing technology adoption rationales by organisations, and have important implications for practice, reflecting the expanded global online networks in which organisations now operate. The limitations of the study include the small number of sample organisations in which interviews were conducted, its limited generalisability, and the small range of SNSs selected for the study. However, these were compensated for in part by the expertise of the interviewees and the global significance of the SNSs that were chosen. Future research could replicate the study with a larger sample from different industries, sectors and countries. It could also explore the life cycle of SNSs in a longitudinal study, and map how the technical, social and macro-societal pressures are emphasised through stages of the life cycle. The theoretical framework could also be applied to other social fad technology adoption studies.
Abstract:
Vehicle-emitted particles are of significant concern because of their potential to influence local air quality and human health. Transport microenvironments usually contain higher vehicle emission concentrations compared to other environments, and people spend a substantial amount of time in these microenvironments when commuting. Currently there is limited scientific knowledge on particle concentration, passenger exposure and the distribution of vehicle emissions in transport microenvironments, partially due to the fact that the instrumentation required to conduct such measurements is not available in many research centres. Information on passenger waiting time and location in such microenvironments has also not been investigated, which makes it difficult to evaluate a passenger’s spatial-temporal exposure to vehicle emissions. Furthermore, current emission models are incapable of rapidly predicting emission distribution, given the complexity of variations in emission rates that result from changes in driving conditions, as well as the time spent in each driving condition within the transport microenvironment. In order to address these gaps in scientific knowledge, this work conducted, for the first time, a comprehensive statistical analysis of experimental data, along with multi-parameter assessment, exposure evaluation and comparison, and emission model development and application, in relation to traffic-interrupted transport microenvironments. The work aimed to quantify and characterise particle emissions and human exposure in transport microenvironments, with bus stations and a pedestrian crossing identified as suitable research locations representing a typical transport microenvironment. Firstly, two bus stations in Brisbane, Australia, with different designs, were selected to conduct measurements of particle number size distributions, particle number and PM2.5 concentrations during two different seasons. Simultaneous traffic and meteorological parameters were also monitored, aiming to quantify particle characteristics and investigate the impact of bus flow rate, station design and meteorological conditions on particle characteristics at stations. The results showed higher concentrations of PN20-30 at the station situated in an open area (open station), which is likely attributable to the lower average daily temperature compared to the station with a canyon structure (canyon station). During precipitation events, it was found that particle number concentration in the size range 25-250 nm decreased greatly, and that the average daily reduction in PM2.5 concentration on rainy days compared to fine days was 44.2% and 22.6% at the open and canyon station, respectively. The effect of ambient wind speeds on particle number concentrations was also examined, and no relationship was found between particle number concentration and wind speed for the entire measurement period. In addition, 33 pairs of average half-hourly PN7-3000 concentrations were calculated and identified at the two stations, during the same time of day, and with the same ambient wind speeds and precipitation conditions. The results of a paired t-test showed that the average half-hourly PN7-3000 concentrations at the two stations were not significantly different at the 5% significance level (t = 0.06, p = 0.96), which indicates that the different station designs were not a crucial factor influencing PN7-3000 concentrations.
A further assessment of passenger exposure to bus emissions on a platform was conducted at another bus station in Brisbane, Australia. The sampling was conducted over seven weekdays to investigate spatial-temporal variations in size-fractionated particle number and PM2.5 concentrations, as well as human exposure on the platform. For the whole day, the average PN13-800 concentration was 1.3 x 10^4 and 1.0 x 10^4 particles/cm3 at the centre and end of the platform, respectively, of which PN50-100 accounted for the largest proportion of the total count. Furthermore, the contribution of exposure at the bus station to the overall daily exposure was assessed using two assumed scenarios of a school student and an office worker. It was found that, although the daily time fraction (the percentage of time spent at a location in a whole day) at the station was only 0.8%, the daily exposure fractions (the percentage of the daily exposure accounted for by a location) at the station were 2.7% and 2.8% for exposure to PN13-800 and 2.7% and 3.5% for exposure to PM2.5 for the school student and the office worker, respectively. A new parameter, “exposure intensity” (the ratio of the daily exposure fraction to the daily time fraction), was also defined and calculated at the station, with values of 3.3 and 3.4 for exposure to PN13-800, and 3.3 and 4.2 for exposure to PM2.5, for the school student and the office worker, respectively. In order to quantify the enhanced emissions at critical locations and define the emission distribution for further dispersion models of traffic-interrupted transport microenvironments, a composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multi-representative segments to capture the accurate emission distribution for real vehicle flow. The model not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bidirectional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. The CLSE model was also applied at a signalled pedestrian crossing, in order to assess the increased particle number emissions from motor vehicles when forced to stop and accelerate from rest. The CLSE model was used to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses, including 1 car travelling in 1 direction (1 car / 1 direction), 14 cars / 1 direction, 1 bus / 1 direction, 28 cars / 2 directions, 24 cars and 2 buses / 2 directions, and 20 cars and 4 buses / 2 directions.
It was found that the total emissions produced while traffic was stopped at a red signal were significantly higher than when the traffic moved at a steady speed. Overall, total emissions due to the interruption of the traffic increased by a factor of 13, 11, 45, 11, 41, and 43 for the above six cases, respectively. In summary, this PhD thesis presents the results of a comprehensive study on particle number and mass concentration, together with particle size distribution, in a bus station transport microenvironment, as influenced by bus flow rates, meteorological conditions and station design. Passenger spatial-temporal exposure to bus-emitted particles was also assessed according to waiting time and location along the platform, as was the contribution of exposure at the bus station to overall daily exposure. Due to the complexity of the interrupted traffic flow within transport microenvironments, a unique CLSE model was also developed, which is capable of quantifying emission levels at critical locations within the transport microenvironment, for the purpose of evaluating passenger exposure and conducting simulations of vehicle emission dispersion. The application of the CLSE model at a pedestrian crossing also demonstrated its applicability and simplicity for use in a real-world transport microenvironment.
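As a rough illustration of the exposure bookkeeping described in this abstract, the sketch below computes a daily exposure fraction and the derived exposure intensity for a hypothetical day split between a bus station, home and an office. The concentrations and durations are invented for illustration only and are not values reported in the thesis.

```python
def exposure_metrics(concs, minutes, station_idx=0):
    """Time-weighted exposure bookkeeping (illustrative only).
    concs: mean particle concentrations in each microenvironment visited,
    minutes: time spent in each; station_idx points at the bus station."""
    exposures = [c * t for c, t in zip(concs, minutes)]
    time_fraction = minutes[station_idx] / sum(minutes)          # daily time fraction
    exposure_fraction = exposures[station_idx] / sum(exposures)  # daily exposure fraction
    return exposure_fraction, exposure_fraction / time_fraction  # exposure intensity

# Hypothetical day: 12 min at a bus station, the rest at home and in an office
frac, intensity = exposure_metrics(
    concs=[1.3e4, 3.0e3, 5.0e3],   # particles/cm3 at station, home, office (assumed)
    minutes=[12, 1000, 428],       # minutes spent at each (sums to 1440)
)
print(f"daily exposure fraction {frac:.1%}, exposure intensity {intensity:.1f}")
```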
Abstract:
Iron (Fe) is the fourth most abundant element in the Earth’s crust. Excess Fe mobilization from terrestrial into aquatic systems is of concern due to deterioration of water quality via biofouling and nuisance algal blooms in coastal and marine systems. Substantial Fe dissolution and transport involve alternating Fe(II) oxidation and Fe(III) reduction, with a diversity of Bacteria and Archaea acting as the key catalysts. Microbially-mediated Fe cycling is of global significance with regard to the cycles of carbon (C), sulfur (S) and manganese (Mn). However, knowledge regarding microbial Fe cycling in the circumneutral-pH habitats that prevail on Earth has been lacking until recently. In particular, little is known regarding microbial function in Fe cycling and associated Fe mobilization and greenhouse gas (CO2 and CH4; GHG) evolution in subtropical Australian coastal systems, where microbial response to ambient variations such as seasonal flooding and land use changes is of concern. Using the plantation-forested Poona Creek catchment on the Fraser Coast of Southeast Queensland (SEQ), this research aimed to 1) study Fe cycling-associated bacterial populations in diverse terrestrial and aquatic habitats of a representative subtropical coastal circumneutral-pH (4–7) ecosystem; and 2) assess potential impacts of Pinus plantation forestry practices on microbially-mediated Fe mobilization, organic C mineralization and associated GHG evolution in coastal SEQ. A combination of wet-chemical extraction, undisturbed core microcosm, laboratory bacterial cultivation, microscopy and 16S rRNA-based molecular phylogenetic techniques was employed. The study area consisted primarily of loamy sands, with low organic C and dissolved nutrients. Total reactive Fe was abundant and evenly distributed within 0–30 cm soil profiles. Organic complexation primarily controlled Fe bioavailability and forms in well-drained plantation soils and water-logged, native riparian soils, whereas tidal flushing exerted a strong “seawater effect” in estuarine locations and formed a large proportion of inorganic Fe(III) complexes. There was a lack of Fe(II) sources across the catchment terrestrial system. Mature, first-rotation plantation clear-felling and second-rotation replanting significantly decreased organic matter and poorly crystalline Fe in well-drained soils, although variations in labile soil organic C fractions (dissolved organic C, DOC; and microbial biomass C, MBC) were minor. Both well-drained plantation soils and water-logged, native-vegetation soils were inhabited by a variety of cultivable, chemotrophic bacterial populations capable of C, Fe, S and Mn metabolism via lithotrophic or heterotrophic, (micro)aerobic or anaerobic pathways. Neutrophilic Fe(III)-reducing bacteria (FeRB) were most abundant, followed by aerobic, heterotrophic bacteria (heterotrophic plate count, HPC). Despite an abundance of FeRB, cultivable Fe(II)-oxidizing bacteria (FeOB) were absent in associated soils. The lack of links between cultivable Fe, S or Mn bacterial densities and relevant chemical measurements (except for HPC, which correlated with DOC) was likely due to complex biogeochemical interactions. Neither did variations in cultivable bacterial densities correlate with plantation forestry practices, despite total cultivable bacterial densities being significantly lower in estuarine soils than in well-drained plantation soils and water-logged, riparian native-vegetation soils.
Given that bacterial Fe(III) reduction is the primary mechanism of Fe oxide dissolution in soils upon saturation, associated Fe mobilization involved several abiotic and biological processes. Abiotic oxidation of dissolved Fe(II) by Mn appeared to control Fe transport and inhibit Fe dissolution from mature, first-rotation plantation soils post-saturation. Such an effect was not observed in clear-felled and replanted soils associated with low soil organic matter and potentially low Mn reactivity. Associated GHG evolution post-saturation mainly involved variable CO2 emissions, with low, but consistently increasing, CH4 effluxes in mature, first-rotation plantation soil only. In comparison, water-logged soils in the riparian native-vegetation buffer zone functioned as an important GHG source, with high potential for Fe mobilization and GHG emissions, particularly CH4, in riparian loam soils associated with high clay and crystalline Fe fractions. Active Fe–C cycling was unlikely to occur in lower-catchment estuarine soils, which were associated with low cultivable bacterial densities and GHG effluxes. As a key component of bacterial Fe cycling, neutrophilic FeOB occurred widely in diverse aquatic, but not terrestrial, habitats of the catchment study area. Stalked and sheathed FeOB resembling Gallionella and Leptothrix were limited to microbial mat material deposited in surface fresh waters associated with a circumneutral-pH seep, and to clay-rich soil within riparian buffer zones. Unicellular, Sideroxydans-related FeOB (96% sequence identity) were ubiquitous in surface and subsurface freshwater environments, with highest abundance in estuary-adjacent shallow coastal groundwater associated with redox transition. The abundance of dissolved C and Fe in the groundwater-dependent system was associated with high numbers of cultivable anaerobic, heterotrophic FeRB; microaerophilic, putatively lithotrophic FeOB; and aerobic, heterotrophic bacteria. This research represents the first study of microbial Fe cycling in diverse circumneutral-pH environments (terrestrial–aquatic, freshwater–estuarine, surface–subsurface) of a subtropical coastal ecosystem. It also represents the first study of its kind in the southern hemisphere. This work highlights the significance of bacterial Fe(III) reduction in terrestrial, and bacterial Fe(II) oxidation in aquatic, catchment Fe cycling. Results indicate a risk of enhanced Fe mobilization due to plantation clear-felling and replanting, and of GHG emissions associated with seasonal water-logging. Additional significant outcomes were also achieved. The first direct evidence for multiple biomineralization patterns of neutrophilic, microaerophilic, unicellular FeOB was presented. A putatively pure culture, representing the first cultivable neutrophilic FeOB from the southern hemisphere, was obtained as a representative of the FeOB ubiquitous in diverse catchment aquatic habitats.
Abstract:
Modelling an environmental process involves creating a model structure and parameterising the model with appropriate values to accurately represent the process. Determining accurate parameter values for environmental systems can be challenging. Existing methods for parameter estimation typically make assumptions regarding the form of the likelihood, and will often ignore any uncertainty around estimated values. This can be problematic, however, particularly in complex problems where likelihoods may be intractable. In this paper we demonstrate an Approximate Bayesian Computation (ABC) method for the estimation of parameters of a stochastic cellular automaton (CA). We use as an example a CA constructed to simulate a range expansion such as might occur after a biological invasion, making parameter estimates using only count data such as could be gathered from field observations. We demonstrate that ABC is a highly useful method for parameter estimation, giving accurate estimates of parameters that are important for the management of invasive species, such as the intrinsic rate of increase and the point in a landscape where a species has invaded. We also show that the method is capable of estimating the probability of long distance dispersal, a characteristic of biological invasions that is very influential in determining spread rates but has until now proved difficult to estimate accurately.
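The abstract does not give implementation details, but a minimal rejection-ABC loop for a toy stochastic CA of range expansion might look like the sketch below. The CA update rules, priors, distance metric and tolerance are all illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(r, p_ldd, n_steps=20, grid=100):
    """Toy 1-D stochastic CA: local spread with probability r per occupied cell,
    plus rare long-distance dispersal with probability p_ldd.
    Returns occupied-cell counts per step (the 'field count' data)."""
    occ = np.zeros(grid, dtype=bool)
    occ[grid // 2] = True
    counts = []
    for _ in range(n_steps):
        new = occ.copy()
        for i in np.flatnonzero(occ):
            if rng.random() < r:          # colonise a neighbouring cell
                new[min(grid - 1, max(0, i + rng.choice([-1, 1])))] = True
            if rng.random() < p_ldd:      # long-distance dispersal jump
                new[rng.integers(grid)] = True
        occ = new
        counts.append(occ.sum())
    return np.array(counts)

# "Observed" counts generated with known parameters, standing in for field data
obs = simulate_counts(r=0.6, p_ldd=0.02)

# Rejection ABC: draw from the priors, keep draws whose simulated counts are close
accepted = []
for _ in range(5000):
    r, p_ldd = rng.uniform(0, 1), rng.uniform(0, 0.1)
    if np.linalg.norm(simulate_counts(r, p_ldd) - obs) < 20:   # tolerance epsilon
        accepted.append((r, p_ldd))

post = np.array(accepted)
print(f"accepted {len(post)} draws")
if len(post):
    print("posterior means (r, p_ldd):", post.mean(axis=0))
```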
Abstract:
Journeys with Friends. Truna aka J. Turner, Giselle Rosman and Matt Ditton. Panel session description: We are no longer an industry (alone), we are a sector. Where the model once consisted of industry making games, we now see the rise of a cultural sector playing in the game space – industry, indies (for whatever that distinction implies), artists (another odd distinction), individuals and well … everyone and their mums. This evolution has an effect – on audiences and who they are, what they expect and want, and how they understand the purpose and language of these “digital game forms”; how we talk about our worlds and the kinds of issues that are raised; on what we create and how we create it; and on our communities and who we are. This evolution has an effect on how these works are understood within the wider social context and how we present this understanding to the next generation of makers and players. We can see the potential of this evolution from industry to sector in the rise of the Australian indie. We can see the potential fractures created by this evolution in the new voices that ask questions about diversity and social justice. And yet, we still see a ‘solution’ type reaction to the current changing state of our sector which announces the monolithic, Fordist model as desirable (albeit in smaller form) – with the subsequent ramifications for ‘training’ and production of local talent. Experts talk about a mismatch of graduate skills and industry needs, insufficient linkages between industry and education providers, and the need to explore opportunities for the now passing model in new spaces such as adver-games and serious games. Head counts of the Australian industry don’t recognise trans media producers as being part of their purview or opportunity, and they don’t count the rise of cultural, playful, game-inspired creative works as one of their team. Such perspectives are indeed relevant to the Australian Games Industry, but what about the emerging Australian Games Sector? How do we enable a future in such a space? This emerging sector is perhaps best represented by Melbourne’s Freeplay audience: a heady mix of indie developers, players, artists, critical thinkers and industry. Such audiences are no longer content with an ‘industry’ alone; they are the community who already see themselves as an important, vibrant cultural sector. Part of the discussion presented here seeks to identify and understand the resources, primarily in the context of community and educational opportunities, available to the evolving sector, which now relies more on the creative process. This creative process and community building is already visibly growing within the context of smaller development studios, often involving more multiskilling production methodologies, where the definition of ‘game’ clearly evolves beyond the traditional one.
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation results in a highly efficient and stable overall multiprocessor system and good individual thread performance, ineffective resource allocation causes significant performance bottlenecks even for systems with abundant computing resources. This thesis proposes a cache aware adaptive closed loop scheduling framework as an efficient resource allocation strategy for the highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this highly dynamic resource allocation problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the resource allocation problem are well considered. These approaches rely on either static and dynamic optimization methods or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of these approaches, which consider the dynamic nature of multiprocessor systems, apply only a basic closed loop system; hence, they fail to take the time-varying nature and uncertainty of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed loop cache aware adaptive scheduling framework takes the resource availability and the resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of the thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time series statistics. For the identified cache resource dynamics, our closed loop cache aware adaptive scheduling framework enforces instruction fairness for the threads. Fairness in the context of our research project is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed loop cache aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second major contribution is the development of relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed loop aspect to the cache aware scheduling system. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimator; the QR recursive least squares (RLS) algorithm is applied within our closed loop cache aware adaptive scheduling framework to estimate the highly uncertain and time-varying cache resource patterns of threads.
The second minor contribution is the controller design module; the algebraic controller design algorithm, Pole Placement, is utilized to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache aware scheduling system together constitute our final framework, the closed loop cache aware adaptive scheduling framework. The third minor contribution is the validation of the efficiency of this cache aware adaptive closed loop scheduling framework in overcoming co-runner cache dependency. Time-series statistical counters were developed for the M-Sim Multi-Core Simulator, and the theoretical findings and mathematical formulations were implemented as MATLAB m-file code. In this way, the overall framework was tested and the experimental outcomes were analyzed. From these outcomes, it is concluded that our closed loop cache aware adaptive scheduling framework successfully drives the co-runner cache dependent thread instruction count to the co-runner independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
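The thesis pairs an RLS parameter estimator with pole-placement control. As a minimal illustration of the estimation step only, the sketch below shows a plain recursive least squares update with a forgetting factor (not the QR-factorised variant used in the thesis), applied to a synthetic regression standing in for cache-miss statistics.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.98):
    """One recursive least squares step with forgetting factor lam.
    theta: parameter estimate (n,1); P: inverse correlation matrix (n,n);
    x: regressor vector (n,); y: new scalar observation."""
    x = x.reshape(-1, 1)
    k = (P @ x) / (lam + (x.T @ P @ x).item())   # gain vector
    err = y - (x.T @ theta).item()               # a priori prediction error
    theta = theta + k * err                      # parameter update
    P = (P - k @ x.T @ P) / lam                  # covariance update
    return theta, P

# Track a synthetic model y = x . theta_true + noise, observation by observation
rng = np.random.default_rng(1)
n = 3
theta, P = np.zeros((n, 1)), np.eye(n) * 1000.0
theta_true = np.array([0.5, 1.2, -0.3])
for _ in range(500):
    x = rng.random(n)
    y = x @ theta_true + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, x, y)
print("estimated parameters:", theta.ravel())   # converges close to theta_true
```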
Abstract:
Detecting query reformulations within a session by a Web searcher is an important area of research for designing more helpful searching systems and targeting content to particular users. Methods explored by other researchers include both qualitative approaches (i.e., the use of human judges to manually analyze query patterns, usually on small samples) and nondeterministic algorithms, which typically use large amounts of training data to predict query modification during sessions. In this article, we explore three alternative methods for detection of session boundaries. All three methods are computationally straightforward and therefore easily implemented for detection of session changes. We examine 2,465,145 interactions from 534,507 users of Dogpile.com on May 6, 2005. We compare session analysis using (a) Internet Protocol address and cookie; (b) Internet Protocol address, cookie, and a temporal limit on intrasession interactions; and (c) Internet Protocol address, cookie, and query reformulation patterns. Overall, our analysis shows that defining sessions by query reformulation along with Internet Protocol address and cookie provides the best measure, resulting in an 82% increase in the count of sessions. Regardless of the method used, the mean session length was fewer than three queries, and the mean session duration was less than 30 min. Searchers most often modified their query by changing query terms (nearly 23% of all query modifications) rather than adding or deleting terms. Implications are that for measuring searching traffic, unique sessions may be a better indicator than the common metric of unique visitors. This research also sheds light on the more complex aspects of Web searching involving query modifications and may lead to advances in searching tools.
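To illustrate how computationally simple such session detection is, the sketch below implements the second method (Internet Protocol address, cookie, and a temporal cutoff on intrasession interactions) on a few hypothetical log records. The 30-minute cutoff and the records are assumptions made for the example, not data from the study.

```python
from datetime import datetime, timedelta

# Each interaction: (ip, cookie, timestamp, query) - hypothetical records
interactions = [
    ("1.2.3.4", "c1", datetime(2005, 5, 6, 10, 0), "jaguar"),
    ("1.2.3.4", "c1", datetime(2005, 5, 6, 10, 5), "jaguar car"),
    ("1.2.3.4", "c1", datetime(2005, 5, 6, 11, 30), "weather brisbane"),
]

def count_sessions(log, gap=timedelta(minutes=30)):
    """A new session starts when the (IP, cookie) pair has not been seen
    before or when the time since its last interaction exceeds the cutoff."""
    sessions, last_seen = 0, {}
    for ip, cookie, ts, _query in log:
        key = (ip, cookie)
        if key not in last_seen or ts - last_seen[key] > gap:
            sessions += 1
        last_seen[key] = ts
    return sessions

print(count_sessions(interactions))  # -> 2 sessions for this toy log
```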
Abstract:
The case presents an ethical dilemma faced by a Public Service Director that could affect his career, the career of his boss, and the career of the governor of a state. There is a strong need for ethical leaders in this changing global organizational world, where the headlines are filled with stories of private sector and public sector leaders who have made serious ethical and moral compromises. It is easy to follow ethical leaders whom you can count on to do what is right, and difficult to follow those who will do what is expedient or personally beneficial. However, ethical leadership is not always black and white, as this case portrays. Difficult decisions must be made where it may not always be clear what to do. The names in the case have been changed, although the situation is a real one.
Abstract:
The introduction of the Australian curriculum, the use of standardised testing (e.g. NAPLAN) and the My School website are couched in a context of accountability. This circumstance has stimulated, and in some cases renewed, a range of boundaries in Australian Education. The consequences that arise from standardised testing have accentuated the boundaries produced by social reproduction in education, which has led to an increase in the numbers of students disengaging from mainstream education and applying for enrolment at the Edmund Rice Education Australia Flexible Learning Centre Network (EREAFLCN). Boundaries are created for many young people who are denied access to credentials and certification as a result of being excluded from, or in some way disengaging from, standardised education and testing. Young people who participate at the EREAFLCN arrive with a variety of forms of cultural capital that are not valued in current education and employment fields. This is not to say that these young people’s different forms of cultural capital have no value, but rather that such funds of knowledge, repertoires and cultural capital are not valued by the majority of powerful agents in educational and employment fields. How then can the qualitative value of traditionally unorthodox - yet often intricate, ingenious, and astute - versions of cultural capital evident in the habitus of many young people be made to count, be recognised, be valuated? Can a process of educational assessment be a field of capital exchange and a space which breaches boundaries through a valuating process? This paper reports on the development of an innovative approach to assessment in an alternative education institution designed for the re-engagement of ‘at risk’ youth who have left formal schooling. A case study approach has been used to document the engagement of six young people, with an educational approach described as assessment for learning as a field of exchange across two sites in the EREAFLCN. In order to capture the broad range of students’ cultural and social capital, an electronic portfolio system (EPS) is under trial. The model draws on categories from sociological models of capital and reconceptualises the eportfolio as a sociocultural zone of learning and development. Results from the trial show a general tendency towards engagement with the EPS and potential for the attainment of socially valued cultural capital in the form of school credentials. In this way restrictive boundaries can be breached and a more equitable outcome achieved for many young Australians.
Abstract:
The introduction of the Australian curriculum, the use of standardised testing (e.g. NAPLAN) and the My School website have stimulated and in some cases renewed a range of boundaries for young people in Australian Education. Standardised testing has accentuated social reproduction in education, with an increase in the numbers of students disengaging from mainstream education and applying for enrolment at the Edmund Rice Education Australia Flexible Learning Centre Network (EREAFLCN). Many young people are denied access to credentials and certification as they become excluded from standardised education and testing. The creativity and skills of marginalised youth are often evidence of general capabilities and yet do not appear to be recognised in mainstream educational institutions when standardised approaches are adopted. Young people who participate at the EREAFLCN arrive with a variety of forms of cultural capital, frequently utilising general capabilities, which are not able to be valued in current education and employment fields. This is not to say that these young people’s different forms of cultural capital have no value, but rather that such funds of knowledge, repertoires and cultural capital are not valued by the majority of powerful agents in educational and employment fields. How then can the inherent value of traditionally unorthodox - yet often intricate, ingenious, and astute - versions of cultural capital evident in the habitus of many young people be made to count, be recognised, be valuated? Can a process of educational assessment be a field of capital exchange and a space which crosses boundaries through a valuating process? This paper reports on the development of an innovative approach to assessment in an alternative education institution designed for the re-engagement of ‘at risk’ youth who have left formal schooling. A case study approach has been used to document the engagement of six young people, with an educational approach described as assessment for learning as a field of exchange across two sites in the EREAFLCN. In order to capture the broad range of students’ cultural and social capital, an electronic portfolio system (EPS) is under trial. The model draws on categories from sociological models of capital and reconceptualises the eportfolio as a sociocultural zone of learning and development. Results from the trial show a general tendency towards engagement with the EPS and potential for the attainment of socially valued cultural capital in the form of school credentials. In this way restrictive boundaries can be breached and a more equitable outcome achieved for many young Australians.
Abstract:
Standardised testing does not recognise the creativity and skills of marginalised youth. Young people who come to the Edmund Rice Education Australia Flexible Learning Centre Network (EREAFLCN) in Australia arrive with forms of cultural capital that are not valued in the field of education and employment. This is not to say that young people’s different modes of cultural capital have no value, but rather that such funds of knowledge, repertoires and cultural capital are not valued by the powerful agents in educational and employment fields. The forms of cultural capital which are valued by these institutions are measurable in certain structured formats which are largely inaccessible for what is seen in Australia to be a growing segment of the community. How then can the inherent value of traditionally unorthodox - yet often intricate, adroit, ingenious, and astute - versions of cultural capital evident in the habitus of many young people be made to count, be recognised, be valuated? Can a process of educational assessment be used as a marketplace, a field of capital exchange? This paper reports on the development of an innovative approach to assessment in an alternative education institution designed for the re-engagement of ‘at risk’ youth who have left formal schooling. In order to capture the broad range of students’ cultural and social capital, an electronic portfolio system (EPS) is under trial. The model draws on categories from sociological models of capital and reconceptualises the eportfolio as a sociocultural zone of learning and development. Initial results from the trial show a general tendency towards engagement with the EPS and potential for the attainment of socially valued cultural capital in the form of school credentials.
Abstract:
Background: Although risk of human papillomavirus (HPV)–associated cancers of the anus, cervix, oropharynx, penis, vagina, and vulva is increased among persons with AIDS, the etiologic role of immunosuppression is unclear and incidence trends for these cancers over time, particularly after the introduction of highly active antiretroviral therapy in 1996, are not well described. Methods: Data on 499 230 individuals diagnosed with AIDS from January 1, 1980, through December 31, 2004, were linked with cancer registries in 15 US regions. Risk of in situ and invasive HPV-associated cancers, compared with that in the general population, was measured by use of standardized incidence ratios (SIRs) and 95% confidence intervals (CIs). We evaluated the relationship of immunosuppression with incidence during the period of 4–60 months after AIDS onset by use of CD4 T-cell counts measured at AIDS onset. Incidence during the 4–60 months after AIDS onset was compared across three periods (1980–1989, 1990–1995, and 1996–2004). All statistical tests were two-sided. Results: Among persons with AIDS, we observed statistically significantly elevated risk of all HPV-associated in situ (SIRs ranged from 8.9, 95% CI = 8.0 to 9.9, for cervical cancer to 68.6, 95% CI = 59.7 to 78.4, for anal cancer among men) and invasive (SIRs ranged from 1.6, 95% CI = 1.2 to 2.1, for oropharyngeal cancer to 34.6, 95% CI = 30.8 to 38.8, for anal cancer among men) cancers. During 1996–2004, low CD4 T-cell count was associated with statistically significantly increased risk of invasive anal cancer among men (relative risk [RR] per decline of 100 CD4 T cells per cubic millimeter = 1.34, 95% CI = 1.08 to 1.66, P = .006) and non–statistically significantly increased risk of in situ vagina or vulva cancer (RR = 1.52, 95% CI = 0.99 to 2.35, P = .055) and of invasive cervical cancer (RR = 1.32, 95% CI = 0.96 to 1.80, P = .077). Among men, incidence (per 100 000 person-years) of in situ and invasive anal cancer was statistically significantly higher during 1996–2004 than during 1990–1995 (61% increase for in situ cancers, 18.3 cases vs 29.5 cases, respectively; RR = 1.71, 95% CI = 1.24 to 2.35, P < .001; and 104% increase for invasive cancers, 20.7 cases vs 42.3 cases, respectively; RR = 2.03, 95% CI = 1.54 to 2.68, P < .001). Incidence of other cancers was stable over time. Conclusions: Risk of HPV-associated cancers was elevated among persons with AIDS and increased with increasing immunosuppression. The increasing incidence for anal cancer during 1996–2004 indicates that prolonged survival may be associated with increased risk of certain HPV-associated cancers.
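For readers unfamiliar with the standardized incidence ratio used above, the sketch below shows one common way to compute an SIR with an exact Poisson confidence interval for the observed count. The numbers are invented for illustration, and the exact-CI construction shown is a standard convention rather than necessarily the method used in the study.

```python
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    """Standardized incidence ratio (observed / expected) with an exact
    Poisson confidence interval on the observed count."""
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 / expected if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2 / expected
    return observed / expected, lower, upper

# Hypothetical example: 120 observed cases vs 3.5 expected from population rates
sir, lo, hi = sir_with_ci(observed=120, expected=3.5)
print(f"SIR = {sir:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```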
Abstract:
Introduction—Human herpesvirus 8 (HHV8) is necessary for Kaposi sarcoma (KS) to develop, but whether peripheral blood viral load is a marker of KS burden (total number of KS lesions), KS progression (the rate of eruption of new KS lesions), or both is unclear. We investigated these relationships in persons with AIDS. Methods—Newly diagnosed patients with AIDS-related KS attending Mulago Hospital, in Kampala, Uganda, were assessed for KS burden and progression by questionnaire and medical examination. Venous blood samples were taken for HHV8 load measurements by PCR. Associations were examined with odds ratios (ORs) and 95% confidence intervals (CIs) from logistic regression models and with t-tests. Results—Among 74 patients (59% men), median age was 34.5 years (interquartile range [IQR], 28.5-41). HHV8 DNA was detected in 93% and quantified in 77% of patients. Median viral load was 3.8 log10 per 10^6 peripheral blood cells (IQR 3.4-5.0) and was higher in men than women (4.4 vs. 3.8 logs; p=0.04), in patients with a faster (>20 lesions per year) than slower rate of KS lesion eruption (4.5 vs. 3.6 logs; p<0.001), and higher, but not significantly so, among patients with more (>median [20] KS lesions) than fewer KS lesions (4.4 vs. 4.0 logs; p=0.16). HHV8 load was unrelated to CD4 lymphocyte count (p=0.23). Conclusions—We show a significant association of HHV8 load in peripheral blood with the rate of eruption of KS lesions, but not with total lesion count. Our results suggest that viral load increases concurrently with the development of new KS lesions.
Abstract:
The Poisson distribution has often been used for count data such as accident data. The Negative Binomial (NB) distribution has been adopted for count data to address the over-dispersion problem. However, the Poisson and NB distributions are incapable of taking into account unobserved heterogeneities due to spatial and temporal effects in accident data. To overcome this problem, Random Effect models have been developed. Another challenge with existing traffic accident prediction models is the presence of excess zero accident observations in some accident data. Although the Zero-Inflated Poisson (ZIP) model is capable of handling the dual-state system in accident data with excess zero observations, it does not accommodate the within-location and between-location correlation heterogeneities which are the basic motivation for Random Effect models. This paper proposes an effective way of fitting a ZIP model with location-specific random effects; Bayesian analysis is recommended for model calibration and assessment.
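A minimal sketch of the kind of model described here, a ZIP likelihood with a location-specific random effect on the Poisson rate, calibrated by Bayesian sampling, is shown below using PyMC. The simulated data, priors and exact model structure are illustrative assumptions, not the paper's specification.

```python
import numpy as np
import pymc as pm

# Hypothetical panel of yearly crash counts at n_sites locations over n_years
rng = np.random.default_rng(2)
n_sites, n_years = 20, 5
site_idx = np.repeat(np.arange(n_sites), n_years)
y = rng.poisson(1.5, n_sites * n_years) * rng.binomial(1, 0.7, n_sites * n_years)

with pm.Model() as zip_re:
    # Location-specific random effect on the log crash rate
    mu0 = pm.Normal("mu0", 0.0, 2.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    u = pm.Normal("u", 0.0, sigma, shape=n_sites)
    lam = pm.math.exp(mu0 + u[site_idx])
    # psi: probability of being in the "at-risk" (non-structural-zero) state
    psi = pm.Beta("psi", alpha=2.0, beta=2.0)
    pm.ZeroInflatedPoisson("crashes", psi=psi, mu=lam, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```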
Abstract:
This study proposes a full Bayes (FB) hierarchical modeling approach to traffic crash hotspot identification. The FB approach is able to account for all uncertainties associated with crash risk and various risk factors by estimating a posterior distribution of site safety, on which various ranking criteria can be based. Moreover, through hierarchical model specification, the FB approach is able to flexibly take into account various heterogeneities of crash occurrence due to spatiotemporal effects on traffic safety. Using Singapore intersection crash data (1997-2006), an empirical evaluation was conducted to compare the proposed FB approach with state-of-the-art approaches. Results show that the Bayesian hierarchical models accommodating site-specific effects and serial correlation have better goodness-of-fit than non-hierarchical models. Furthermore, all model-based approaches performed significantly better in safety ranking than the naive approach using raw crash counts. The FB hierarchical models were found to significantly outperform the standard empirical Bayes (EB) approach in correctly identifying hotspots.
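To make the ranking comparison concrete, here is a small NumPy sketch contrasting a naive raw-count ranking with rankings based on posterior summaries of site-level crash rates. The "posterior draws" are produced by a simple conjugate stand-in rather than a fitted hierarchical model, so the sketch only illustrates how the criteria differ, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_draws = 50, 2000

# Hypothetical observed crash counts and posterior draws of each site's rate
# (a conjugate Gamma-Poisson posterior is used here purely as a stand-in for
#  draws that would come from a fitted full-Bayes hierarchical model)
true_rate = rng.gamma(shape=2.0, scale=1.0, size=n_sites)
raw_counts = rng.poisson(true_rate)
theta_draws = rng.gamma(shape=raw_counts + 2.0, scale=0.5, size=(n_draws, n_sites))

threshold = 2.0  # crash rate treated as "hotspot" level (illustrative)

naive_rank = np.argsort(-raw_counts)                  # rank by raw counts
mean_rank = np.argsort(-theta_draws.mean(axis=0))     # rank by posterior mean
p_exceed = (theta_draws > threshold).mean(axis=0)     # P(rate > threshold | data)
prob_rank = np.argsort(-p_exceed)

print("top 5 sites (raw counts):      ", naive_rank[:5])
print("top 5 sites (posterior mean):  ", mean_rank[:5])
print("top 5 sites (exceedance prob.):", prob_rank[:5])
```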