969 results for technology standard
Abstract:
Exceeding the speed limit and driving too fast for the conditions are regularly cited as significant contributing factors in traffic crashes, particularly fatal and serious injury crashes. Despite an extensive body of research highlighting the relationship between increased vehicle speeds and crash risk and severity, speeding remains a pervasive behaviour on Australian roads. The development of effective countermeasures designed to reduce the prevalence of speeding behaviour requires that this behaviour is well understood. The primary aim of this program of research was to develop a better understanding of the influence of drivers’ perceptions and attitudes toward police speed enforcement on speeding behaviour. Study 1 employed focus group discussions with 39 licensed drivers to explore the influence of perceptions relating to specific characteristics of speed enforcement policies and practices on drivers’ attitudes towards speed enforcement. Three primary factors were identified as being most influential: site selection; visibility; and automaticity (i.e., whether the enforcement approach is automated/camera-based or manually operated). Perceptions regarding these enforcement characteristics were found to influence attitudes regarding the perceived legitimacy and transparency of speed enforcement. Moreover, misperceptions regarding speed enforcement policies and practices appeared to also have a substantial impact on attitudes toward speed enforcement, typically in a negative direction. These findings have important implications for road safety given that prior research has suggested that the effectiveness of speed enforcement approaches may be reduced if efforts are perceived by drivers as being illegitimate, such that they do little to encourage voluntary compliance. Study 1 also examined the impact of speed enforcement approaches varying in the degree of visibility and automaticity on self-reported willingness to comply with speed limits. These discussions suggested that all of the examined speed enforcement approaches (see Section 1.5 for more details) generally showed potential to reduce vehicle speeds and encourage compliance with posted speed limits. Nonetheless, participant responses suggested a greater willingness to comply with approaches operated in a highly visible manner, irrespective of the corresponding level of automaticity of the approach. While less visible approaches were typically associated with poorer rates of driver acceptance (e.g., perceived as “sneaky” and “unfair”), participants reported that such approaches would likely encourage long-term and network-wide impacts on their own speeding behaviour, as a function of the increased unpredictability of operations and increased direct (specific deterrence) and vicarious (general deterrence) experiences with punishment. Participants in Study 1 suggested that automated approaches, particularly when operated in a highly visible manner, do little to encourage compliance with speed limits except in the immediate vicinity of the enforcement location. While speed cameras have been criticised on such grounds in the past, such approaches can still have substantial road safety benefits if implemented in high-risk settings. Moreover, site-learning effects associated with automated approaches can also be argued to be a beneficial by-product of enforcement, such that behavioural modifications are achieved even in the absence of actual enforcement. 
Conversely, manually operated approaches were reported to be associated with more network-wide impacts on behaviour. In addition, the reported acceptance of such methods was high, due to the increased swiftness of punishment, the ability to police additional illegal driving behaviours and the salutary influence of increased face-to-face contact with authority. Study 2 involved a quantitative survey conducted with 718 licensed Queensland drivers from metropolitan and regional areas. The survey sought to further examine the influence of the visibility and automaticity of operations on self-reported likelihood and duration of compliance. Overall, the results from Study 2 corroborated those of Study 1. All examined approaches were again found to encourage compliance with speed limits, such that all approaches could be considered “effective”. Nonetheless, significantly greater self-reported likelihood and duration of compliance were associated with visibly operated approaches, irrespective of the corresponding automaticity of the approach. In addition, the impact of automaticity was influenced by visibility, such that significantly greater self-reported likelihood of compliance was associated with manually operated approaches, but only when operated in a less visible fashion. Conversely, manually operated approaches were associated with significantly greater durations of self-reported compliance, but only when operated in a highly visible manner. Taken together, the findings from Studies 1 and 2 suggest that enforcement efforts, irrespective of their visibility or automaticity, generally encourage compliance with speed limits. However, the duration of these effects on behaviour upon removal of the enforcement efforts remains questionable and represents an area where current speed enforcement practices could possibly be improved. Overall, it appears that identifying the optimal mix of enforcement operations, implementing them at a sufficient intensity and increasing the unpredictability of enforcement efforts (e.g., greater use of less visible approaches, random scheduling) are critical elements of success. Hierarchical multiple regression analyses were also performed in Study 2 to investigate the punishment-related and attitudinal constructs that influence self-reported frequency of speeding behaviour. The research was based on the theoretical framework of expanded deterrence theory, augmented with three particular attitudinal constructs. Specifically, previous research examining the influence of attitudes on speeding behaviour has typically focussed only on attitudes toward speeding in general. This research sought to more comprehensively explore the influence of attitudes by also individually measuring and analysing the influence of attitudes toward speed enforcement and attitudes toward the appropriateness of speed limits on speeding behaviour. Consistent with previous research, a number of classical and expanded deterrence theory variables were found to significantly predict self-reported frequency of speeding behaviour. Significantly greater speeding behaviour was typically reported by those participants who perceived punishment associated with speeding to be less certain, who reported more frequent use of punishment avoidance strategies and who reported greater direct experiences with punishment. A number of interesting differences in the significant predictors among males and females, as well as younger and older drivers, were reported. 
Specifically, classical deterrence theory variables appeared most influential on the speeding behaviour of males and younger drivers, while expanded deterrence theory constructs appeared more influential for females. These findings have important implications for the development and implementation of speeding countermeasures. Of the attitudinal factors, significantly greater self-reported frequency of speeding behaviour was reported among participants who held more favourable attitudes toward speeding and who perceived speed limits to be set inappropriately low. Disappointingly, attitudes toward speed enforcement were found to have little influence on reported speeding behaviour, over and above the other deterrence theory and attitudinal constructs. Indeed, the relationship between attitudes toward speed enforcement and self-reported speeding behaviour was completely accounted for by attitudes toward speeding. Nonetheless, the complexity of attitudes toward speed enforcement is not yet fully understood and future research should more comprehensively explore the measurement of this construct. Finally, given the wealth of evidence (both in general and emerging from this program of research) highlighting the association between punishment avoidance and speeding behaviour, Study 2 also sought to investigate the factors that influence the self-reported propensity to use punishment avoidance strategies. A standard multiple regression analysis was conducted for exploratory purposes only. The results revealed that punishment-related and attitudinal factors significantly predicted approximately one-fifth of the variance in the dependent variable. The perceived ability to avoid punishment, vicarious punishment experience, vicarious punishment avoidance and attitudes toward speeding were all significant predictors. Future research should examine these relationships more thoroughly and identify additional influential factors. In summary, the current program of research has a number of implications for road safety and speed enforcement policy and practice decision-making. The research highlights a number of potential avenues for the improvement of public education regarding enforcement efforts and provides a number of insights into punishment avoidance behaviours. In addition, the research adds strength to the argument that enforcement approaches should not only demonstrate effectiveness in achieving key road safety objectives, such as reduced vehicle speeds and associated crashes, but also strive to be transparent and legitimate, such that voluntary compliance is encouraged. A number of potential strategies are discussed (e.g., point-to-point speed cameras, intelligent speed adaptation). The correct mix and intensity of enforcement approaches appears critical for achieving optimum effectiveness from enforcement efforts, as do enhancements in the unpredictability of operations and the swiftness of punishment. Achievement of these goals should increase both the general and specific deterrent effects associated with enforcement through an increased perceived risk of detection and a more balanced exposure to punishment and punishment avoidance experiences.
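As a rough illustration of the hierarchical (blockwise) regression approach described above, the following Python sketch fits a deterrence-theory block first and then adds attitudinal constructs, comparing the change in R-squared. The data are synthetic and the variable names are hypothetical stand-ins for the survey constructs, not the thesis's actual measures or results.

```python
# Illustrative sketch of a hierarchical (blockwise) multiple regression:
# deterrence-theory variables entered first, attitudinal constructs added
# in a second block, with the change in R-squared showing their added value.
# Synthetic data; variable names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 718
df = pd.DataFrame({
    "certainty": rng.normal(size=n),       # perceived certainty of punishment
    "avoidance": rng.normal(size=n),       # use of punishment-avoidance strategies
    "att_speeding": rng.normal(size=n),    # attitudes toward speeding
    "att_limits": rng.normal(size=n),      # attitudes toward speed limits
})
df["speeding_freq"] = (-0.3 * df.certainty + 0.3 * df.avoidance
                       + 0.4 * df.att_speeding + rng.normal(scale=1.0, size=n))

def fit(block_cols):
    X = sm.add_constant(df[block_cols])
    return sm.OLS(df["speeding_freq"], X).fit()

block1 = fit(["certainty", "avoidance"])                                # deterrence block
block2 = fit(["certainty", "avoidance", "att_speeding", "att_limits"])  # + attitudes
print(f"Block 1 R^2 = {block1.rsquared:.3f}")
print(f"Block 2 R^2 = {block2.rsquared:.3f} (change = {block2.rsquared - block1.rsquared:.3f})")
```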
Abstract:
Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user-defined precisions suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.
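To make the idea of user-defined precision concrete, the sketch below emulates in software a reduced-precision floating-point format with a chosen number of exponent and mantissa bits, and checks the relative error of a simple library function against double precision. It is an illustrative approximation under assumed bit widths, not the paper's hardware designs.

```python
# Illustrative sketch (not from the paper): emulating a user-defined floating-point
# format in software to estimate the accuracy a custom datapath with E exponent bits
# and M mantissa bits might give. Bit widths and the test value are hypothetical.
import math

def quantise(x, exp_bits=6, man_bits=10):
    """Round x to a sign/exponent/mantissa format with the given widths."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    m, e = math.frexp(abs(x))           # abs(x) = m * 2**e, with 0.5 <= m < 1
    bias = 2 ** (exp_bits - 1) - 1
    if e - 1 > bias:                    # overflow for this exponent width
        return sign * math.inf
    if e - 1 < -bias:                   # underflow: flush to zero (no subnormals)
        return 0.0
    scale = 2 ** man_bits
    m_q = round(m * scale) / scale      # round mantissa to man_bits fractional bits
    return sign * math.ldexp(m_q, e)

# Accuracy check for a library function (here sqrt) at reduced precision.
x = 2.0
approx = quantise(math.sqrt(quantise(x)))
rel_err = abs(approx - math.sqrt(x)) / math.sqrt(x)
print(f"sqrt(2) at 6/10-bit precision: {approx:.6f}, relative error {rel_err:.2e}")
```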
Abstract:
This study demonstrates how to study fashion journalism as its own field of journalism, akin to other journalism beats such as politics, sports and health. There is scope here for comment on the co-evolution of fashion and journalism, leading to ‘fashion journalism’ developing as a distinct field of study in its own right. This research contributes more generally to the field of media and cultural studies by developing the three-part producer/text/reader model, which is the standard ‘media studies’ analytical framework. The study of fashion media from a cultural studies perspective acknowledges that cultural studies has pioneered the formal study of both journalism and fashion, for instance in studies of women’s magazines, but has not brought the two areas together sufficiently. What little work has been done, however, has allowed theorists to explore how magazines promote feminism and form culture, which acts as a step in cementing fashion’s theoretical importance. This thesis has contributed to cultural studies by showing that the relationship between the corporate industries of fashion and media (producer) and the active audience (reader) can be rethought and brought up to date for the more interactive era of the 21st century.
Abstract:
In Australia, Vocational Education and Training (VET) programs are delivered in a variety of settings. Students can be enrolled in a course at a high school, a technical institution, a private training provider or their place of employment. Recognition of prior learning, on-the-job training and industry partnerships are strong factors supporting the change of delivery. The curriculum content within these programs has also changed. For example, within the Business Services programs, the prerequisite and corequisite skill of touch keyboarding to an Australian Standard has moved from a core requirement in the 1990s to an elective requirement in the 2000s. Where a base skill becomes an elective skill, how does this affect the performance and outcomes for the learner, educator, employer and society as a whole? This paper will explore these issues and investigate the current position of standards within the VET curriculum today.
Abstract:
Background: Total hip arthroplasty (THA) is a commonly performed procedure and numbers are increasing with ageing populations. One of the most serious complications of THA is surgical site infection (SSI), caused by pathogens entering the wound during the procedure. SSIs are associated with a substantial burden for health services, increased mortality and reduced functional outcomes in patients. Numerous approaches to preventing these infections exist, but there is no gold standard in practice and the cost-effectiveness of alternative strategies is largely unknown.
Objectives: The aim of this project was to evaluate the cost-effectiveness of strategies claiming to reduce deep surgical site infections following total hip arthroplasty in Australia. The objectives were:
1. Identification of competing strategies or combinations of strategies that are clinically relevant to the control of SSI related to hip arthroplasty
2. Evidence synthesis and pooling of results to assess the volume and quality of evidence claiming to reduce the risk of SSI following total hip arthroplasty
3. Construction of an economic decision model incorporating cost and health outcomes for each of the identified strategies
4. Quantification of the effect of uncertainty in the model
5. Assessment of the value of perfect information among model parameters to inform future data collection
Methods: The literature relating to SSI in THA was reviewed, in particular to establish definitions of these concepts, understand mechanisms of aetiology and microbiology, risk factors, diagnosis and consequences, as well as to give an overview of existing infection prevention measures. Published economic evaluations on this topic were also reviewed and limitations for Australian decision-makers identified. A Markov state-transition model was developed for the Australian context and subsequently validated by clinicians. The model was designed to capture key events related to deep SSI occurring within the first 12 months following primary THA. Relevant infection prevention measures were selected by reviewing clinical guideline recommendations combined with expert elicitation. Strategies selected for evaluation were the routine use of pre-operative antibiotic prophylaxis (AP) versus no use of antibiotic prophylaxis (No AP), or in combination with antibiotic-impregnated cement (AP & ABC) or laminar air operating rooms (AP & LOR). The best available evidence for clinical effect size and utility parameters was harvested from the medical literature using reproducible methods. Queensland hospital data were extracted to inform patients’ transitions between model health states and related costs captured in assigned treatment codes. Costs related to infection prevention were derived from reliable hospital records and expert opinion. Uncertainty of model input parameters was explored in probabilistic sensitivity analyses and scenario analyses, and the value of perfect information was estimated.
Results: The cost-effectiveness analysis was performed from a health services perspective using a hypothetical cohort of 30,000 THA patients aged 65 years. The baseline rate of deep SSI was 0.96% within one year of a primary THA. The routine use of antibiotic prophylaxis (AP) was highly cost-effective and resulted in cost savings of over $1.6m whilst generating an extra 163 QALYs (without consideration of uncertainty). 
Deterministic and probabilistic analysis (considering uncertainty) identified antibiotic prophylaxis combined with antibiotic-impregnated cement (AP & ABC) as the most cost-effective strategy. Using AP & ABC generated the highest net monetary benefit (NMB) and an incremental $3.1m NMB compared to only using antibiotic prophylaxis. There was a very low error probability (<5%) that this strategy might not have the largest NMB. Not using antibiotic prophylaxis (No AP) or using antibiotic prophylaxis combined with laminar air operating rooms (AP & LOR) resulted in worse health outcomes and higher costs. Sensitivity analyses showed that the model was sensitive to the initial cohort starting age and the additional costs of ABC, but the best strategy did not change, even for extreme values. The cost-effectiveness improved for a higher proportion of cemented primary THAs and higher baseline rates of deep SSI. The value of perfect information indicated that no additional research is required to support the model conclusions.
Conclusions: Preventing deep SSI with antibiotic prophylaxis and antibiotic-impregnated cement has been shown to improve health outcomes among hospitalised patients, save lives and enhance resource allocation. By implementing a more beneficial infection control strategy, scarce health care resources can be used more efficiently to the benefit of all members of society. The results of this project provide Australian policy makers with key information about how to efficiently manage the risk of infection in THA.
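The ranking logic behind the net monetary benefit comparison can be sketched as follows; the willingness-to-pay threshold, costs and QALY totals are hypothetical placeholders, used only to show how NMB = lambda x QALYs - cost orders the four strategies.

```python
# A minimal sketch of a net-monetary-benefit comparison. The cost, QALY and
# willingness-to-pay figures are hypothetical placeholders, not the model's inputs;
# they only illustrate how NMB ranks strategies at a chosen threshold.
WTP = 50_000  # assumed willingness to pay per QALY (AUD)

strategies = {                 # expected cost and QALYs per cohort (hypothetical values)
    "No AP":    {"cost": 310_000_000, "qalys": 239_000},
    "AP":       {"cost": 308_000_000, "qalys": 239_150},
    "AP & ABC": {"cost": 307_500_000, "qalys": 239_200},
    "AP & LOR": {"cost": 315_000_000, "qalys": 239_140},
}

def nmb(s):
    return WTP * s["qalys"] - s["cost"]

for name in sorted(strategies, key=lambda k: nmb(strategies[k]), reverse=True):
    print(f"{name:9s} NMB = ${nmb(strategies[name]):,}")
# The strategy with the highest NMB at the chosen threshold is preferred; the
# incremental NMB of AP & ABC over AP is nmb(AP & ABC) - nmb(AP).
```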
Abstract:
This study contributes to the understanding of the role of financial reserves in sustaining nonprofit organisations. Recognising the limited recent Australian research in the area of nonprofit financial vulnerability, it specifically examines financial reserves held by signatories to the Code of Conduct of the Australian Council for International Development (ACFID) for the years 2006 to 2010. As this period includes the Global Financial Crisis, it presents a unique opportunity to observe the role of savings in a period of heightened financial threats to sustainability. The need for nonprofit entities to maintain reserves, while appearing intuitively evident, is neither unanimously accepted nor supported by established theoretic constructs. Some early frameworks attempt to explain the savings behaviour of nonprofit organisations and its role in organisational sustainability. Where researchers have considered the issue, its treatment has usually been either purely descriptive or, alternatively, peripheral to a broader attempt to predict financial vulnerability. Given the importance of nonprofit entities to civil society, the sustainability of these organisations during times of economic contraction, such as the recent Global Financial Crisis, is a significant issue. Widespread failure of nonprofits, or even the perception of failure, would not only directly affect those individuals who access their public goods and services, but would also undermine public confidence in both government and the sector’s ability to manage and achieve its purpose. This study attempts to ‘shine a light’ on the paradox inherent in considering nonprofit savings. On the one hand, a prevailing public view is that nonprofit organisations should not hoard and, indeed, should spend all of their funds on the direct achievement of their purposes. Against this is the common-sense need for a financial buffer, if only to allow for the day-to-day contingencies of pay rises and cost increases. At the entity level, the extent of reserves accumulated (or not) is an important consideration for Management Boards. The general public are also interested in knowing the level of funds held by nonprofits, as a measure of both their commitment to purpose and as an indicator of their effectiveness. There is a need to communicate the level and prevalence of reserve holdings, balancing the prudent hedging of uncertainty against a sense of resource hoarding in the mind of donors. Finally, funders (especially governments) are interested in knowing the appropriate level of reserves to facilitate the ongoing sustainability of the sector. This is particularly so where organisations are involved in the provision of essential public goods and services. At a scholarly level, the study seeks to provide a rationale for this behaviour within the context of appropriate theory. At a practical level, the study seeks to give an indication of the drivers for savings and the actual levels of reserves held within the sector studied, as well as an indication as to whether the presence of reserves did mitigate the effects of financial turmoil during the Global Financial Crisis. The argument is not whether there is a need to ensure the sustainability of nonprofits, but rather how it is to be done and whether the holding of reserves (net assets) is an essential element in achieving this. 
While the study offers no simple answers, it does appear that the organisations studied present as two groups: the ‘savers’, who build reserves and keep ‘money in the bank’, and the ‘spender-deliverers’, who put their resources ‘on the ground’. To progress an understanding of this dichotomy, the study suggests a need to move from its current approach to one that more closely explores, through accounts-based empirical work, donor attitudes and nonprofit Management Board strategy.
Abstract:
The expansion of city-regions, rising standards of living and changing lifestyles have collectively led to an increase in housing demand. New residential areas are encroaching onto the city fringes, including suburban and greenfield areas. Large and small developers are actively building houses, ranging from a few blocks to master-planned style projects. These residential developments, particularly in major urban areas, represent a large portion of urban land use in Malaysia and have thus become a major contributor to overall urban sustainability. Three main types form the mainstream of, and are integral to, contemporary urban residential development: subdivision developments, piecemeal developments and master-planned developments. Many new master-planned developments market themselves as environmentally friendly and provide layouts that encompass sustainable design and development. To date, however, there have been limited studies conducted to examine such claims or to ascertain which of these three residential development layouts is more sustainable. To fill this gap, this research was undertaken to develop a framework for assessing the level of sustainability of residential developments, focusing on their layouts at the neighbourhood level. The development of this framework adopted a mixed-method research strategy and embedded research design to achieve the study aim and objectives. Data were collected from two main sources: quantitative data were gathered from a three-round Delphi survey and spatial data from layout plans. Sample respondents for the surveys were selected from among experts in the field of the built environment, both from Malaysia and internationally. As for spatial data, three case studies – master-planned, piecemeal and subdivision developments, representing different types of neighbourhood development in Malaysia – were selected. Prior to application to the case studies, the framework was validated to ascertain its robustness for application in Malaysia. Following the application of the framework to the three case studies, the results revealed that the master-planned development achieved a higher level of sustainability than the piecemeal and subdivision developments. The results generated from this framework are expected to provide evidence to policy makers and development agencies, as well as an awareness of the level of sustainability and the collective efforts required for developing sustainable neighbourhoods. Continuous assessment can facilitate a comparison of sustainability over time for neighbourhoods as a means to monitor changes in the level of sustainability. In addition, the framework is able to identify any particular indicator (issue) that causes a significant impact on sustainability.
Abstract:
The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968): in its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision involving a choice of the extent of their income to report to tax authorities, given a certain institutional environment represented by parameters such as the probability of detection and the penalties faced in the event the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has some critical limitations. Specifically, it indicates a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view towards addressing this issue, and examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model for the purpose of examining these issues, using a step-wise model building procedure starting with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to initially decide whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report. We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion affects the consumption smoothing ability of the agent by creating two states of nature in which the agent is ‘caught’ or ‘not caught’, there is a possibility that their utility under certainty, when they choose not to evade, is higher than the expected utility obtained when they choose to evade. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically choose to vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model building procedure involve grafting the two-period models with a political economy choice into a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the model’s ability to match empirically plausible levels of tax evasion. 
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking; there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to the situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and parameters such as the probability of detection and penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
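A minimal numerical sketch of the 'evade or not' comparison, using the basic Allingham and Sandmo payoff structure: the agent compares utility under certainty (full reporting) with the best expected utility attainable by under-reporting. The CRRA utility form and all parameter values below are illustrative assumptions, not the thesis's calibration.

```python
# Illustrative sketch of the 'evade or not' decision using the Allingham-Sandmo (1972)
# payoff structure, with the penalty levied on undeclared income. Parameter values and
# the CRRA utility form are hypothetical choices for illustration.
import numpy as np

W, theta, p, pi = 100.0, 0.30, 0.05, 0.60   # income, tax rate, detection prob., penalty rate
sigma = 2.0                                  # CRRA risk-aversion coefficient

def u(c, s=sigma):
    return np.log(c) if s == 1.0 else c ** (1 - s) / (1 - s)

# Utility under certainty when the agent chooses not to evade (reports all income).
u_honest = u(W - theta * W)

# Expected utility when evading: report X < W and face two states of nature.
X = np.linspace(0.0, W, 1000, endpoint=False)   # candidate reported incomes
c_not_caught = W - theta * X                    # consumption if evasion is undetected
c_caught = W - theta * X - pi * (W - X)         # consumption if caught and penalised
eu_evade = (1 - p) * u(c_not_caught) + p * u(c_caught)

best = np.argmax(eu_evade)
print(f"honest utility      : {u_honest:.5f}")
print(f"best evading utility: {eu_evade[best]:.5f} at reported income X = {X[best]:.1f}")
print("decision:", "evade" if eu_evade[best] > u_honest else "do not evade")
```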
Abstract:
The Australian e-Health Research Centre and Queensland University of Technology recently participated in the TREC 2012 Medical Records Track. This paper reports on our methods, results and experience using an approach that exploits the concepts and inter-concept relationships defined in the SNOMED CT medical ontology. Our concept-based approach is intended to overcome specific challenges in searching medical records, namely vocabulary mismatch and granularity mismatch. Queries and documents are transformed from their term-based originals into medical concepts as defined by the SNOMED CT ontology; this is done to tackle vocabulary mismatch. In addition, we make use of the SNOMED CT parent-child `is-a' relationships between concepts to weight documents that contain concepts subsumed by the query concepts; this is done to tackle the problem of granularity mismatch. Finally, we experiment with other SNOMED CT relationships besides the is-a relationship to weight concepts related to query concepts. Results show our concept-based approach performed significantly above the median in all four performance metrics. Further improvements are achieved by weighting subsumed concepts, leading overall to improvements above the median of 28% infAP, 10% infNDCG, 12% R-prec and 7% Prec@10. The incorporation of relations other than is-a demonstrated mixed results; more research is required to determine which SNOMED CT relationships are best employed when weighting related concepts.
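The subsumption weighting can be illustrated with a toy scorer: documents mapped to concepts receive full credit for exact query-concept matches and a reduced credit for concepts that are is-a descendants of a query concept. The concept names, the miniature is-a table and the 0.5 weight below are hypothetical placeholders, not the actual SNOMED CT data or the weights used in the paper.

```python
# Illustrative sketch (not the authors' implementation): score a document mapped to
# concept identifiers, with a down-weighted contribution from concepts that are is-a
# descendants of a query concept. Names, the toy is-a table and the 0.5 weight are
# placeholders standing in for real SNOMED CT concept IDs and tuned weights.

IS_A_PARENTS = {                              # child concept -> its parent concepts
    "viral_pneumonia": {"pneumonia"},
    "pneumonia": {"lung_disease"},
}

def descendants(concept, parent_table):
    """All concepts transitively subsumed by `concept` via is-a links."""
    children = {}
    for child, parents in parent_table.items():
        for p in parents:
            children.setdefault(p, set()).add(child)
    found, frontier = set(), [concept]
    while frontier:
        for ch in children.get(frontier.pop(), ()):
            if ch not in found:
                found.add(ch)
                frontier.append(ch)
    return found

def score(doc_concepts, query_concepts, subsumed_weight=0.5):
    """Exact concept matches score 1.0; subsumed (more specific) matches score less."""
    total = 0.0
    for q in query_concepts:
        subs = descendants(q, IS_A_PARENTS)
        total += sum(1.0 if c == q else subsumed_weight if c in subs else 0.0
                     for c in doc_concepts)
    return total

doc_concepts = {"viral_pneumonia", "hypertension"}  # concepts found in a medical record
query_concepts = {"pneumonia"}                      # concepts found in the query
print(score(doc_concepts, query_concepts))          # 0.5: matched only via subsumption
```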
Abstract:
Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware. Funding source: Cancer Australia (Department of Health and Ageing) Research Grant 614217.
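The cost behaviour reported above follows from per-machine billing in whole hours; a small sketch makes the arithmetic explicit under the assumptions of perfect 1/n scaling and a hypothetical 120 machine-hour workload (neither figure is taken from the paper).

```python
# Illustrative sketch of the cost/time trade-off, assuming per-machine billing in whole
# hours (rounded up) and perfect parallel scaling. The 120 machine-hour workload and the
# hourly rate are hypothetical values.
import math

TOTAL_WORK_HOURS = 120      # total single-machine simulation time (machine-hours)
RATE_PER_HOUR = 1.0         # relative cost of one machine for one hour

def completion_time(n):
    return TOTAL_WORK_HOURS / n                     # ideal 1/n scaling

def relative_cost(n):
    billed_hours = math.ceil(completion_time(n))    # each machine billed per started hour
    return n * billed_hours * RATE_PER_HOUR / TOTAL_WORK_HOURS

for n in (10, 16, 30, 32, 40, 64):
    print(f"n={n:3d}  time={completion_time(n):6.2f} h  relative cost={relative_cost(n):.3f}")
# Relative cost stays at 1.0 when n divides the 120 machine-hours (10, 30, 40);
# otherwise the rounded-up final hour on every machine is partly wasted (16, 32, 64).
```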
Abstract:
Young drivers are overrepresented in motor vehicle crash rates, and their risk increases when carrying similar-aged passengers. Graduated Driver Licensing strategies have demonstrated effectiveness in reducing fatalities among young drivers; however, complementary approaches may further reduce crash rates. Previous studies conducted by the researchers have shown that there is considerable potential for a passenger focus in youth road safety interventions, particularly involving the encouragement of young passengers to intervene in their peers’ risky driving (Buckley, Chapman, Sheehan & Davidson, 2012). Additionally, this research has shown that technology-based applications may be a promising means of delivering passenger safety messages, particularly as young people are increasingly accessing web-based and mobile technologies. This research describes the participatory design process undertaken to develop a web-based road safety program, and involves feasibility testing of storyboards for a youth passenger safety application. Storyboards and framework web-based materials were initially developed for a passenger safety program, using the results of previous studies involving online and school-based surveys with young people. Focus groups were then conducted with 8 school staff and 30 senior school students at one public high school in the Australian Capital Territory. Young people were asked about the situations in which passengers may feel unsafe and potential strategies for intervening in their peers’ risky driving. Students were also shown the storyboards and framework web-based material and were asked to comment on design and content issues. Teachers were also shown the material and asked about their perceptions of program design and feasibility. The focus group data will be used as part of the participatory design process in further developing the passenger safety program. This research describes an evidence-based approach to the development of a web-based application for youth passenger safety. The findings of this research and the resulting technology will have important implications for the road safety education of senior high school students.
Abstract:
Pretreatment is an essential and expensive processing step for the manufacture of ethanol from lignocellulosic raw materials. Ionic liquids are a new class of solvents that have the potential to be used as pretreatment agents. The attractive characteristics of ionic liquid pretreatment of lignocellulosics, such as thermal stability, dissolution properties, fractionation potential, cellulose decrystallisation capacity and saccharification impact, are investigated in this thesis. Dissolution of bagasse with 1-butyl-3-methylimidazolium chloride ([C4mim]Cl) at high temperatures (110 °C to 160 °C) is investigated as a pretreatment process. Material balances are reported and used along with enzymatic saccharification data to identify optimum pretreatment conditions (150 °C for 90 min). At these conditions, the dissolved and reprecipitated material is enriched in cellulose, has a low crystallinity and the cellulose component is efficiently hydrolysed (93 %, 3 h, 15 FPU). At pretreatment temperatures < 150 °C, the undissolved material has only slightly lower crystallinity than the starting material. At pretreatment temperatures ≥ 150 °C, the undissolved material has low crystallinity and, when combined with the dissolved material, has a saccharification rate and extent similar to completely dissolved material (100 %, 3 h, 15 FPU). Complete dissolution is not necessary to maximise saccharification efficiency at temperatures ≥ 150 °C. Fermentation of [C4mim]Cl-pretreated, enzyme-saccharified bagasse to ethanol is successfully conducted (85 % molar glucose-to-ethanol conversion efficiency). Compared to standard dilute acid pretreatment, the optimised [C4mim]Cl pretreatment achieves substantially higher ethanol yields (79 % cf. 52 %) in less than half the processing time (pretreatment, saccharification, fermentation). Fractionation of bagasse partially dissolved in [C4mim]Cl into a polysaccharide-rich and a lignin-rich fraction is attempted using aqueous biphasic systems (ABSs) and single-phase systems with preferential precipitation. ABSs of ILs and concentrated aqueous inorganic salt solutions are achievable (e.g. [C4mim]Cl with 200 g L-1 NaOH), albeit they exhibit a number of technical problems including phase convergence (which increases with increasing biomass loading) and deprotonation of imidazolium ILs (5 % - 8 % mol). Single-phase fractionation systems comprising lignin solvents / cellulose antisolvents, viz. NaOH (2 M) and acetone in water (1:1, volume basis), afford solids with, respectively, 40 % mass and 29 % mass less lignin than water-precipitated solids. However, this delignification imparts little increase in the saccharification rates and extents of these solids. An alternative single-phase fractionation system is achieved simply by using water as an antisolvent. Regulating the water : IL ratio results in a solution that precipitates cellulose and maintains lignin in solution (0.5 water : IL mass ratio) in both [C4mim]Cl and 1-ethyl-3-methylimidazolium acetate ([C2mim]OAc). This water-based fractionation is applied in three IL pretreatments of bagasse ([C4mim]Cl, 1-ethyl-3-methylimidazolium chloride ([C2mim]Cl) and [C2mim]OAc). Lignin removal of 10 %, 50 % and 60 % mass, respectively, is achieved, although only 0.3 %, 1.5 % and 11.7 % is recoverable even after ample water addition (3.5 water : IL mass ratio) and acidification (pH ≤ 1). In addition, the recovered lignin fraction contains 70 % mass hemicelluloses. 
The delignified, cellulose-rich bagasse recovered from these three ILs is exposed to enzyme saccharification. The saccharification (24 h, 15 FPU) of the cellulose mass in the starting bagasse achieved by these pretreatments ranks as: [C2mim]OAc (83 %) >> [C2mim]Cl (53 %) = [C4mim]Cl (53 %). Mass balance determinations accounted for 97 % of the starting bagasse mass for the [C4mim]Cl pretreatment, 81 % for [C2mim]Cl and 79 % for [C2mim]OAc. For all three IL treatments, the remaining bagasse mass (not accounted for by mass balance determinations) is mainly (more than half) lignin that is not recoverable from the liquid fraction. After pretreatment, 100 % mass of both ions of all three ILs was recovered in the liquid fraction. Compositional characteristics of [C2mim]OAc-treated solids, such as low lignin content, low acetyl group content and preservation of arabinosyl groups, are opposite to those of chloride IL-treated solids. The former biomass characteristics resemble those imparted by aqueous alkali pretreatment while the latter resemble those of aqueous acid pretreatments. The 100 % mass recovery of cellulose in [C2mim]OAc, as opposed to 53 % mass recovery in [C2mim]Cl, further demonstrates this, since the cellulose glycosidic bonds are protected under alkali conditions. The decrease in alkyl chain length of the imidazolium cation of these ILs imparts higher rates of dissolution and losses, and increases the severity of the treatment without changing the chemistry involved.
Abstract:
Teacher professional standards have become a key policy mechanism for the reform of teaching and education in recent years. While standards policies claim to improve the quality of teaching and learning in schools today, this paper argues that a disjunction exists between the stated intentions of such programmes and the intelligibility of the practices of government in which they are invested. To this effect, the paper conducts an analytics of government of the recently released National Professional Standards for Teachers (Australian Institute for Teaching and School Leadership, 2011) arguing that the explicit, calculated rationality of the programme exists within a wider field of effects. Such analysis has the critical consequence of calling into question the claims of the programmers themselves thus breaching the self-evidence on which the standards rest.
Abstract:
We develop a stochastic endogenous growth model to explain the diversity in growth and inequality patterns and the non-convergence of incomes in transitional economies where an underdeveloped financial sector imposes an implicit, fixed cost on the diversification of idiosyncratic risk. In the model, endogenous growth occurs through physical and human capital deepening, with the latter being the more dominant element. We interpret the fixed cost as a ‘learning by doing’ cost for entrepreneurs who undertake risk in the absence of well-developed financial markets and institutions that help diversify such risk. As such, this cost may be interpreted as the implicit returns foregone due to the lack of diversification opportunities that would otherwise have been available had such institutions been present. The analytical and numerical results of the model suggest three growth outcomes depending on the productivity differences between the projects and the fixed cost associated with the more productive project. We label these outcomes as poverty trap, dual economy and balanced growth. Further analysis of these three outcomes highlights the existence of a diversity within diversity. Specifically, within the ‘poverty trap’ and ‘dual economy’ scenarios, growth and inequality patterns differ depending on the initial conditions. This additional diversity allows the model to capture a richer range of outcomes that are consistent with the empirical experience of several transitional economies.
Abstract:
One aim of experimental economics is to better understand human economic decision making. Early research on the ultimatum bargaining game (Gueth et al., 1982) revealed that motives other than pure monetary reward play a role. Neuroeconomic research has introduced the recording of physiological observations as signals of emotional responses. In this study, we apply heart rate variability (HRV) measuring technology to explore the behaviour and physiological reactions of proposers and responders in the ultimatum bargaining game. Since this technology is small and non-intrusive, we are able to run the experiment in a standard experimental economics setup. We show that low offers by a proposer cause signs of mental stress in both the proposer and the responder, as both exhibit high ratios of low-frequency to high-frequency activity in the HRV spectrum.
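For readers unfamiliar with the measure, the sketch below computes an LF/HF ratio from a resampled series of RR intervals using the conventional 0.04-0.15 Hz and 0.15-0.40 Hz bands. The RR data are synthetic and the processing choices are generic HRV-analysis conventions, not the study's actual pipeline.

```python
# Illustrative sketch of the LF/HF ratio used as a stress marker, computed from a
# series of RR intervals (time between heartbeats). Synthetic RR data; band edges
# and resampling rate are generic HRV conventions, not details from the study.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

rng = np.random.default_rng(0)
rr = 0.85 + 0.05 * rng.standard_normal(300)     # synthetic RR intervals in seconds
t = np.cumsum(rr)                               # beat times

# Resample the irregularly spaced RR series onto a uniform 4 Hz grid.
fs = 4.0
t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
rr_uniform = interp1d(t, rr, kind="cubic")(t_uniform)

# Power spectral density of the RR series, then band powers.
f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)
df = f[1] - f[0]
lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df   # low-frequency band power
hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df   # high-frequency band power
print(f"LF/HF ratio: {lf / hf:.2f}")            # higher values are commonly read as greater arousal/stress
```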