955 results for Quantified Autoepistemic Logic


Relevance:

10.00%

Publisher:

Abstract:

This paper describes the current status of a program to develop an automated forced landing system for a fixed-wing Unmanned Aerial Vehicle (UAV). This automated system seeks to emulate human pilot thought processes when planning for and conducting an engine-off emergency landing. Firstly, a path planning algorithm that extends Dubins curves to 3D space is presented. This planning element is then combined with nonlinear guidance and control logic, and simulated test results demonstrate the robustness of this approach to strong winds during a glided descent. The average path deviation errors incurred are comparable to, or even better than, those of manned, powered aircraft. Secondly, a study into suitable multi-criteria decision-making approaches and the problems that confront the decision-maker is presented. From this study, it is believed that decision processes that utilize human expert knowledge and fuzzy logic reasoning are most suited to the problem at hand, and further investigations will be conducted to identify the particular technique(s) to be implemented in simulations and field tests. The automated UAV forced landing approach presented in this paper is promising, and will allow the progression of this technology from the development and simulation stages through to a prototype system.
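The core geometric step, a Dubins path extended to a gliding descent, can be sketched in miniature. The following is a simplified illustration, not the paper's algorithm: it computes only the Left-Straight-Left word of a 2D Dubins path and couples it to a fixed glide ratio to estimate altitude lost; the turn radius and glide ratio values are hypothetical.

```python
import math

def lsl_path_length(start, goal, r):
    """Length of the Left-Straight-Left (LSL) Dubins path between two poses.

    Each pose is (x, y, heading in radians); r is the minimum turning
    radius.  A simplified sketch: a full Dubins planner evaluates all
    six path words (LSL, RSR, LSR, RSL, RLR, LRL) and keeps the
    shortest feasible one.
    """
    x0, y0, th0 = start
    x1, y1, th1 = goal
    # Centres of the left-turn circles at the start and goal poses.
    c0 = (x0 - r * math.sin(th0), y0 + r * math.cos(th0))
    c1 = (x1 - r * math.sin(th1), y1 + r * math.cos(th1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    d = math.hypot(dx, dy)            # length of the straight segment
    phi = math.atan2(dy, dx)          # heading along the straight segment
    a0 = (phi - th0) % (2 * math.pi)  # arc swept on the first circle
    a1 = (th1 - phi) % (2 * math.pi)  # arc swept on the second circle
    return r * a0 + d + r * a1

def altitude_lost(path_length, glide_ratio):
    """Altitude consumed flying a path engine-off at a fixed glide ratio
    (horizontal distance covered per unit of height lost)."""
    return path_length / glide_ratio

# Hypothetical numbers: a straight-in 10 km LSL path flown at a glide
# ratio of 12 consumes roughly 833 m of altitude.
length = lsl_path_length((0.0, 0.0, 0.0), (10000.0, 0.0, 0.0), r=200.0)
height = altitude_lost(length, glide_ratio=12.0)
```

In the 3D setting of the paper, the planner additionally has to match the altitude available at engine failure against the altitude each candidate path consumes, which is why the glide ratio enters the calculation.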


One of the oldest problems in philosophy concerns the relationship between free will and moral responsibility. If we adopt the position that we lack free will, in the absolute sense—as have most philosophers who have addressed this issue—how can we truly be held accountable for what we do? This paper will contend that the most significant and interesting challenge to the long-standing status quo on the matter comes not from philosophy, jurisprudence, or even physics, but rather from psychology. By examining this debate through the lens of contemporary behaviour disorders, such as ADHD, it will be argued that notions of free will, along with its correlate, moral responsibility, are being eroded through the logic of psychology, which is steadily reconfiguring large swathes of familiar human conduct as pathology. The intention of the paper is not only to raise some concerns over the exponential growth of behaviour disorders, but also, and more significantly, to flag the ongoing relevance of philosophy for prying open contemporary educational problems in new and interesting ways.


Rather than understanding the recurrent failure of various attempts at crime control as unfortunate and undesirable aberrations, all too familiar glitches in an otherwise uninterrupted teleological march to a better society, such failures are instead positioned as part of the fabric of late modernity itself. That is, society changes not according to a predetermined logic along neatly defined and clearly reasoned tracks; rather it hurtles from crisis to crisis, from failure to failure, and it is the regulation of that failure which produces new initiatives and new forms of governance. Utilising the example of the modern prison, this chapter contends that too great an emphasis upon this institution’s ‘failure’ results not only in a neglect of the many other functions that it serves in the regulation of difference, but also, and more generally, in an underestimation of the importance of failure in providing new impetus for social transformation.


This report focuses on risk-assessment practices in the private rental market, with particular consideration of their impact on low-income renters. It is based on the fieldwork undertaken in the second stage of the research process that followed completion of the Positioning Paper. The key research questions this study addressed were: What are the various factors included in ‘risk-assessments’ by real estate agents in allocating ‘affordable’ tenancies? How are these risks quantified and managed? What are the key outcomes of their decision-making? The study builds on previous research demonstrating that a relatively large proportion of low-cost private rental accommodation is occupied by moderate- to high-income households (Wulff and Yates 2001; Seelig 2001; Yates et al. 2004). This is occurring in an environment where the private rental sector is now the de facto main provider of rental housing for lower-income households across Australia (Seelig et al. 2005) and where a number of factors are implicated in patterns of ‘income–rent mismatching’. These include ongoing shifts in public housing assistance; issues concerning eligibility for rent assistance; ‘supply’ factors, such as loss of low-cost rental stock through upgrading and/or transfer to owner-occupied housing; patterns of supply and demand driven largely by middle- to high-income owner-investors and renters; and patterns of housing need among low-income households for whom affordable housing is not appropriate. In formulating a way of approaching the analysis of ‘risk-assessment’ in rental housing management, this study has applied three sociological perspectives on risk: Beck’s (1992) formulation of risk society as entailing processes of ‘individualisation’; a socio-cultural perspective which emphasises the situated nature of perceptions of risk; and a perspective which has drawn attention to different modes of institutional governance of subjects, as ‘carriers of specific indicators of risk’.
The private rental market was viewed as a social institution, and the research strategy was informed by ‘institutional ethnography’ as a method of enquiry. The study was based on interviews with property managers, real estate industry representatives, tenant advocates and community housing providers. The primary focus of inquiry was on ‘the moment of allocation’. Six local areas across metropolitan and regional Queensland, New South Wales, and South Australia were selected as case study localities. In terms of the main findings, it is evident that access to private rental housing is not just a matter of ‘supply and demand’. It is also about assessment of risk among applicants. Risk – perceived or actual – is thus a critical factor in deciding who gets housed, and how. Risk and its assessment matter in the context of housing provision and in the development of policy responses. The outcomes from this study also highlight a number of salient points:

1. There are two principal forms of risk associated with property management: financial risk and risk of litigation.

2. Certain tenant characteristics and/or circumstances – ability to pay and ability to care for the rented property – are the main factors focused on in assessing risk among applicants for rental housing. Signals of either ‘(in)ability to pay’ and/or ‘(in)ability to care for the property’ are almost always interpreted as markers of high levels of risk.

3. The processing of tenancy applications entails a complex and variable mix of formal and informal strategies of risk-assessment and allocation, where sorting (out), ranking, discriminating and handing over characterise the process.

4. In the eyes of property managers, ‘suitable’ tenants can be conceptualised as those who are resourceful, reputable, competent, strategic and presentable.

5. Property managers clearly articulated concern about risks entailed in a number of characteristics or situations. Being on a low income was the principal and overarching factor which agents considered. Others included:

- unemployment
- ‘big’ families; sole parent families
- domestic violence
- marital breakdown
- shift from home ownership to private rental
- Aboriginality and specific ethnicities
- physical incapacity
- aspects of ‘presentation’.

The financial vulnerability of applicants in these groups can be invoked, alongside expressed concerns about compromised capacities to manage income and/or ‘care for’ the property, as legitimate grounds for rejection or a lower ranking.

6. At the level of face-to-face interaction between the property manager and applicants, more intuitive assessments of risk based upon past experience or ‘gut feelings’ come into play. These judgements are interwoven with more systematic procedures of tenant selection.

The findings suggest that considerable ‘risk’ is associated with low-income status, either directly or insofar as it is associated with other forms of perceived risk, and that such risks are likely to impede access to the professionally managed private rental market. Detailed analysis suggests that opportunities for access to housing by low-income householders also arise where, for example:

- the ‘local experience’ of an agency and/or property manager works in favour of particular applicants
- applicants can demonstrate available social support and financial guarantors
- an applicant’s preference or need for longer-term rental is seen to provide a level of financial security for the landlord
- applicants are prepared to agree to specific, more stringent conditions for inspection of properties and review of contracts
- the particular circumstances and motivations of landlords lead them to consider a wider range of applicants
- in particular circumstances, property managers are prepared to give special consideration to applicants who appear worthy, albeit ‘risky’.
The strategic actions of demonstrating and documenting on the part of vulnerable (low-income) tenant applicants can improve their chances of being perceived as resourceful, capable and ‘savvy’. Such actions are significant because they help to persuade property managers not only that the applicant may have sufficient resources (personal and material) but that they accept that the onus is on themselves to show they are reputable, and that they have valued ‘competencies’ and understand ‘how the system works’. The parameters of the market do shape the processes of risk-assessment and, ultimately, the strategic relation of power between property manager and the tenant applicant. Low vacancy rates and limited supply of lower-cost rental stock, in all areas, mean that there are many more tenant applicants than available properties, creating a highly competitive environment for applicants. The fundamental problem of supply is an aspect of the market that severely limits the chances of access to appropriate and affordable housing for low-income rental housing applicants. There is recognition of the impact of this problem of supply. The study indicates three main directions for future focus in policy and program development: providing appropriate supports to tenants to access and sustain private rental housing, addressing issues of discrimination and privacy arising in the processes of selecting suitable tenants, and addressing problems of supply.



This paper has two central purposes: the first is to survey some of the more important examples of fallacious argument, and the second is to examine the frequent use of these fallacies in support of the psychological construct of Attention Deficit Hyperactivity Disorder (ADHD). The paper divides twelve familiar fallacies into three different categories—material, psychological and logical—and contends that advocates of ADHD often seem to employ these fallacies to support their position. It is suggested that all researchers, whether into ADHD or otherwise, need to pay much closer attention to the construction of their arguments if they are not to make truth claims unsupported by satisfactory evidence, form or logic.


The proliferation of innovative schemes to address climate change at international, national and local levels signals a fundamental shift in the priority and role of the natural environment to society, organizations and individuals. This shift in shared priorities invites academics and practitioners to consider the role of institutions in shaping and constraining responses to climate change at multiple levels of organisations and society. Institutional theory provides an approach to conceptualising and addressing climate change challenges by focusing on the central logics that guide society, organizations and individuals and their material and symbolic relationship to the environment. For example, framing a response to climate change in the form of an emissions trading scheme evidences a practice informed by a capitalist market logic (Friedland and Alford 1991). However, not all responses need necessarily align with a market logic. Indeed, Thornton (2004) identifies six broad societal sectors, each with its own logic (markets, corporations, professions, states, families, religions). Hence, understanding the logics that underpin successful – and unsuccessful – climate change initiatives contributes to revealing how institutions shape and constrain practices, and provides valuable insights for policy makers and organizations. This paper develops models and propositions to consider the construction of, and challenges to, climate change initiatives based on institutional logics (Thornton and Ocasio 2008). We propose that the challenge of understanding and explaining how climate change initiatives are successfully adopted be examined in terms of their institutional logics, and how these logics evolve over time. To achieve this, a multi-level framework of analysis that encompasses society, organizations and individuals is necessary (Friedland and Alford 1991).
However, to date most extant studies of institutional logics have tended to emphasize one level over the others (Thornton and Ocasio 2008: 104). In addition, existing studies related to climate change initiatives have largely been descriptive (e.g. Braun 2008) or prescriptive (e.g. Boiral 2006) in terms of the suitability of particular practices. This paper contributes to the literature on logics in two ways. First, it examines multiple levels: the proliferation of the climate change agenda provides a site in which to study how institutional logics are played out across multiple, yet embedded, levels within society through institutional forums in which change takes place. Second, the paper specifically examines how institutional logics provide society with organising principles – material practices and symbolic constructions – which enable and constrain actions and help define motives and identity. Based on this model, we develop a series of propositions concerning the conditions required for the successful introduction of climate change initiatives. The paper proceeds as follows. We present a review of literature related to institutional logics and develop a generic model of the process of the operation of institutional logics. We then consider how this applies to key initiatives related to climate change. Finally, we develop a series of propositions which might guide insights into the successful implementation of climate change practices.


With service interaction modelling, it is customary to distinguish between two types of models: choreographies and orchestrations. A choreography describes interactions within a collection of services from a global perspective, where no service plays a privileged role. Instead, services interact in a peer-to-peer manner. In contrast, an orchestration describes the interactions between one particular service, the orchestrator, and a number of partner services. The main proposition of this work is an approach to bridge these two modelling viewpoints by synthesising orchestrators from choreographies. To start with, choreographies are defined using a simple behaviour description language based on communicating finite state machines. From such a model, orchestrators are initially synthesised in the form of state machines. It turns out that state machines are not suitable for orchestration modelling, because orchestrators generally need to engage in concurrent interactions. To address this issue, a technique is proposed to transform state machines into process models in the Business Process Modelling Notation (BPMN). Orchestrations represented in BPMN can then be augmented with additional business logic to achieve value-adding mediation. In addition, techniques exist for refining BPMN models into executable process definitions. The transformation from state machines to BPMN relies on Petri nets as an intermediary representation and leverages techniques from the theory of regions to identify concurrency in the initial Petri net. Once concurrency has been identified, the resulting Petri net is transformed into a BPMN model. The original contributions of this work are: an algorithm to synthesise orchestrators from choreographies and a rules-based transformation from Petri nets into BPMN.
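The starting point described above, deriving one service's behaviour from a global interaction model, can be sketched as a naive projection. This is an invented illustration (the toy Buyer/Seller/Shipper choreography and the tuple encoding are assumptions), not the synthesis algorithm contributed by this work, which must also handle the concurrency that motivates the Petri net and BPMN translation.

```python
def project(choreography, role):
    """Project a choreography onto one role, yielding that role's
    orchestrator as a linear state machine (a list of transitions).

    choreography: ordered list of (sender, receiver, message) tuples.
    Returns transitions (state, action, next_state) where an action is
    ('send', msg) or ('recv', msg).  A naive sketch: it assumes a
    purely sequential choreography.
    """
    transitions = []
    state = 0
    for sender, receiver, message in choreography:
        if sender == role:
            transitions.append((state, ('send', message), state + 1))
            state += 1
        elif receiver == role:
            transitions.append((state, ('recv', message), state + 1))
            state += 1
        # Interactions not involving `role` are invisible to it.
    return transitions

# A toy purchase choreography: Buyer -> Seller -> Shipper -> Buyer.
choreo = [
    ('Buyer', 'Seller', 'order'),
    ('Seller', 'Shipper', 'ship_request'),
    ('Shipper', 'Buyer', 'delivery'),
]
seller = project(choreo, 'Seller')
```

Projecting onto the Seller yields receive-order then send-ship_request; a state machine like this is exactly the intermediate form that, per the abstract, must then be unfolded into BPMN when the orchestrator's interactions can overlap.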


Objective: The objectives of this article are to explore the extent to which the International Statistical Classification of Diseases and Related Health Problems (ICD) has been used in child abuse research, to describe how the ICD system has been applied, and to assess factors affecting the reliability of ICD-coded data in child abuse research.

Methods: PubMed, CINAHL, PsycINFO and Google Scholar were searched for peer-reviewed articles written since 1989 that used ICD as the classification system to identify cases and research child abuse using health databases. Snowballing strategies were also employed by searching the bibliographies of retrieved references to identify relevant associated articles. The papers identified through the search were independently screened by two authors for inclusion, resulting in 47 studies selected for the review. Due to the heterogeneity of studies, meta-analysis was not performed.

Results: This paper highlights both the utility and the limitations of ICD-coded data. ICD codes have been widely used to conduct research into child maltreatment in health data systems. The codes appear to be used primarily to determine child maltreatment patterns within identified diagnoses or to identify child maltreatment cases for research.

Conclusions: A significant impediment to the use of ICD codes in child maltreatment research is the under-ascertainment of child maltreatment when using coded data alone. This is most clearly identified, and to some degree quantified, in research where data linkage is used.

Practice implications: Improved identification of child maltreatment will assist in identifying risk factors, in creating programs that can prevent and treat child maltreatment, and in meeting reporting obligations under the CRC.
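As a small illustration of the code-based case identification the review describes, the sketch below screens hypothetical coded records for ICD-10 maltreatment codes (the T74 block, "maltreatment syndromes"). The record structure and field names are assumptions for illustration; real studies run such queries against hospital morbidity databases.

```python
# ICD-10 groups maltreatment under the T74 block (e.g. T74.1,
# physical abuse).  Matching on the block prefix is a hypothetical
# screening pass, not any specific study's protocol.
MALTREATMENT_PREFIXES = ('T74',)

def identify_cases(records):
    """Return records carrying at least one maltreatment-related code.

    records: iterable of dicts with an 'icd10_codes' list.  As the
    review stresses, code-based ascertainment alone under-counts
    maltreatment, so this is a screening step, not a gold standard.
    """
    return [r for r in records
            if any(code.startswith(MALTREATMENT_PREFIXES)
                   for code in r['icd10_codes'])]

records = [
    {'id': 1, 'icd10_codes': ['S06.0', 'T74.1']},  # head injury + physical abuse
    {'id': 2, 'icd10_codes': ['J18.9']},           # pneumonia only
]
cases = identify_cases(records)
```

Data linkage, as discussed in the conclusions, would then join the screened cases against child protection records to estimate how many true cases the coded data alone missed.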


Privacy enhancing protocols (PEPs) are a family of protocols that allow secure exchange and management of sensitive user information. They are important in preserving users’ privacy in today’s open environment. Proof of the correctness of PEPs is necessary before they can be deployed. However, the traditional provable security approach, though well established for verifying cryptographic primitives, is not applicable to PEPs. We apply the formal method of Coloured Petri Nets (CPNs) to construct an executable specification of a representative PEP, namely the Private Information Escrow Bound to Multiple Conditions Protocol (PIEMCP). Formal semantics of the CPN specification allow us to reason about various security properties of PIEMCP using state space analysis techniques. This investigation provides us with preliminary insights for modeling and verification of PEPs in general, demonstrating the benefit of applying the CPN-based formal approach to proving the correctness of PEPs.
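The paper's verification rests on Coloured Petri Nets and state-space analysis; the underlying idea, enumerating every reachable marking and checking properties over them, can be illustrated with an ordinary place/transition net. The toy "escrow" net below is an invented stand-in for PIEMCP, not its actual CPN model.

```python
from collections import deque

def reachable_markings(places, transitions, initial):
    """Exhaustive state-space exploration of a simple Petri net.

    transitions: list of (consume, produce) pairs, each a dict mapping
    place name -> token count.  Enumerates every reachable marking so
    that properties (e.g. "a credential is never released while still
    escrowed") can be checked over the full state space.
    """
    index = {p: i for i, p in enumerate(places)}
    start = tuple(initial.get(p, 0) for p in places)
    seen, queue = {start}, deque([start])
    while queue:
        m = queue.popleft()
        for consume, produce in transitions:
            if all(m[index[p]] >= n for p, n in consume.items()):
                nxt = list(m)
                for p, n in consume.items():
                    nxt[index[p]] -= n
                for p, n in produce.items():
                    nxt[index[p]] += n
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

# Toy escrow fragment: a credential moves from 'escrowed' to 'released'
# only when a 'condition' token is also present.
places = ['escrowed', 'condition', 'released']
transitions = [
    ({'escrowed': 1, 'condition': 1}, {'released': 1}),  # condition_met
]
markings = reachable_markings(places, transitions,
                              {'escrowed': 1, 'condition': 1})
```

Checking a security property then amounts to asserting that no marking in the computed set violates it, which is, in miniature, what state-space analysis of the CPN specification does for PIEMCP.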


Insulin-like growth factor binding proteins (IGFBPs) are prime regulators of IGF-action in numerous cell types including the retinal pigment epithelium (RPE). The RPE performs several functions essential for vision, including growth factor secretion and waste removal via a phagocytic process mediated in part by vitronectin (Vn). In the course of studying the effects of IGFBPs on IGF-mediated VEGF secretion and Vn-mediated phagocytosis in the RPE cell line ARPE-19, we have discovered that these cells avidly ingest synthetic microspheres (2.0 μm diameter) coated with IGFBPs. Given the novelty of this finding and the established role for endocytosis in mediating IGFBP actions in other cell types, we have explored the potential role of candidate cell surface receptors. Moreover, we have examined the role of key IGFBP structural motifs, by comparing responses to three members of the IGFBP family (IGFBP-3, IGFBP-4 and IGFBP-5) which display overlapping variations in primary structure and glycosylation status. Coating of microspheres (FluoSpheres®, sulfate modified polystyrene filled with a fluorophore) was conducted at 37 °C for 1 h using 20 μg/mL of test protein, followed by extensive washing. Binding of proteins was confirmed using a microBCA assay. The negative control consisted of microspheres treated with 0.1% bovine serum albumin (BSA), and all test samples were post-treated with BSA in an effort to coat any remaining free protein binding sites, which might otherwise encourage non-specific interactions with the cell surface. Serum-starved cultures of ARPE-19 cells were incubated with microspheres for 24 h, using a ratio of approximately 100 microspheres per cell. Uptake of microspheres was quantified using a fluorometer and was confirmed visually by confocal fluorescence microscopy. The ARPE-19 cells displayed little affinity for BSA-treated microspheres, but avidly ingested large quantities of those pre-treated with Vn (ANOVA; p < 0.001). 
Strong responses were also observed towards recombinant formulations of non-glycosylated IGFBP-3, glycosylated IGFBP-3 and glycosylated IGFBP-5 (all p < 0.001), while glycosylated IGFBP-4 induced a relatively minor response (p < 0.05). The response to IGFBP-3 was unaffected in the presence of excess soluble IGFBP-3, IGF-I or Vn. Likewise, soluble IGFBP-3 did not induce uptake of BSA-treated microspheres. Antibodies to either the transferrin receptor or type 1 IGF-receptor displayed slight inhibitory effects on responses to IGFBPs and Vn. Heparin abolished responses to Vn, IGFBP-5 and non-glycosylated IGFBP-3, but only partially inhibited the response to glycosylated IGFBP-3. Our results demonstrate for the first time IGFBP-mediated endocytosis in ARPE-19 cells and suggest roles for the IGFBP-heparin-binding domain and glycosylation status. These findings have important implications for understanding the mechanisms of IGFBP actions on the RPE, and in particular suggest a role for IGFBP-endocytosis.


In children, joint hypermobility (typified by structural instability of joints) manifests clinically as neuro-muscular and musculo-skeletal conditions and conditions associated with the development and organization of control of posture and gait (Finkelstein, 1916; Jahss, 1919; Sobel, 1926; Larsson, Mudholkar, Baum and Srivastava, 1995; Murray and Woo, 2001; Hakim and Grahame, 2003; Adib, Davies, Grahame, Woo and Murray, 2005). The process of control of the relative proportions of joint mobility and stability, whilst maintaining equilibrium in standing posture and gait, is dependent upon the complex interrelationship between skeletal, muscular and neurological function (Massion, 1998; Gurfinkel, Ivanenko, Levik and Babakova, 1995; Shumway-Cook and Woollacott, 1995). The efficiency of this relies upon the integrity of neuro-muscular and musculo-skeletal components (ligaments, muscles, nerves), and the Central Nervous System’s capacity to interpret, process and integrate sensory information from visual, vestibular and proprioceptive sources (Crotts, Thompson, Nahom, Ryan and Newton, 1996; Riemann, Guskiewicz and Shields, 1999; Schmitz and Arnold, 1998) and to develop and incorporate this into a representational scheme (postural reference frame) of body orientation with respect to internal and external environments (Gurfinkel et al., 1995; Roll and Roll, 1988). Sensory information from the base of support (feet) makes a significant contribution to the development of reference frameworks (Kavounoudias, Roll and Roll, 1998). Problems with the structure and/or function of any one, or combination, of these components or systems may result in partial loss of equilibrium and, therefore, ineffectiveness or a significant reduction in the capacity to interact with the environment, which may result in disability and/or injury (Crotts et al., 1996; Rozzi, Lephart, Sterner and Kuligowski, 1999b).
Whilst literature focusing upon clinical associations between joint hypermobility and conditions requiring therapeutic intervention has been abundant (Crego and Ford, 1952; Powell and Cantab, 1983; Dockery, in Jay, 1999; Grahame, 1971; Childs, 1986; Barton, Bird, Lindsay, Newton and Wright, 1995a; Rozzi et al., 1999b; Kerr, Macmillan, Uttley and Luqmani, 2000; Grahame, 2001), there has been a deficit of controlled studies in which the neuro-muscular and musculo-skeletal characteristics of children with joint hypermobility have been quantified and considered within the context of organization of postural control in standing balance and gait. This was the aim of this project, undertaken as three studies. The major study (Study One) compared the fundamental neuro-muscular and musculo-skeletal characteristics of 15 children with joint hypermobility, and 15 age- (8 and 9 years), gender-, height- and weight-matched non-hypermobile controls. Significant differences were identified between previously undiagnosed hypermobile (n=15) and non-hypermobile children (n=15) in passive joint ranges of motion of the lower limbs and lumbar spine, muscle tone of the lower leg and foot, barefoot CoP displacement and in parameters of barefoot gait. Clinically relevant differences were also noted in barefoot single leg balance time. There were no differences between groups in isometric muscle strength in ankle dorsiflexion, knee flexion or extension. The second comparative study investigated foot morphology in non-weight bearing and weight bearing load conditions of the same children with and without joint hypermobility using three-dimensional images (plaster casts) of their feet. The preliminary phase of this study evaluated the casting technique against direct measures of foot length, forefoot width, RCSP and forefoot to rearfoot angle. Results indicated accurate representation of elementary foot morphology within the plaster images.
The comparative study examined the between- and within-group differences in measures of foot length and width, and in measures above the support surface (heel inclination angle, forefoot to rearfoot angle, normalized arch height, height of the widest point of the heel) in the two load conditions. Results of measures from plaster images identified that hypermobile children have different barefoot weight bearing foot morphology above the support surface than non-hypermobile children, despite no differences in measures of foot length or width. Based upon the differences in components of control of posture and gait in the hypermobile group, identified in Study One and Study Two, the final study (Study Three), using the same subjects, tested the immediate effect of specifically designed custom-made foot orthoses upon the balance and gait of hypermobile children. The design of the orthoses was evaluated against the direct measures and the measures from plaster images of the feet. This ascertained the differences in morphology between the modified casts used to mould the orthoses and the original image of the foot. The orthoses were fitted into standardized running shoes. The effect of the shoe alone was tested upon the non-hypermobile children as the non-therapeutic equivalent condition. Immediate improvement in balance was noted in single leg stance and CoP displacement in the hypermobile group, together with significant immediate improvement in the percentage of gait phases and in the percentage of the gait cycle at which maximum plantar flexion of the ankle occurred in gait. The neuro-muscular and musculo-skeletal characteristics of children with joint hypermobility are different from those of non-hypermobile children. The Beighton, Solomon and Soskolne (1973) screening criteria successfully classified joint hypermobility in children.
As a result of this study, joint hypermobility has been identified as a variable which must be controlled in studies of foot morphology and function in children. The outcomes of this study provide a basis upon which to further explore the association between joint hypermobility and neuro-muscular and musculo-skeletal conditions, and have relevance for the physical education of children with joint hypermobility, for footwear and orthotic design processes and, in particular, for the clinical identification and treatment of children with joint hypermobility.


Motor vehicles are a major source of gaseous and particulate matter pollution in urban areas, particularly of ultrafine particles (diameters < 0.1 µm). Exposure to particulate matter has been found to be associated with serious health effects, including respiratory and cardiovascular disease, and mortality. Particle emissions generated by motor vehicles span a very broad size range (from around 0.003 to 10 µm) and are measured as different subsets of particle mass concentration or particle number count. However, scientific challenges remain in analysing and interpreting the large data sets on motor vehicle emission factors, and no understanding is available of the application of different particle metrics as a basis for air quality regulation. To date, a comprehensive inventory covering the broad size range of particles emitted by motor vehicles, and which includes particle number, does not exist anywhere in the world. This thesis covers research related to four important and interrelated aspects of particulate matter generated by motor vehicle fleets: the derivation of suitable particle emission factors for use in transport modelling and health impact assessments; the quantification of motor vehicle particle emission inventories; the investigation of modality within particle size distributions as a potential basis for developing air quality regulation; and the review and synthesis of current knowledge on ultrafine particles as it relates to motor vehicles. These aspects are then applied to the quantification, control and management of motor vehicle particle emissions.
In order to quantify emissions in terms of a comprehensive inventory covering the full size range of particles emitted by motor vehicle fleets, it was necessary to derive a suitable set of particle emission factors for different vehicle and road type combinations for particle number, particle volume, PM1, PM2.5 and PM10 (mass concentrations of particles with aerodynamic diameters < 1 µm, < 2.5 µm and < 10 µm respectively). The very large data set of emission factors analysed in this study was sourced from measurement studies conducted in developed countries, and hence the derived set of emission factors is suitable for preparing inventories in other urban regions of the developed world. These emission factors are particularly useful for regions that lack the measurement data needed to derive their own, or where experimental data are available but of insufficient scope. The comprehensive particle emissions inventory presented in this thesis is the first published inventory of tailpipe particle emissions prepared for a motor vehicle fleet, and includes the quantification of particle emissions covering the full size range of particles emitted by vehicles, based on measurement data. The inventory quantified particle emissions measured in terms of particle number and different particle mass size fractions. It was developed for the urban South-East Queensland fleet in Australia, and included testing the particle emission implications of future scenarios for different passenger and freight travel demand. The thesis also presents evidence of the usefulness of examining modality within particle size distributions as a basis for developing air quality regulations, and finds evidence to support the relevance of introducing a new PM1 mass ambient air quality standard for the majority of environments worldwide.
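The PM mass metrics above can be illustrated with a short sketch: each PM fraction sums the mass of all particles whose aerodynamic diameter falls below that metric's cutoff. The binned size distribution below is hypothetical, purely for illustration; it is not data from the thesis.

```python
# Illustrative sketch (not from the thesis): computing PM1, PM2.5 and PM10
# mass fractions from a binned particle size distribution.

def pm_mass(bins, cutoff_um):
    """Sum the mass (ug/m3) of all bins whose aerodynamic diameter (um)
    is below the cutoff; bins are (diameter_um, mass) pairs."""
    return sum(mass for diameter_um, mass in bins if diameter_um < cutoff_um)

# Hypothetical size-resolved mass concentrations (diameter in um, mass in ug/m3).
distribution = [(0.05, 1.0), (0.5, 4.0), (2.0, 6.0), (8.0, 9.0)]

pm1 = pm_mass(distribution, 1.0)    # ultrafine + fine fraction only
pm25 = pm_mass(distribution, 2.5)   # adds the 2.0 um bin
pm10 = pm_mass(distribution, 10.0)  # adds the coarse 8.0 um bin
```

By construction PM1 ≤ PM2.5 ≤ PM10, since each larger cutoff is a superset of the smaller one.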
The study found that a combination of PM1 and PM10 standards is likely to be a more discerning and suitable set of ambient air quality standards for controlling particles emitted from combustion and mechanically-generated sources, such as motor vehicles, than the current mass standards of PM2.5 and PM10. The study also reviewed and synthesised existing knowledge on ultrafine particles, with a specific focus on those originating from motor vehicles. It found that motor vehicles are significant contributors to both air pollution and ultrafine particles in urban areas, and that a standardised measurement procedure is not currently available for ultrafine particles. The review found that discrepancies exist between the outcomes of different instruments used to measure ultrafine particles; that few data are available on ultrafine particle chemistry and composition, on long-term monitoring, and on the characterisation of their spatial and temporal distribution in urban areas; and that no particle number inventories are available for motor vehicle fleets. This knowledge is critical for epidemiological studies and exposure-response assessment. The review concluded with the recommendation that ultrafine particles in populated urban areas be considered a likely target for future air quality regulation based on particle number, due to their potential impacts on the environment. The research in this PhD thesis successfully integrated the elements needed to quantify and manage motor vehicle fleet emissions, and its novelty lies in combining expertise from two distinct disciplines: aerosol science and transport modelling. The new knowledge and concepts developed in this research provide previously unavailable data and methods which can be used to develop comprehensive, size-resolved inventories of motor vehicle particle emissions, and air quality regulations to control particle emissions to protect the health and well-being of current and future generations.
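A fleet emissions inventory of the kind described above typically reduces to multiplying an emission factor for each vehicle/road-type class by the corresponding travel activity (vehicle kilometres travelled, VKT) and summing over classes. The sketch below assumes that standard formulation; the class names, emission factors and VKT figures are hypothetical and are not the thesis's data.

```python
# Hedged illustration of an inventory calculation: total emissions are the
# sum over vehicle/road-type classes of emission factor x VKT.
# All factors and travel figures here are hypothetical.

def inventory_total(emission_factors, vkt):
    """emission_factors: emissions per km for each class;
    vkt: kilometres travelled per year for each class."""
    return sum(emission_factors[c] * vkt[c] for c in emission_factors)

# Hypothetical particle-number emission factors (particles per km).
ef = {"passenger_urban": 1.0e14, "freight_highway": 5.0e14}
# Hypothetical annual travel (km per year) for the same classes.
vkt = {"passenger_urban": 2.0e9, "freight_highway": 5.0e8}

annual_particles = inventory_total(ef, vkt)  # particles emitted per year
```

The same structure works for any metric (particle number, PM1, PM2.5, PM10) simply by swapping in emission factors expressed in that metric's units.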

Resumo:

This is an experimental study into the permeability and compressibility properties of bagasse pulp pads. Three experimental rigs were custom-built for this project. The experimental work is complemented by modelling work. Both the steady-state and dynamic behaviour of pulp pads are evaluated in the experimental and modelling components of this project. Bagasse, the fibrous residue that remains after sugar is extracted from sugarcane, is normally burnt in Australia to generate steam and electricity for the sugar factory. A study into bagasse pulp was motivated by the possibility of making highly value-added pulp products from bagasse for the financial benefit of sugarcane millers and growers. The bagasse pulp and paper industry is a multibillion dollar industry (1). Bagasse pulp could replace eucalypt pulp, which is more widely used in the local production of paper products. An opportunity exists for replacing the large quantity of mainly generic paper products imported to Australia, including 949,000 tonnes of generic photocopier papers (2). The use of bagasse pulp for paper manufacture is the main application area of interest for this study. Bagasse contains a large quantity of short parenchyma cells called ‘pith’. Around 30% of the shortest fibres are removed from bagasse prior to pulping. Despite the ‘depithing’ operations in conventional bagasse pulp mills, a large amount of pith remains in the pulp. Amongst Australian paper producers there is a perception that the high quantity of short fibres in bagasse pulp leads to poor filtration behaviour at the wet-end of a paper machine. Bagasse pulp’s poor filtration behaviour reduces paper production rates, and consequently revenue, compared to paper production using locally made eucalypt pulp. Pulp filtration can be characterised by two interacting factors: permeability and compressibility.
Surprisingly, there has previously been very little rigorous investigation into either bagasse pulp permeability or compressibility. Only freeness testing of bagasse pulp has been published in the open literature. As a result, this study focussed on a detailed investigation of the filtration properties of bagasse pulp pads, and considered three options for improving the permeability and compressibility of Australian bagasse pulp pads. Two options involved further pre-treating depithed bagasse prior to pulping. Firstly, bagasse was fractionated based on size, producing ‘coarse’ and ‘medium’ bagasse fractions. Secondly, bagasse was collected after being processed on two types of juice extraction technology, i.e. from a sugar mill and from a sugar diffuser. Finally, one method of post-treating the bagasse pulp was investigated: chemical additives known to improve freeness were assessed for their effect on pulp pad permeability and compressibility. Pre-treated Australian bagasse pulp samples were compared with several benchmark pulp samples: a sample of commonly used kraft Eucalyptus globulus pulp; a sample of depithed Argentinean bagasse, which is used for commercial paper production; and a sample of Australian bagasse depithed as per typical factory operations. The steady-state pulp pad permeability and compressibility parameters were determined experimentally using two purpose-built experimental rigs. In reality, steady-state conditions do not exist on a paper machine: the permeability changes as the sheet compresses over time. Hence, a dynamic model was developed which uses the experimentally determined steady-state permeability and compressibility parameters as inputs.
The filtration model was developed with a view to designing pulp processing equipment suited specifically to bagasse pulp. The predicted results of the dynamic model were compared to experimental data. The effectiveness of polymeric and microparticle chemical additives for improving the retention of short fibres and increasing the drainage rate of a bagasse pulp slurry was determined in a third purpose-built rig, a modified Dynamic Drainage Jar (DDJ). These chemical additives were then used in the making of a pulp pad, and their effect on the steady-state and dynamic permeability and compressibility of bagasse pulp pads was determined. The most important finding of this investigation was that Australian bagasse pulp was produced with higher permeability than eucalypt pulp, despite a higher overall content of short fibres. This research outcome could enable Australian paper producers to switch from eucalypt pulp to bagasse pulp without sacrificing paper machine productivity. Two factors are thought to have contributed to the high permeability of the bagasse pulp pad. Firstly, the thicker cell walls of the bagasse pulp fibres resulted in high fibre stiffness. Secondly, the bagasse pulp had a large proportion of fibres longer than 1.3 mm. These attributes helped to reinforce the pulp pad matrix. The steady-state permeability and compressibility parameters for the eucalypt pulp were consistent with those found by previous workers. It was also found that Australian pulp derived from the ‘coarse’ bagasse fraction had higher steady-state permeability than that derived from the ‘medium’ fraction, although there was no difference between bagasse pulp originating from a diffuser or a mill. The bagasse pre-treatment options investigated in this study were not found to affect the steady-state compressibility parameters of a pulp pad.
The dynamic filtration model gave predictions in good agreement with experimental data for pads made from samples of pre-treated bagasse pulp, provided at least some pith was removed prior to pulping. Applying vacuum to a pulp slurry in the modified DDJ dramatically reduced the drainage time. At any level of vacuum, bagasse pulp benefitted from chemical additives, as quantified by reduced drainage time and increased retention of short fibres. Using the modified DDJ, it was observed that under specific conditions a benchmark depithed bagasse pulp drained more rapidly than the ‘coarse’ bagasse pulp. In steady-state permeability and compressibility experiments, the addition of chemical additives improved the pad permeability and compressibility of a benchmark bagasse pulp with a high quantity of short fibres. Importantly, this effect was not observed for the ‘coarse’ bagasse pulp, although dynamic filtration experiments showed a small observable improvement in filtration for the ‘medium’ bagasse pulp. The mechanism of bagasse pulp pad consolidation appears to be fibre realignment, with chemical additives acting to lubricate the consolidation process. This study was complemented by pulp physical and chemical property testing and a microscopy study. In addition to its high pulp pad permeability, ‘coarse’ bagasse pulp often (but not always) had physical properties superior to those of a benchmark depithed bagasse pulp.
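The interplay of permeability and compressibility during drainage can be sketched with a classical constant-pressure cake filtration model. This is an illustrative assumption on my part, not the dynamic model developed in the thesis: compressibility enters through a pressure-dependent specific cake resistance, alpha = alpha0 * dP**n, and all parameter values below are hypothetical.

```python
# Minimal sketch of constant-pressure compressible cake filtration
# (classical Ruth-type model, not the thesis's model). Parameters are
# hypothetical: A = pad area (m2), mu = viscosity (Pa.s), c = solids
# loading (kg/m3 filtrate), Rm = medium resistance (1/m).

def filtrate_volume(dP, t_end, dt=0.01, A=0.01, mu=1.0e-3,
                    c=5.0, alpha0=1.0e9, n=0.3, Rm=1.0e10):
    """Euler-integrate dV/dt = A^2*dP / (mu*(alpha*c*V + Rm*A))."""
    alpha = alpha0 * dP**n  # compressible-cake specific resistance
    V, t = 0.0, 0.0
    while t < t_end:
        dVdt = A**2 * dP / (mu * (alpha * c * V + Rm * A))
        V += dVdt * dt
        t += dt
    return V

# Higher vacuum drains more filtrate in the same time (consistent with the
# modified DDJ observations), though cake compression damps the gain.
v_low = filtrate_volume(dP=20e3, t_end=10.0)
v_high = filtrate_volume(dP=80e3, t_end=10.0)
```

The compressibility exponent n is the key lever: at n = 0 the cake is incompressible and drainage scales linearly with pressure, while n approaching 1 erases most of the benefit of extra vacuum.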

Resumo:

The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). 
Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were developed using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytic focus from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed. Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice.
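The split-selection step at the heart of CART can be sketched in a few lines: at each node the algorithm picks the (feature, threshold) pair that minimises the weighted impurity of the two child groups. The toy data below are hypothetical stand-ins for predictors such as peer support and perceived ease of use, not the study's data.

```python
# Toy sketch of CART split selection using Gini impurity.
# Data and feature names are hypothetical, for illustration only.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(rows, labels):
    """Pick the (feature index, threshold) minimising weighted child impurity."""
    best, best_score, n = None, float("inf"), len(rows)
    for f in range(len(rows[0])):
        for threshold in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= threshold]
            right = [y for r, y in zip(rows, labels) if r[f] > threshold]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best_score:
                best_score, best = score, (f, threshold)
    return best

# Feature 0: peer support (1-5); feature 1: perceived ease of use (1-5).
# Label: frequent SMC user (1) or not (0). Values are hypothetical.
X = [(1, 3), (2, 2), (2, 4), (4, 3), (5, 4), (5, 5)]
y = [0, 0, 0, 1, 1, 1]

feature, threshold = best_split(X, y)
```

In this contrived data set peer support (feature 0) separates the classes perfectly, so it is chosen as the root split, mirroring the study's finding that peer support was the strongest predictor.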
While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that hindered the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others.
These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt to, and traverse, seemingly incommensurate social and institutional identities and norms is theorised as cultural agility: a dispositional construct comprising personal innovativeness, cognitive playfulness and learning goals orientation. For these individuals the logic is ‘both-and’ rather than ‘either-or’: a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one that is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at the same time, be digital kids and analogue students.