193 results for Complement Fragments C5a
Abstract:
In March 2008, the Australian Government announced its intention to introduce a national Emissions Trading Scheme (ETS), now expected to start in 2015. This impending development provides an ideal setting to investigate the impact an ETS in Australia will have on the market valuation of Australian Securities Exchange (ASX) firms. This is the first empirical study into the pricing effects of the ETS in Australia. Primarily, we hypothesize that firm value will be negatively related to a firm's carbon intensity profile. That is, there will be a greater impact on firm value for high carbon emitters in the period (2007) prior to the introduction of the ETS, whether for reasons relating to the existence of unbooked liabilities associated with future compliance and/or abatement costs, or for reasons relating to reduced future earnings. Using a sample of 58 Australian listed firms (constrained by the current availability of emissions data), which comprises larger, more profitable and less risky listed Australian firms, we first undertake an event study focusing on five distinct information events argued to impact the probability of the proposed ETS being enacted. Here, we find direct evidence that the capital market is indeed pricing the proposed ETS. Second, using a modified version of the Ohlson (1995) valuation model, we undertake a valuation analysis designed not only to complement the event study results, but more importantly to provide insights into the capital market's assessment of the magnitude of the economic impact of the proposed ETS as reflected in market capitalization. Here, our results show that the market assigns the most carbon-intensive sample firms a market value decrement relative to other sample firms of between 7% and 10% of market capitalization. Further, based on the carbon emission profile of the sample firms we infer a ‘future carbon permit price’ of between AUD$17 per tonne and AUD$26 per tonne of carbon dioxide emitted. This estimate is more precise than those of industry reports, which set a carbon price of between AUD$15 and AUD$74 per tonne.
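For illustration only, here is a minimal sketch of how an implied permit price might be backed out from such a value decrement. This is not the study's own derivation (which rests on the modified Ohlson (1995) model); it assumes the decrement capitalises a perpetual annual permit cost at a constant discount rate, and the firm figures and 10% rate are hypothetical.

```python
# Hedged sketch (not the study's method): back out an implied carbon permit
# price from a market-value decrement, assuming the decrement equals the
# present value of a perpetual annual permit cost at a constant discount rate.

def implied_permit_price(market_cap_aud: float,
                         decrement_fraction: float,
                         annual_emissions_t: float,
                         discount_rate: float = 0.10) -> float:
    """Implied AUD price per tonne of CO2 consistent with the value decrement.

    market_cap_aud     -- firm market capitalisation (AUD)
    decrement_fraction -- share of market cap attributed to the ETS (e.g. 0.07-0.10)
    annual_emissions_t -- annual emissions (tonnes of CO2)
    discount_rate      -- assumed rate used to capitalise the permit cost
    """
    value_decrement = market_cap_aud * decrement_fraction
    # Perpetuity assumption: decrement = price * annual_emissions / discount_rate
    return value_decrement * discount_rate / annual_emissions_t

# Hypothetical firm: AUD 5bn market cap, 8% decrement, 2 Mt CO2 emitted per year.
print(implied_permit_price(5e9, 0.08, 2e6))  # -> 20.0 AUD per tonne
```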
Abstract:
Brisbane stands at the crossroads of many major economic, social and cultural opportunities as it positions itself as a cosmopolitan, globally networked metropolis of the twenty-first century. In order to link and leverage the existing screen industries infrastructure into Brisbane’s creative city plans, the paper argues for a re-think of the existing policy frameworks that support Australian screen culture and the national screen industries. Instead of remaining premised on a separation of these two activities, the paper argues for a greater recognition of the overlaps occurring in both production and consumption of screen content. By acknowledging the impact of new media technologies and social behaviours, and the way they are re-shaping media consumption and media production practices, film and media policy could be better positioned to complement the emerging creative city policy frameworks that are being fostered in a city like Brisbane. The paper argues that reconsideration of the culture/industry separation that characterizes contemporary policy settings underpinning Australian media and screen production assistance would not only assist in identifying crucial synergies within a creative city policy but would also invigorate policy settings for the screen industries and enable them to connect more efficiently to a shifting film and media production and consumption landscape.
Abstract:
The mechanical conditions in the repair tissues are known to influence the outcome of fracture healing. These mechanical conditions are determined by the stiffness of fixation and limb loading. Experimental studies have shown that there is a range of beneficial fixation stiffness for timely healing and that fixation stiffness that is either too flexible or too stiff impairs callus healing. However, much less is known about how mechanical conditions influence the biological processes that make up the sequence of bone repair, and whether mechanical stimulation is indeed required at all stages of repair. Secondary bone healing occurs through a sequence of events broadly characterised by inflammation, proliferation, consolidation and remodelling. It is our hypothesis that a change in fixation stiffness from very flexible to stiff can shorten the time to healing relative to constant fixation stiffness. Flexible fixation has the benefit of promoting greater callus formation and needs to be applied during the proliferative stage of repair; the greater callus size helps to stabilize the fragments earlier, allowing mineralization to occur faster. Stable/rigid fixation is then applied during the latter stage of repair to ensure mineralization of the callus. The predicted benefits of this inverse dynamization are shorter healing than with very flexible fixation, and healing comparable to or faster than with stable fixation, together with greater callus stiffness.
Abstract:
This practice-led doctorate involved the development of a collection – a bricolage – of interwoven fragments of literary texts and visual imagery exploring questions of speculative fiction, urban space and embodiment. As a supplement to the creative work, I also developed an exegesis, using a combination of theoretical and contextual analysis and critical reflections on my creative process and outputs. An emphasis on issues of creative practice and a sustained investigation into an aesthetics of fragmentation and assemblage is organised around the concept and methodology of bricolage, the everyday art of ‘making do’. The exegesis also addresses my interest in the city and urban forms of subjectivity and embodiment through the use of a range of theorists, including Michel de Certeau and Elizabeth Grosz.
Abstract:
Ancient sandstones include important reservoirs for hydrocarbons (oil and gas), but, in many cases, their ability to serve as reservoirs is heavily constrained by the effects of carbonate cements on porosity and permeability. This study investigated the controls on distribution and abundance of carbonate cements within the Jurassic Plover Formation, Browse Basin, North West Shelf, Australia. Samples from two wells within the Torosa Gas Field were analysed petrographically, with point counting of 59 thin sections, and mineralogically, by X-ray diffraction. Selected samples were also analysed for stable isotopes of O and C. Sandstones were classified into eleven groups. Most abundant are quartzarenites and then calcareous quartzarenites. Lithology ranged between sandstones consisting of mostly quartz with scant or no carbonate in the form of cement or allochems, to sandstones with as much as 40% carbonate. The major sources of carbonate cement in Torosa 1 and Torosa 4 sandstones were found to be early, shallow marine diagenetic processes (including cementation), followed by calcite cementation and recrystallisation of cements and allochems during redistribution by meteoric waters. Blocky and sparry calcite cements, indicative of meteoric environments on the basis of stable isotope values and palaeotemperature assessment, overprinted the initial shallow marine cement phase in all cases, and meteoric cements are dominant. Torosa 4 was influenced more by marine settings than Torosa 1, and thus has the greater potential for calcite cement. The relatively low compaction of calcite-cemented sandstones and the stable isotope data suggest deep burial cementation was not a major factor. The scarcity of volcanic rock fragments and authigenic clay indicates that alteration of feldspars was not a major source of calcite. Very little feldspar is present, altered or otherwise. Hence, increased alkalinity from feldspar dissolution is not a contributing factor in cement formation. Increased alkalinity from bacterial sulphate reduction in organic-rich fine sediments may have driven limited cementation in some samples. The main definable and significant source of diagenetic marine calcite cement originated from original marine cements and the nearby dissolution of biogenic sources (allochems) at relatively shallow depths. Later diagenetic fluids emplaced minor dolomite, but this cement did not greatly affect the reservoir quality in the samples studied.
Abstract:
While Magnetic Resonance Imaging and Ultrasound are used extensively for non-acute shoulder imaging, plain images are regularly required as a first investigation. This paper presents a snapshot of the diversity of projections performed and a review of the current evidence on the most appropriate projections. The projections recommended are suitable as a first investigation, and also to complement more advanced imaging.
Abstract:
This paper investigates engaging experienced birders, as volunteer citizen scientists, to analyze large recorded audio datasets gathered through environmental acoustic monitoring. Although audio data is straightforward to gather, automated analysis remains a challenging task; the existing expertise, local knowledge and motivation of the birder community can complement computational approaches and provide distinct benefits. We explored both the culture and practice of birders, and paradigms for interacting with recorded audio data. A variety of candidate design elements were tested with birders. This study contributes an understanding of how virtual interactions and practices can be developed to complement existing practices of experienced birders in the physical world. In so doing, it contributes a new approach to engagement in e-science. Whereas most citizen science projects task lay participants with discrete real-world or artificial activities, sometimes using extrinsic motivators, this approach builds on existing intrinsically satisfying practices.
Abstract:
The latest paradigm shift in government, termed Transformational Government, puts the citizen at the centre of attention. Including citizens in the design of online one-stop portals can help governmental organisations to become more customer-focussed. This study describes the initial efforts of an Australian state government to develop an information architecture to structure the content of their future one-stop portal. To this end, card sorting exercises were conducted and analysed, utilising contemporary approaches found in academic and non-scientific literature. This paper describes the findings of the card sorting exercises in this particular case and discusses the suitability of the applied approaches in general. These approaches are categorised as non-statistical, statistical, and hybrid. Thus, on the one hand, this paper contributes to academia by describing the application of different card sorting approaches and discussing their strengths and weaknesses. On the other hand, this paper contributes to practice by explaining the approach that has been taken by the authors’ research partner in order to develop a customer-focussed governmental one-stop portal. In doing so, the authors provide decision support for practitioners with regard to different analysis methods that can be used to complement recent approaches in Transformational Government.
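As a hedged illustration of one statistical route for analysing card sorting data (not necessarily the approach taken in the study), the sketch below builds an item co-occurrence matrix from open card sorts and clusters it hierarchically to suggest candidate groupings for an information architecture. The card labels, sorts and choice of average linkage are assumptions for illustration only.

```python
# Hedged sketch of a statistical card-sort analysis: cluster items by how often
# participants grouped them together. Card labels and sorts are hypothetical.

from collections import defaultdict
from itertools import combinations

from scipy.cluster.hierarchy import fcluster, linkage

# Each participant's open sort: a list of groups, each group a set of card labels.
sorts = [
    [{"renew licence", "pay fine"}, {"enrol in school", "find childcare"}],
    [{"renew licence", "find childcare"}, {"pay fine", "enrol in school"}],
    [{"renew licence", "pay fine", "enrol in school"}, {"find childcare"}],
]

cards = sorted({card for sort in sorts for group in sort for card in group})

# Co-occurrence: fraction of participants who placed the two cards together.
co = defaultdict(float)
for sort in sorts:
    for group in sort:
        for a, b in combinations(sorted(group), 2):
            co[(a, b)] += 1 / len(sorts)

# Condensed distance matrix (1 - similarity), then average-linkage clustering.
dist = [1 - co[(a, b)] for a, b in combinations(cards, 2)]
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")

for card, cluster in zip(cards, labels):
    print(cluster, card)  # suggested top-level groupings for the portal
```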
Abstract:
In Social Science (Organization Studies, Economics, Management Science, Strategy, International Relations, Political Science…) the quest for addressing the question “what is a good practitioner?” has been around for centuries, with the underlying assumption that good practitioners should lead organizations to higher levels of performance. Hence, to ask “what is a good ‘captain’?” is not a new question, we should add! (e.g. Tsoukas & Cummings, 1997, p. 670; Söderlund, 2004, p. 190). This interrogation leads us to consider problems such as the relations between the dichotomies of Theory and Practice, rigor and relevance of research, and ways of knowing and knowledge forms. On the one hand we face the “Enlightenment” assumptions underlying modern positivist Social science, grounded in the “unity-of-science dream of transforming and reducing all kinds of knowledge to one basic form and level” and cause-effect relationships (Eikeland, 2012, p. 20), and on the other, the postmodern interpretivist proposal, and its “tendency to make all kinds of knowing equivalent” (Eikeland, 2012, p. 20). In the project management space, this quest aims at addressing one of the fundamental problems in the field: projects still do not deliver their expected benefits and promises, and therefore the socio-economic good (Hodgson & Cicmil, 2007; Bredillet, 2010; Lalonde et al., 2012). The Cartesian tradition supporting projects research and practice for the last 60 years (Bredillet, 2010, p. 4) has led to the lack of relevance to practice of the current conceptual base of project management, despite the sum of research, the development of standards, best and good practices, and the related development of project management bodies of knowledge (Packendorff, 1995, p. 319–323; Cicmil & Hodgson, 2006, p. 2–6; Hodgson & Cicmil, 2007, p. 436–7; Winter et al., 2006, p. 638). Referring to both Hodgson (2002) and Giddens (1993), we could say that those who expect a “social-scientific Newton” to revolutionize this young field “are not only waiting for a train that will not arrive, but are in the wrong station altogether” (Hodgson, 2002, p. 809; Giddens, 1993, p. 18). Meanwhile, in the postmodern stream mainly rooted in the “practice turn” (e.g. Hällgren & Lindahl, 2012), the shift from methodological individualism to social viscosity and the advocated pluralism serve to reinforce the very “functional stupidity” (Alvesson & Spicer, 2012, p. 1194) that this postmodern stream aims at overcoming. We suggest here that addressing the question “what is a good PM?” requires a philosophy of practice perspective to complement the “usual” philosophy of science perspective. The questioning of the modern Cartesian tradition mirrors a similar one made within Social science (Say, 1964; Koontz, 1961, 1980; Menger, 1985; Warry, 1992; Rothbard, 1997a; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Boisot & McKelvey, 2010), calling for new thinking. In order to get outside the rationalist ‘box’, Toulmin (1990, p. 11), along with Tsoukas & Cummings (1997, p. 655), suggests a possible path, summarizing the thoughts of many authors: “It can cling to the discredited research program of the purely theoretical (i.e. 
“modern”) philosophy, which will end up by driving it out of business: it can look for new and less exclusively theoretical ways of working, and develop the methods needed for a more practical (“post-modern”) agenda; or it can return to its pre-17th century traditions, and try to recover the lost (“pre-modern”) topics that were side-tracked by Descartes, but can be usefully taken up for the future” (Toulmin, 1990, p. 11). Thus, paradoxically and interestingly, in their quest for so-called post-modernism, many authors build on “pre-modern” philosophies such as the Aristotelian one (e.g. MacIntyre, 1985, 2007; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Blomquist et al., 2010; Lalonde et al., 2012). It is perhaps because the post-modern stream emphasizes a dialogic process restricted to reliance on voice and textual representation that it limits the meaning of communicative praxis and weakens practice, turning attention away from more fundamental issues associated with problem-definition and knowledge-for-use in action (Tedlock, 1983, p. 332–4; Schrag, 1986, p. 30, 46–7; Warry, 1992, p. 157). Eikeland suggests that the Aristotelian “gnoseology allows for reconsidering and reintegrating ways of knowing: traditional, practical, tacit, emotional, experiential, intuitive, etc., marginalised and considered insufficient by modernist [and post-modernist] thinking” (Eikeland, 2012, p. 20–21). By contrast with modernist one-dimensional thinking and relativist, pluralistic post-modernism, we suggest, in a turn to an Aristotelian pre-modern lens, re-conceptualising (“re” involving here a “re”-turn to pre-modern thinking) the “do” and shifting the perspective from what a good PM is (philosophy of science lens) to what a good PM does (philosophy of practice lens) (Aristotle, 1926a). As Tsoukas & Cummings put it: “In the Aristotelian tradition to call something good is to make a factual statement. To ask, for example, ‘what is a good captain?’ is not to come up with a list of attributes that good captains share (as modern contingency theorists would have it), but to point out the things that those who are recognized as good captains do.” (Tsoukas & Cummings, 1997, p. 670) Thus, this conversation offers a dialogue and deliberation about a central question: What does a good project manager do? The conversation is organized around a critique of the underlying assumptions supporting the modern, post-modern and pre-modern relations to ways of knowing, forms of knowledge and “practice”.
Abstract:
The increased adoption of business process management approaches, tools and practices has led organizations to accumulate large collections of business process models. These collections can easily include hundreds to thousands of models, especially in the context of multinational corporations or as a result of organizational mergers and acquisitions. A concrete problem is thus how to maintain these large repositories in such a way that their complexity does not hamper their practical usefulness as a means to describe and communicate business operations. This paper proposes a technique to automatically infer suitable names for business process models and fragments thereof. This technique is useful for model abstraction scenarios, as for instance when user-specific views of a repository are required, or as part of a refactoring initiative aimed at reducing the repository’s complexity. The technique is grounded in an adaptation of the theory of meaning to the realm of business process models. We implemented the technique in a prototype tool and conducted an extensive evaluation using three process model collections from practice and a case study involving process modelers with different levels of experience.
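Since the paper's own naming technique (an adaptation of the theory of meaning) is not detailed in the abstract, the following is only a hypothetical, much simpler heuristic, sketched to show the kind of input and output involved: naming a fragment after the dominant action and business object in its activity labels. The labels and fallback action are assumptions.

```python
# Hypothetical naming heuristic (not the paper's technique): name a fragment
# after its dominant action and business object, falling back to a generic
# action when no single action dominates the activity labels.

from collections import Counter

def suggest_name(activity_labels: list[str]) -> str:
    """Suggest a name for a process model fragment from its activity labels."""
    actions = Counter()
    objects = Counter()
    for label in activity_labels:
        words = label.lower().split()
        if not words:
            continue
        actions[words[0]] += 1                         # verb, e.g. "check"
        objects[" ".join(words[1:]) or words[0]] += 1  # object, e.g. "purchase order"

    top_action, action_count = actions.most_common(1)[0]
    top_object, _ = objects.most_common(1)[0]
    # Use the dominant action only if it covers more than half of the activities.
    action = top_action if action_count > len(activity_labels) / 2 else "handle"
    return f"{action.capitalize()} {top_object}"

fragment = ["Check purchase order", "Approve purchase order", "Archive purchase order"]
print(suggest_name(fragment))  # -> "Handle purchase order"
```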
Abstract:
Mechanisms of intervention and the contexts in which they are used interact in complex ways. This helps explain why we can’t overgeneralize about what works in respect of models of service designed to prevent or respond to homelessness. This said, there are some key messages from the totality of evidence accumulated to date. First, homelessness would be a lot easier to prevent, for first or subsequent episodes, if adequate and appropriate (developmentally/culturally) housing were available. Second (and often dependent on the first), timely support of a particular character ‘works’ both in a preventive sense and in periods when people experience ongoing challenges which may render them vulnerable to further homelessness. This paper reflects on some of the critical features of how we can generate and use evidence, and how these complement each other in important ways.
Abstract:
It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap has triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. This survey examines existing approaches in this field based on a common set of criteria and illustrates their key concepts using a running example. The analysis shows that existing approaches are characterized by the fact that they extend a conventional process modeling language with constructs that enable it to capture customizable process models. A customizable process model represents a family of process variants in such a way that each variant can be derived by adding or deleting fragments according to configuration parameters or according to a domain model. The survey reveals an abundance of customizable process modeling languages, embodying a diverse set of constructs. In contrast, there is comparatively little tool support for analyzing and constructing customizable process models, as well as a scarcity of empirical evaluations of languages in the field.
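As a toy illustration of the key concept of a customizable process model (not any specific language covered by the survey), the sketch below guards optional fragments with configuration parameters and derives a concrete variant by deleting the fragments whose guard is off; the fragment names, activities and parameters are hypothetical.

```python
# Toy encoding of a customizable process model: optional fragments are guarded
# by configuration parameters; a variant is derived by deleting unused ones.
# Fragment names, activities and parameters are hypothetical.

from dataclasses import dataclass

@dataclass
class Fragment:
    name: str
    activities: list[str]
    condition: str | None = None  # configuration parameter guarding the fragment

# A hypothetical family of sales process variants.
customizable_model = [
    Fragment("core", ["Receive order", "Ship goods", "Send invoice"]),
    Fragment("export", ["Prepare customs declaration"], condition="international"),
    Fragment("gift wrap", ["Wrap items"], condition="gift_option"),
]

def derive_variant(model: list[Fragment], config: dict[str, bool]) -> list[str]:
    """Derive one process variant by keeping only fragments whose guard is on."""
    activities: list[str] = []
    for fragment in model:
        if fragment.condition is None or config.get(fragment.condition, False):
            activities.extend(fragment.activities)
    return activities

print(derive_variant(customizable_model, {"international": False}))
print(derive_variant(customizable_model, {"international": True, "gift_option": True}))
```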
Abstract:
Poor health and injury represent major obstacles to the future economic security of Australia. The national economic cost of work-related injury is estimated at $57.5 billion per annum. Since exposure to high physical demands is a major risk factor for musculoskeletal injury, monitoring and managing such physical activity levels in workers is a potentially important injury prevention strategy. Current injury monitoring practices are inadequate for the provision of clinically valuable information about the tissue-specific responses to physical exertion. Injury of various soft tissue structures can manifest over time through accumulation of micro-trauma. Such micro-trauma has a propensity to increase the risk of acute injuries to soft-tissue structures such as muscle or tendon. As such, the capacity to monitor biomarkers that result from the disruption of these tissues offers a means of assisting the pre-emptive management of subclinical injury prior to acute failure, or of evaluating recovery processes. Here we have adopted an in vivo exercise-induced muscle damage model, allowing the application of laboratory-controlled conditions to assist in uncovering biochemical indicators associated with soft-tissue trauma and recovery. Importantly, urine was utilised as the diagnostic medium since it is non-invasive to collect, more acceptable to workers and less costly to employers. Moreover, it is our hypothesis that exercise-induced tissue degradation products enter the circulation, are subsequently filtered by the kidney and pass through to the urine. To test this hypothesis, a range of metabolomic and proteomic discovery-phase techniques were used, along with targeted approaches. Several small molecules relating to tissue damage were identified, along with a series of skeletal muscle-specific protein fragments resulting from exercise-induced soft-tissue damage. Each of the potential biomolecular markers appeared in urine in a time-dependent manner. Moreover, changes in their abundance seemed to be associated with functional recovery following the injury. This discovery may have important clinical applications for monitoring a variety of inflammatory myopathies, as well as novel applications in monitoring the musculoskeletal health status of workers, professional athletes and/or military personnel to reduce the onset of potentially debilitating musculoskeletal injuries within these professions.
Abstract:
The genesis of ferruginous nodules and pisoliths in soils and weathering profiles of coastal southern and eastern Australia has long been debated. It is not clear whether iron (Fe) nodules are redox accumulations, residues of Miocene laterite duricrust, or the products of contemporary weathering of Fe-rich sedimentary rocks. This study combines a catchment-wide survey of Fe nodule distribution in Poona Creek catchment (Fraser Coast, Queensland) with detailed investigations of a representative ferric soil profile to show that Fe nodules are derived from Fe-rich sandstones. Where these crop out, they are broken down, transported downslope by colluvial processes, and redeposited. Chemical and physical weathering transforms these eroded rock fragments into non-magnetic Fe nodules. Major features of this transformation include lower hematite/goethite and kaolinite/gibbsite ratios, increased porosity, etching of quartz grains, and development of rounded morphology and a smooth outer cortex. Iron nodules are commonly concentrated in ferric horizons. We show that these horizons form as the result of differential biological mixing of the soil. Bioturbation gradually buries nodules and rock fragments deposited at the surface of the soil, resulting in a largely nodule-free 'biomantle' over a ferric 'stone line'. Maghemite-rich magnetic nodules are a prominent feature of the upper half of the profile. These are most likely formed by the thermal alteration of non-magnetic nodules located at the top of the profile during severe bushfires. They are subsequently redistributed through the soil profile by bioturbation. Iron nodules occurring in the study area are products of contemporary weathering of Fe-rich rock units. They are not laterite duricrust residues nor are they redox accumulations, although redox-controlled dissolution/re-precipitation is an important component of post-depositional modification of these Fe nodules.
Abstract:
1. Essential hypertension occurs in people with an underlying genetic predisposition who subject themselves to adverse environmental influences. The number of genes involved is unknown, as is the extent to which each contributes to final blood pressure and the severity of the disease. 2. In the past, studies of potential candidate genes have been performed by association (case-control) analysis of unrelated individuals or linkage (pedigree or sibpair) analysis of families. These studies have resulted in several positive findings but, as one may expect, also an enormous number of negative results. 3. In order to uncover the major genetic loci for essential hypertension, it is proposed that scanning the genome systematically in 100-200 affected sibships should prove successful. 4. This involves genotyping sets of hypertensive sibships to determine their complement of several hundred microsatellite polymorphisms. Those that are highly informative, by having a high heterozygosity, are most suitable. Also, the markers need to be spaced sufficiently evenly across the genome so as to ensure adequate coverage. 5. Tests are performed to determine increased segregation of alleles of each marker with hypertension. The analytical tools involve specialized statistical programs that can detect such differences. Non-parametric multipoint analysis is an appropriate approach. 6. In this way, loci for essential hypertension are beginning to emerge.
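As a hedged illustration of the idea behind testing for increased allele segregation (the study itself relies on specialised non-parametric multipoint linkage software), the sketch below implements a simple affected-sib-pair mean IBD-sharing test at a single marker; the sib-pair data are hypothetical.

```python
# Hedged sketch of an affected-sib-pair "mean test" at a single marker, a
# simplified stand-in for non-parametric multipoint linkage analysis.
# Under no linkage, affected sib pairs share on average half their marker
# alleles identical by descent (IBD); excess sharing suggests linkage.

from math import sqrt

def mean_ibd_test(ibd_counts: list[int]) -> float:
    """z statistic for excess IBD sharing among affected sib pairs.

    ibd_counts -- alleles shared IBD (0, 1 or 2) for each affected sib pair.
    Under the null, the proportion shared has mean 0.5 and variance 1/8 per pair.
    """
    n = len(ibd_counts)
    mean_share = sum(count / 2 for count in ibd_counts) / n
    return (mean_share - 0.5) / sqrt(1 / (8 * n))

# 100 hypothetical sib pairs showing modest excess sharing at one marker.
ibd = [2] * 35 + [1] * 50 + [0] * 15
print(round(mean_ibd_test(ibd), 2))  # positive z (about 2.83) -> excess sharing
```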