Abstract:
The main aim of radiotherapy is to deliver a dose of radiation that is high enough to destroy the tumour cells while at the same time minimising the damage to normal healthy tissues. Clinically, this has been achieved by assigning a prescription dose to the tumour volume and a set of dose constraints on critical structures. Once an optimal treatment plan has been achieved, the dosimetry is assessed using the physical parameters of dose and volume. There has been an interest in using radiobiological parameters to evaluate and predict the outcome of a treatment plan in terms of both a tumour control probability (TCP) and a normal tissue complication probability (NTCP). In this study, simple radiobiological models that are available in a commercial treatment planning system were used to compare three-dimensional conformal radiotherapy (3D-CRT) and intensity-modulated radiotherapy (IMRT) treatments of the prostate. Initially, both 3D-CRT and IMRT were planned for 2 Gy/fraction to a total dose of 60 Gy to the prostate. The sensitivity of the TCP and the NTCP to both conventional dose escalation and hypo-fractionation was investigated. The biological responses were calculated using the Källman S-model. The complication-free tumour control probability (P+) is generated from the combined NTCP and TCP response values. It has been suggested that the alpha/beta ratio for prostate carcinoma cells may be lower than for most other tumour cell types. The effect of this on the modelled biological response for the different fractionation schedules was also investigated.
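The abstract does not reproduce the Källman S-model equations, but the point about the alpha/beta ratio can be illustrated with a simpler, standard stand-in: a Poisson tumour control probability built on linear-quadratic cell survival. All parameter values below are hypothetical, chosen only to show why a low alpha/beta ratio favours hypofractionation.

```python
import math

def tcp_poisson(n_fractions, dose_per_fraction, alpha, alpha_beta, clonogens=1e7):
    # Linear-quadratic surviving fraction after n fractions of d Gy each,
    # then the Poisson probability that no clonogenic cell survives.
    d = dose_per_fraction
    beta = alpha / alpha_beta
    surviving = math.exp(-n_fractions * (alpha * d + beta * d * d))
    return math.exp(-clonogens * surviving)

# Same 60 Gy total dose: conventional 30 x 2 Gy vs hypofractionated 20 x 3 Gy,
# with a deliberately low alpha/beta ratio, as has been suggested for prostate.
low_ab_conv = tcp_poisson(30, 2.0, alpha=0.15, alpha_beta=1.5)
low_ab_hypo = tcp_poisson(20, 3.0, alpha=0.15, alpha_beta=1.5)
```

With these illustrative numbers the hypofractionated schedule yields the higher TCP, consistent with the rationale for investigating hypofractionation when the tumour alpha/beta ratio is low.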
Abstract:
John Frazer's architectural work is inspired by living and generative processes. Both evolutionary and revolutionary, it explores information ecologies and the dynamics of the spaces between objects. Fuelled by an interest in the cybernetic work of Gordon Pask and Norbert Wiener, and the possibilities of the computer and the "new science" it has facilitated, Frazer and his team of collaborators have conducted a series of experiments that utilize genetic algorithms, cellular automata, emergent behaviour, complexity and feedback loops to create a truly dynamic architecture. Frazer studied at the Architectural Association (AA) in London from 1963 to 1969, and later became unit master of Diploma Unit 11 there. He was subsequently Director of Computer-Aided Design at the University of Ulster - a post he held while writing An Evolutionary Architecture in 1995 - and a lecturer at the University of Cambridge. In 1983 he co-founded Autographics Software Ltd, which pioneered microprocessor graphics. Frazer was awarded a personal chair at the University of Ulster in 1984. In Frazer's hands, architecture becomes machine-readable, formally open-ended and responsive. His work as computer consultant to Cedric Price's Generator Project of 1976 (see p84) led to the development of a series of tools and processes; these have resulted in projects such as the Calbuild Kit (1985) and the Universal Constructor (1990). These subsequent computer-orientated architectural machines are makers of architectural form beyond the full control of the architect-programmer. Frazer makes much reference to the multi-celled relationships found in nature, and their ongoing morphosis in response to continually changing contextual criteria.
He defines the elements that describe his evolutionary architectural model thus: "A genetic code script, rules for the development of the code, mapping of the code to a virtual model, the nature of the environment for the development of the model and, most importantly, the criteria for selection." In setting out these parameters for designing evolutionary architectures, Frazer goes beyond the usual notions of architectural beauty and aesthetics. Nevertheless, his work is not without an aesthetic: some pieces are a frenzy of mad wire, while others have a modularity that is reminiscent of biological form. Algorithms form the basis of Frazer's designs. These algorithms determine a variety of formal results dependent on the nature of the information they are given. His work, therefore, is always dynamic, always evolving and always different. Designing with algorithms is also critical to other architects featured in this book, such as Marcos Novak (see p150). Frazer has made an unparalleled contribution to defining architectural possibilities for the twenty-first century, and remains an inspiration to architects seeking to create responsive environments. Architects were initially slow to pick up on the opportunities that the computer provides. These opportunities are both representational and spatial: computers can help architects draw buildings and, more importantly, they can help architects create varied spaces, both virtual and actual. Frazer's work was groundbreaking in this respect, and well before its time.
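Frazer's five elements (a genetic code script, development rules, mapping to a virtual model, an environment, and selection criteria) map directly onto the canonical genetic-algorithm loop. The sketch below is a generic illustration of that loop, not Frazer's actual system; the bit-string encoding and the toy fitness function are invented for demonstration.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=40, seed=1):
    # "Genetic code script" = bit-string genome; "criteria for selection" = fitness.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)    # crossover ("development rules")
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)         # mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy "environment": fitness simply rewards genomes containing many 1s.
best = evolve(sum)
```

With this toy fitness, selection drives the population towards all-ones genomes; in Frazer's terms, the interesting work lies in the mapping from genome to virtual architectural model, which this sketch leaves abstract.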
Abstract:
This program of research examines the experience of chronic pain in a community sample. While it is clear that, as in patient samples, chronic pain in non-patient samples is associated with psychological distress and physical disability, the experience of pain across the total spectrum of pain conditions (including acute and episodic pain conditions) and during the early course of chronic pain is less clear. Information about these aspects of the pain experience is important because effective early intervention for chronic pain relies on identification of people who are likely to progress to chronicity post-injury. A conceptual model of the transition from acute to chronic pain was proposed by Gatchel (1991a). In brief, Gatchel’s model describes three stages that individuals who have a serious pain experience move through, each with worsening psychological dysfunction and physical disability. The aims of this program of research were to describe the experience of pain in a community sample in order to obtain pain-specific data on the problem of pain in Queensland, and to explore the usefulness of Gatchel’s Model in a non-clinical sample. Additionally, five risk factors and six protective factors were proposed as possible extensions to Gatchel’s Model. To address these aims, a prospective longitudinal mixed-method research design was used. Quantitative data were collected in Phase 1 via a comprehensive postal questionnaire. Phase 2 consisted of a follow-up questionnaire 3 months post-baseline. Phase 3 consisted of semi-structured interviews with a subset of the original sample 12 months post follow-up, which used qualitative data to provide a further in-depth examination of the experience and process of chronic pain from respondents’ point of view. The results indicate that chronic pain is associated with high levels of anxiety and depressive symptoms.
However, the levels of disability reported by this Queensland sample were generally lower than those reported by clinical samples and consistent with disability data reported in a New South Wales population-based study. With regard to the second aim of this program of research, while some elements of the pain experience of this sample were consistent with that described by Gatchel’s Model, overall the model was not a good fit with the experience of this non-clinical sample. The findings indicate that passive coping strategies (minimising activity), catastrophising, self-efficacy, optimism, social support, active strategies (use of distraction) and the belief that emotions affect pain may be important to consider in understanding the processes that underlie the transition to and continuation of chronic pain.
Abstract:
Providing support for reversible transformations as a basis for round-trip engineering is a significant challenge in model transformation research. While there are a number of current approaches, they require the underlying transformation to exhibit an injective behaviour when reversing changes. This, however, does not serve all practical transformations well. In this paper, we present a novel approach to round-trip engineering that does not place restrictions on the nature of the underlying transformation. Based on abductive logic programming, it allows us to compute a set of legitimate source changes that equate to a given change to the target model. Encouraging results are derived from an initial prototype that supports most concepts of the Tefkat transformation language.
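Stated abductively, reversing a non-injective transformation means enumerating every source change whose image equals the observed target change, rather than inverting a function. The toy sketch below illustrates the idea only; the transformation and model elements are invented and bear no relation to Tefkat's actual semantics.

```python
def abduce(transform, source_candidates, observed_target):
    # Abductive reading of round-trip engineering: return every legitimate
    # source change that would explain the observed change in the target model.
    return [s for s in source_candidates if transform(s) == observed_target]

# A non-injective toy transformation: several distinct source elements
# map to the same target element, so reversal yields a *set* of answers.
flatten = lambda src: src["name"].lower()
candidates = [{"name": "Order"}, {"name": "ORDER"},
              {"name": "order"}, {"name": "Invoice"}]
explanations = abduce(flatten, candidates, "order")
```

Because `flatten` is not injective, three source changes explain the target change equally well; an injective-only approach would be forced to pick one arbitrarily.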
Abstract:
With the advent of Service Oriented Architecture, Web Services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirement of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods to improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user’s interest. Considering the semantic relationships of words used in describing the services, as well as the use of input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content present in the Web service description language document, the support-based latent semantic kernel is constructed using an innovative concept of binning and merging on the large quantity of text documents covering diverse areas of domain of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase.
Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost for traversal. The third phase, which is the system integration, integrates the results from the preceding two phases by using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of the standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 Web services that are found in phase-I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase-I) and the link analysis (phase-II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
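The all-pairs shortest-path step of the link-analysis phase can be realised with the classic Floyd–Warshall algorithm. The sketch below uses hypothetical composition costs; the abstract does not specify which all-pairs algorithm or cost function was actually used.

```python
def all_pairs_shortest_paths(n, edges):
    # Floyd-Warshall over n services; edges are
    # (from_service, to_service, composition_cost) triples.
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):                      # allow k as an intermediate service
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Four services; the cheapest composition from 0 to 3 chains through 1 and 2
# rather than using the expensive direct link.
d = all_pairs_shortest_paths(4, [(0, 1, 1.0), (1, 2, 1.0),
                                 (2, 3, 1.0), (0, 3, 5.0)])
```

Here `d[0][3]` is the minimum traversal cost for composing service 0's output into service 3's input, which is the quantity the recommendation engine would rank compositions by.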
Abstract:
In architecture courses, instilling a wider understanding of the industry specific representations practiced in the Building Industry is normally done under the auspices of Technology and Science subjects. Traditionally, building industry professionals communicated their design intentions using industry specific representations. Originally these mainly two dimensional representations such as plans, sections, elevations, schedules, etc. were produced manually, using a drawing board. Currently, this manual process has been digitised in the form of Computer Aided Design and Drafting (CADD), or, ubiquitously, simply CAD. While CAD has significant productivity and accuracy advantages over the earlier manual method, it still only produces industry specific representations of the design intent. Essentially, CAD is a digital version of the drawing board. The tool used for the production of these representations in industry is still mainly CAD. This is also the approach taken in most traditional university courses and mirrors the reality of the situation in the building industry. A successor to CAD, in the form of Building Information Modelling (BIM), is presently evolving in the Construction Industry. CAD is mostly a technical tool that conforms to existing industry practices. BIM on the other hand is revolutionary both as a technical tool and as an industry practice. Rather than producing representations of design intent, BIM produces an exact Virtual Prototype of any building that in an ideal situation is centrally stored and freely exchanged between the project team. Essentially, BIM builds any building twice: once in the virtual world, where any faults are resolved, and finally, in the real world. There is, however, no established model for learning through the use of this technology in Architecture courses.
Queensland University of Technology (QUT), a tertiary institution that maintains close links with industry, recognises the importance of equipping their graduates with skills that are relevant to industry. As construction industry practices evolve, BIM skills are in increasing demand throughout the industry. As such, during the second half of 2008, QUT 4th year architectural students were formally introduced for the first time to BIM, as both a technology and as an industry practice. This paper will outline the teaching team’s experiences and methodologies in offering a BIM unit (Architectural Technology and Science IV) at QUT for the first time and provide a description of the learning model. The paper will present the results of a survey on the learners’ perspectives of both BIM and their learning experiences as they learn about and through this technology.
Abstract:
Recent decisions of the Family Court of Australia reflect concerns over the adversarial nature of the legal process. The processes and procedures of the judicial system militate against a detailed examination of the issues and rights of the parties in dispute. The limitations of the family law framework are particularly demonstrated in disputes over the custody of children, where the Court has tended to neglect the rights and interests of the primary carer. An alternative "unified family court" framework will be examined in which the Court pursues a more active and interventionist approach in the determination of family law disputes.
Abstract:
It has previously been found that complexes comprised of vitronectin and growth factors (VN:GF) enhance keratinocyte protein synthesis and migration. More specifically, these complexes have been shown to significantly enhance the migration of dermal keratinocytes derived from human skin. In view of this, it was thought that these complexes may hold potential as a novel therapy for healing chronic wounds. However, there was no evidence indicating that the VN:GF complexes would retain their effect on keratinocytes in the presence of chronic wound fluid. The studies in this thesis demonstrate for the first time that the VN:GF complexes not only stimulate proliferation and migration of keratinocytes, but also these effects are maintained in the presence of chronic wound fluid in a 2-dimensional (2-D) cell culture model. Whilst the 2-D culture system provided insights into how the cells might respond to the VN:GF complexes, this investigative approach is not ideal as skin is a 3-dimensional (3-D) tissue. In view of this, a 3-D human skin equivalent (HSE) model, which reflects more closely the in vivo environment, was used to test the VN:GF complexes on epidermopoiesis. These studies revealed that the VN:GF complexes enable keratinocytes to migrate, proliferate and differentiate on a de-epidermalised dermis (DED), ultimately forming a fully stratified epidermis. In addition, fibroblasts were seeded on DED and shown to migrate into the DED in the presence of the VN:GF complexes and hyaluronic acid, another important biological factor in the wound healing cascade. This HSE model was then further developed to enable studies examining the potential of the VN:GF complexes in epidermal wound healing. Specifically, a reproducible partial-thickness HSE wound model was created in fully-defined media and monitored as it healed. In this situation, the VN:GF complexes were shown to significantly enhance keratinocyte migration and proliferation, as well as differentiation. 
This model was also subsequently utilized to assess the wound healing potential of a synthetic fibrin-like gel that had previously been demonstrated to bind growth factors. Of note, keratinocyte re-epithelialisation was shown to be markedly improved in the presence of this 3-D matrix, highlighting its future potential for use as a delivery vehicle for the VN:GF complexes. Furthermore, when this synthetic fibrin-like gel was injected into a 4 mm diameter full-thickness wound created in the HSE, both keratinocytes and fibroblasts were shown to migrate into this gel, as revealed by immunofluorescence. Interestingly, keratinocyte migration into this matrix was found to be dependent upon the presence of the fibroblasts. Taken together, these data indicate that reproducible wounds, as created in the HSEs, provide a relevant ex vivo tool to assess potential wound healing therapies. Moreover, the models will decrease our reliance on animals for scientific experimentation. Additionally, it is clear that these models will significantly assist in the development of novel treatments, such as the VN:GF complexes and the synthetic fibrin-like gel described herein, ultimately facilitating their clinical trial in the treatment of chronic wounds.
Abstract:
Chronic wounds are a significant socioeconomic problem for governments worldwide. Approximately 15% of people who suffer from diabetes will experience a lower-limb ulcer at some stage of their lives, and 24% of these wounds will ultimately result in amputation of the lower limb. Hyperbaric Oxygen Therapy (HBOT) has been shown to aid the healing of chronic wounds; however, the causal reasons for the improved healing remain unclear and hence current HBOT protocols remain empirical. Here we develop a three-species mathematical model of wound healing that is used to simulate the application of hyperbaric oxygen therapy in the treatment of wounds. Based on our modelling, we predict that intermittent HBOT will assist chronic wound healing while normobaric oxygen is ineffective in treating such wounds. Furthermore, treatment should continue until healing is complete, and HBOT will not stimulate healing under all circumstances, leading us to conclude that finding the right protocol for an individual patient is crucial if HBOT is to be effective. We provide constraints that depend on the model parameters for the range of HBOT protocols that will stimulate healing. More specifically, we predict that patients with a poor arterial supply of oxygen, high consumption of oxygen by the wound tissue, chronically hypoxic wounds, and/or a dysfunctional endothelial cell response to oxygen are at risk of nonresponsiveness to HBOT. The work of this paper can, in some way, highlight which patients are most likely to respond well to HBOT (for example, those with a good arterial supply), and thus has the potential to assist in improving both the success rate and hence the cost-effectiveness of this therapy.
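The three-species model itself is not reproduced in the abstract. Its headline prediction (intermittent hyperbaric sessions help where normobaric oxygen does not) can nevertheless be caricatured with a deliberately simplified one-variable sketch in which healing proceeds only while wound oxygen exceeds a hypoxia threshold; every equation and parameter below is invented for illustration and is not the paper's model.

```python
def simulate_healing(session_oxygen, t_end=20.0, dt=0.01):
    # Caricature: healed fraction h grows logistically only while wound
    # oxygen exceeds a hypoxia threshold. Intermittent on/off sessions
    # raise oxygen above a chronically hypoxic baseline.
    baseline, threshold, rate = 0.5, 1.0, 1.0
    h, t = 0.01, 0.0
    while t < t_end:
        in_session = (t % 2.0) < 1.0          # alternate on/off sessions
        oxygen = session_oxygen if in_session else baseline
        if oxygen > threshold:                # forward-Euler growth step
            h += dt * rate * h * (1.0 - h)
        t += dt
    return h

healed_hbot = simulate_healing(session_oxygen=2.5)  # hyperbaric sessions
healed_nbot = simulate_healing(session_oxygen=1.0)  # normobaric: never above threshold
```

Even this caricature reproduces the qualitative prediction: the hyperbaric run heals while the normobaric run stalls at its initial state, because normobaric oxygen never clears the hypoxia threshold.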
Abstract:
The weaknesses of 'traditional' modes of instruction in accounting education have been widely discussed. Many contend that the traditional approach limits the ability to provide opportunities for students to raise their competency level and allow them to apply knowledge and skills in professional problem solving situations. However, the recent body of literature suggests that accounting educators are indeed actively experimenting with 'non-traditional' and 'innovative' instructional approaches, where some authors clearly favour one approach over another. But can one instructional approach alone meet the necessary conditions for different learning objectives? Taking into account the ever-changing landscape of not only business environments, but also the higher education sector, the premise guiding the collaborators in this research is that it is perhaps counterproductive to promote competing dichotomous views of 'traditional' and 'non-traditional' instructional approaches to accounting education, and that the notion of 'blended learning' might provide a useful framework to enhance the learning and teaching of accounting. This paper reports on the first cycle of a longitudinal study, which explores the possibility of using blended learning in first year accounting at one campus of a large regional university. The critical elements of blended learning which emerged in the study are discussed and, consistent with the design-based research framework, the paper also identifies key design modifications for successive cycles of the research.
Abstract:
An earlier CRC-CI project on ‘automatic estimating’ (AE) has shown the key benefit of model-based design methodologies in building design and construction to be the provision of timely quantitative cost evaluations. Furthermore, using AE during design improves design options, and results in improved design turn-around times, better design quality and/or lower costs. However, AEs for civil engineering structures do not exist; and research partners in the CRC-CI expressed interest in exploring the development of such a process. This document reports on these investigations. The central objective of the study was to evaluate the benefits and costs of developing an AE for concrete civil engineering works. By studying existing documents and through interviews with design engineers, contractors and estimators, we have established that current civil engineering practices (mainly roads/bridges) do not use model-based planning/design. Drawings are executed in 2D and only completed at the end of lengthy planning/design project management lifecycle stages. We have also determined that estimating plays two important, but different roles. The first is part of project management (which we have called macro level estimating). Estimating in this domain sets project budgets, controls quality delivery and contains costs. The second role is estimating during planning/design (micro level estimating). The difference between the two roles is that the former is performed at the end of various lifecycle stages, whereas the latter is performed at any suitable time during planning/design.
Abstract:
Principal Topic: Small and micro-enterprises are believed to play a significant part in economic growth and poverty alleviation in developing countries. However, there are a range of issues that arise when looking at the support required for local enterprise development, the role of micro-finance and sustainability. This paper explores the issues associated with the establishment and resourcing of micro-enterprise development and proposes a model of sustainable support of enterprise development in very poor developing economies, particularly in Africa. The purpose of this paper is to identify and address the range of issues raised by the literature and empirical research in Africa, regarding micro-finance and small business support, and to develop a model for sustainable support for enterprise development within a particular cultural and economic context. Micro-finance has become big business with a range of models - from those that operate on a strictly business basis to those that come from a philanthropic base. The models used grow from a range of philosophical and cultural perspectives. Entrepreneurship training is provided around the world. Success is often measured by the number involved and the repayment rates - which are very high, largely because of the lending models used. This paper will explore the range of options available and propose a model that can be implemented and evaluated in rapidly changing developing economies. Methodology/Key Propositions: The research draws on entrepreneurial and micro-finance literature and empirical research undertaken in Mozambique, which lies along the Indian Ocean coastline of Southern Africa. As a result of war and natural disasters over a prolonged period, there is little industry, primary industries are primitive and there is virtually no infrastructure. Mozambique is ranked as one of the poorest countries in the world.
The conditions in Mozambique, though not identical, reflect conditions in many other parts of Africa. A number of key elements in the development of enterprises in poor countries are explored, including: the impact of micro-finance; sustainable models of micro-finance; education and training; capacity building; support mechanisms; the impact on poverty, families and the local economy; survival entrepreneurship versus growth entrepreneurship; and transitions to the formal sector. Results and Implications: The result of this study is the development of a model for providing intellectual and financial resources to micro-entrepreneurs in poor developing countries in a sustainable way. The model provides a base for ongoing research into the process of entrepreneurial growth in African developing economies. The research raises a number of issues regarding sustainability, including the nature of the donor/recipient relationship, access to affordable resources, the impact of individual entrepreneurial activity on the local economy and the need for ongoing research to understand the whole process and its impact, intended and unintended.
Abstract:
Key topics: Since the birth of the Open Source movement in the mid-1980s, open source software has become more and more widespread. Amongst others, the Linux operating system, the Apache web server and the Firefox web browser have taken substantial market share from their proprietary competitors. Open source software is governed by particular types of licenses. Whereas proprietary licenses only allow the software's use in exchange for a fee, open source licenses grant users more rights, such as the free use, free copying, free modification and free distribution of the software, as well as free access to the source code. This new phenomenon has raised many managerial questions: organizational issues related to the system of governance that underlies such open source communities (Raymond, 1999a; Lerner and Tirole, 2002; Lee and Cole, 2003; Mockus et al., 2000; Tuomi, 2000; Demil and Lecocq, 2006; O'Mahony and Ferraro, 2007; Fleming and Waguespack, 2007), collaborative innovation issues (Von Hippel, 2003; Von Krogh et al., 2003; Von Hippel and Von Krogh, 2003; Dahlander, 2005; Osterloh, 2007; David, 2008), issues related to the nature as well as the motivations of developers (Lerner and Tirole, 2002; Hertel, 2003; Dahlander and McKelvey, 2005; Jeppesen and Frederiksen, 2006), public policy and innovation issues (Jullien and Zimmermann, 2005; Lee, 2006), technological competition issues related to standard battles between proprietary and open source software (Bonaccorsi and Rossi, 2003; Bonaccorsi et al., 2004; Economides and Katsamakas, 2005; Chen, 2007), and intellectual property rights and licensing issues (Laat, 2005; Lerner and Tirole, 2005; Gambardella, 2006; Determann et al., 2007). A major unresolved issue concerns open source business models and revenue capture, given that open source licenses imply no fee for users.
On this topic, articles show that a commercial activity based on open source software is possible, as they describe different possible ways of doing business around open source (Raymond, 1999; Dahlander, 2004; Daffara, 2007; Bonaccorsi and Merito, 2007). These studies usually look at open source-based companies, which encompass a wide range of firms with different categories of activities: providers of packaged open source solutions, IT Services & Software Engineering firms and open source software publishers. However, the business model implications are different for each of these categories: the activities of providers of packaged solutions and of IT Services & Software Engineering firms are based on software developed outside their boundaries, whereas commercial software publishers sponsor the development of the open source software. This paper focuses on open source software publishers' business models, as this issue is even more crucial for this category of firms, which take the risk of investing in the development of the software. The literature so far identifies and depicts only two generic types of business models for open source software publishers: the "bundling" business model (Pal and Madanmohan, 2002; Dahlander, 2004) and the dual licensing business model (Välimäki, 2003; Comino and Manenti, 2007). Nevertheless, these business models are not applicable in all circumstances. Methodology: The objectives of this paper are: (1) to explore in which contexts the two generic business models described in the literature can be implemented successfully; and (2) to depict an additional business model for open source software publishers which can be used in a different context. To do so, this paper draws upon an exploratory case study of IdealX, a French open source security software publisher. This case study consists of a series of three interviews conducted between February 2005 and April 2006 with the co-founder and the business manager.
It aims at depicting the process of IdealX's search for the appropriate business model between its creation in 2000 and 2006. This software publisher tried both generic types of open source software publishers' business models before designing its own. Consequently, through IdealX's trials and errors, I investigate the conditions under which such generic business models can be effective. Moreover, this study describes the business model finally designed and adopted by IdealX: an additional open source software publisher's business model based on the principle of "mutualisation", which is applicable in a different context. Results and implications: Finally, this article contributes to ongoing empirical work within entrepreneurship and strategic management on open source software publishers' business models: it provides the characteristics of three generic business models (the bundling business model, the dual licensing business model and the mutualisation business model) as well as the conditions under which they can be successfully implemented (regarding the type of product developed and the competencies of the firm). This paper also goes further than the traditional concept of business model used by scholars in the open source literature. In this article, a business model is not only considered as a way of generating income (a "revenue model" (Amit and Zott, 2001)), but rather as the necessary conjunction of value creation and value capture, in line with the recent literature on business models (Amit and Zott, 2001; Chesbrough and Rosenbloom, 2002; Teece, 2007). Consequently, this paper analyses the business models from the point of view of these two components.
Abstract:
This project is an extension of a previous CRC project (220-059-B) which developed a program for life prediction of gutters in Queensland schools. A number of sources of information on service life of metallic building components were formed into databases linked to a Case-Based Reasoning Engine which extracted relevant cases from each source.
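The abstract gives no implementation detail for the Case-Based Reasoning Engine. Retrieval in such engines is commonly a weighted nearest-neighbour match over case attributes; the sketch below is a hypothetical illustration, with invented attributes and service-life figures that do not come from the project's databases.

```python
def similarity(query, case, weights):
    # Fraction of the total attribute weight on which query and case agree.
    total = sum(weights.values())
    score = sum(w for attr, w in weights.items()
                if query.get(attr) == case.get(attr))
    return score / total

def retrieve(query, cases, weights, k=2):
    # Return the k cases most similar to the query situation.
    return sorted(cases, key=lambda c: similarity(query, c, weights),
                  reverse=True)[:k]

# Invented example cases from a service-life database of metallic components.
cases = [
    {"component": "gutter", "material": "zincalume",  "exposure": "coastal", "life_years": 15},
    {"component": "gutter", "material": "galvanised", "exposure": "inland",  "life_years": 25},
    {"component": "roof",   "material": "zincalume",  "exposure": "inland",  "life_years": 30},
]
weights = {"component": 2.0, "material": 1.0, "exposure": 1.0}
query = {"component": "gutter", "material": "zincalume", "exposure": "coastal"}
best = retrieve(query, cases, weights, k=1)[0]
```

Weighting `component` most heavily reflects one plausible design choice: cases about the same component type are usually more transferable than cases that merely share a material or exposure class.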