103 results for Set of Weak Stationary Dynamic Actions


Relevance: 100.00%

Publisher:

Abstract:

Columns are one of the key load-bearing elements that are highly susceptible to vehicle impacts. The resulting severe damage to columns may lead to failures of the supporting structure that are catastrophic in nature. However, columns in existing structures are seldom designed for impact because of inadequacies in design guidelines. The impact behaviour of columns designed for gravity loads and actions other than impact is therefore of interest. A comprehensive investigation is conducted on reinforced concrete columns, with a particular focus on the vulnerability of exposed columns and on mitigation techniques under low- to medium-velocity car and truck impacts. The investigation is based on non-linear explicit computer simulations of impacted columns followed by a comprehensive validation process. The impact is simulated using force pulses generated from full-scale vehicle impact tests. A material model capable of simulating triaxial loading conditions is used in the analyses. Circular columns of adequate capacity for five- to twenty-storey buildings, designed according to Australian standards, are considered in the investigation. The crucial parameters associated with routine column design and the different load combinations applied at the serviceability stage on typical columns are considered in detail. Axially loaded columns are examined at the initial stage, and the investigation is extended to analyse the impact behaviour under single-axis bending and biaxial bending. The impact capacity reduction under varying axial loads is also investigated. The effects of the various load combinations are quantified, and the residual capacity of the impacted columns based on the state of damage, together with mitigation techniques, is also presented. In addition, the contribution of each individual parameter to the failure load is scrutinised, and analytical equations are developed to identify the critical impulses in terms of the geometrical and material properties of the impacted column. In particular, an innovative technique was developed and introduced to improve the accuracy of the equations where other techniques failed because of the shape of the error distribution. Above all, the equations can be used to quantify the critical impulse for three consecutive points (load combinations) located on the interaction diagram for a particular column. Consequently, linear interpolation can be used to quantify the critical impulse for loading points located in between on the interaction diagram. Having provided a known force and impulse pair for an average impact duration, this method can be extended to assess the vulnerability of columns for a general vehicle population, based on an analytical method that quantifies the critical peak forces under different impact durations. The contribution of this research is therefore not limited to producing simplified yet rational design guidelines and equations; it also provides a comprehensive solution for quantifying impact capacity while delivering new insight to the scientific community for dealing with impacts.
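
For illustration, the interpolation step described above can be sketched as follows. The function, variable names and the (axial load, critical impulse) pairs are hypothetical placeholders rather than values from the thesis; the sketch only assumes that the critical impulse has been tabulated at consecutive load combinations on the interaction diagram.

    # Illustrative sketch only: interpolating a critical impulse between two known
    # (axial load, critical impulse) pairs on a column interaction diagram.
    # The names and data points are hypothetical, not taken from the thesis.

    def critical_impulse(axial_load, known_points):
        """Linearly interpolate the critical impulse for an axial load that falls
        between two load combinations with known critical impulses.

        known_points: list of (axial_load_kN, critical_impulse_kNs) tuples,
        sorted by axial load.
        """
        for (p0, i0), (p1, i1) in zip(known_points, known_points[1:]):
            if p0 <= axial_load <= p1:
                # standard linear interpolation between the bracketing points
                return i0 + (i1 - i0) * (axial_load - p0) / (p1 - p0)
        raise ValueError("Axial load lies outside the tabulated load combinations")

    # Hypothetical values for three consecutive points on the interaction diagram.
    points = [(1000.0, 18.0), (2000.0, 22.5), (3000.0, 20.0)]
    print(critical_impulse(1500.0, points))  # -> 20.25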

Relevance: 100.00%

Publisher:

Abstract:

This thesis reports on a study in which research participants, four mature-aged females starting an undergraduate degree at a regional Australian university, collaborated with the researcher in co-constructing a self-efficacy narrative. For the purpose of the study, self-efficacy was conceptualised as a means by which an individual initiates action to engage in a task or set of tasks, applies effort to perform the task or set of tasks, and persists in the face of obstacles encountered in order to achieve successful completion of the task or set of tasks. Qualitative interviews were conducted with the participants, initially investigating their respective life histories for an understanding of how they made the decision to embark on their respective academic programs. Additional data were generated from a written exercise prompting participants to furnish specific examples of self-efficacy. These data were incorporated into each individual's self-efficacy narrative, produced as the outcome of the "narrative analysis". Another aspect of the study entailed "analysis of narrative", in which analytic procedures were used to identify themes common to the self-efficacy narratives. Five main themes were identified: (a) participants' experience of schooling – for several participants their formative experience of school was not always positive, and yet their narratives demonstrated their agency in persevering and taking on university-level studies as mature-aged persons; (b) recognition of family as an early influence – these influences were described as being both positive, in the sense of being supportive and encouraging, as well as posing obstacles that participants had to overcome in order to pursue their goals; (c) availability of supportive persons – the support of particular persons was acknowledged as a factor that enabled participants to persist in their respective endeavours; (d) luck or chance factors – these were recognised as placing participants in the right place at the right time, from which circumstances they applied considerable effort in order to convert the opportunity into a successful outcome; and (e) self-efficacy, which was identified as a major theme found in the narratives. The study included an evaluation of the research process by participants. A number of themes were identified in respect of the manner in which the research process was experienced as helpful. Participants commented that: (a) the research process was helpful in clarifying their respective career goals; (b) they appreciated opportunities provided by the research process to view their lives from a different perspective and to better understand what motivated them and what their preferred learning styles were; (c) their past successes in a range of different spheres were made more evident to them as they were guided in self-reflection, and their self-efficacious behaviour was affirmed; and (d) they valued the opportunities provided by their participation in the research process to identify strengths of which they had not been consciously aware, to find confirmation of strengths they knew they possessed, and in some instances to rectify misconceptions they had held about aspects of their personality. The study made three important contributions to knowledge. Firstly, it provided a detailed explication of a qualitative narrative method for exploring self-efficacy, with the potential for application to other issues in educational, counselling and psychotherapy research.
Secondly, it consolidated and illustrated social cognitive theory by proposing a dynamic model of self-efficacy, drawing on constructivist and interpretivist paradigms and extending extant theory and models. Finally, the study made a contribution to the debate concerning the nexus of qualitative research and counselling by providing guidelines for ethical practice in both endeavours for the practitioner-researcher.

Relevance: 100.00%

Publisher:

Abstract:

How do humans respond to their social context? This question is becoming increasingly urgent in a society where democracy requires that the citizens of a country help to decide upon its policy directions, and yet those citizens frequently have very little knowledge of the complex issues that these policies seek to address. Frequently, we find that humans make their decisions more with reference to their social setting than to the arguments of scientists, academics, and policy makers. It is broadly anticipated that agent-based modelling (ABM) of human behaviour will make it possible to treat such social effects, but we take the position here that a more sophisticated treatment of context will be required in many such models. While notions such as historical context (where the past history of an agent might affect its later actions) and situational context (where the agent will choose a different action in a different situation) abound in ABM scenarios, we will discuss a case of a potentially changing context, where social effects can have a strong influence upon the perceptions of a group of subjects. In particular, we shall discuss a recently reported case where a biased worm in an election debate led to significant distortions in the reports given by participants as to who won the debate (Davis et al. 2011). Thus, participants in a different social context drew different conclusions about the perceived winner of the same debate, with significant associated differences between the two groups as to whom they would vote for in the coming election. We extend this example to the problem of modelling the likely electoral responses of agents in the context of the climate change debate, and discuss the notion of interference between related questions that might be asked of an agent in a social simulation intended to simulate their likely responses. A modelling technology that could account for such strong social contextual effects would benefit regulatory bodies that need to navigate between multiple interests and concerns, and we shall present one viable avenue for constructing such a technology. A geometric approach will be presented, where the internal state of an agent is represented in a vector space, and their social context is naturally modelled as a set of basis states chosen with reference to the problem space.
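
As a rough illustration of the geometric approach mentioned above, the sketch below (not the authors' implementation) represents an agent's internal state as a unit vector and a social context as an orthonormal basis, reading response probabilities off the squared projections onto the basis vectors. The state values and rotation angle are invented for the example.

    import numpy as np

    # Minimal sketch, assuming a quantum-like geometric model: the agent's state
    # is a unit vector; a social context is an orthonormal basis; the probability
    # of each response is the squared projection onto the corresponding basis vector.

    def response_probabilities(state, context_basis):
        """state: 1-D unit vector; context_basis: rows form an orthonormal basis."""
        projections = context_basis @ state
        return projections ** 2          # squared amplitudes sum to 1 for a unit state

    state = np.array([0.8, 0.6])                               # hypothetical agent state
    debate_context = np.eye(2)                                 # "A won" / "B won" basis
    shifted_context = np.array([[np.cos(0.4), np.sin(0.4)],
                                [-np.sin(0.4), np.cos(0.4)]])  # a different social context

    print(response_probabilities(state, debate_context))   # e.g. [0.64, 0.36]
    print(response_probabilities(state, shifted_context))  # different judgement, same state

The same internal state therefore yields different response probabilities once the contextual basis rotates, which is the kind of context-dependent judgement the abstract describes.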

Relevance: 100.00%

Publisher:

Abstract:

This paper presents the hardware development and testing of a new concept for air sampling via the integration of a prototype spore trap onboard an unmanned aerial system (UAS). We propose the integration of a prototype spore trap onboard a UAS to allow multiple captures of spores of pathogens at single remote locations at high or low altitude, otherwise not possible with stationary sampling devices. We also demonstrate the capability of this system for the capture of multiple time-stamped samples during a single mission. Wind tunnel testing was followed by simulation, and flight testing was conducted to measure and quantify the spread during simulated airborne air sampling operations. During autonomous operations, the onboard autopilot commands the servo to rotate the sampling device to a new indexed location once the UAS reaches the predefined waypoint or set of waypoints (which represents the region of interest). Time-stamped UAS data are continuously logged during the flight to assist with analysis of the particles collected. Testing and validation of the autopilot and spore trap integration, functionality, and performance are described. These tools may enhance the ability to detect new incursions of spores.
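
The waypoint-triggered indexing logic described above can be sketched schematically as below. The autopilot and servo interfaces, the number of sampling slots, and the mission loop are hypothetical stand-ins, not the actual flight stack used in this work.

    import time

    # Schematic sketch of waypoint-triggered sample indexing: at each waypoint the
    # autopilot commands the servo to rotate the trap to a fresh indexed position
    # and the event is time-stamped for later analysis. Interfaces are hypothetical.

    NUM_SLOTS = 8  # assumed number of indexed sampling positions on the trap

    def run_sampling_mission(autopilot, servo, waypoints, log):
        slot = 0
        for wp in waypoints:
            autopilot.fly_to(wp)                    # block until the waypoint is reached
            servo.rotate_to_slot(slot)              # expose a fresh sampling surface
            log.append({"slot": slot,
                        "waypoint": wp,
                        "timestamp": time.time()})  # time-stamp for later analysis
            slot = (slot + 1) % NUM_SLOTS           # advance to the next indexed location
        return log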

Relevance: 100.00%

Publisher:

Abstract:

Technologies and languages for integrated processes are a relatively recent innovation. Over that period, many divergent waves of innovation have transformed process integration. Like sockets and distributed objects, early workflow systems offered programming interfaces that connected the process modelling layer to any middleware. BPM systems emerged later, connecting the modelling world to middleware through components. While BPM systems increased ease of use (modelling convenience), long-standing and complex interactions involving many process instances remained difficult to model. Enterprise Service Buses (ESBs) followed, connecting process models to heterogeneous forms of middleware. ESBs, however, generally force modellers to choose a particular underlying middleware and to stick to it, despite their ability to connect with many forms of middleware. Furthermore, ESBs encourage process integrations to be modelled on their own, logically separate from the process model. This can lead to an inability to reason about long-standing conversations at the process layer. Technologies and languages for process integration generally lack formality. This has led to arbitrariness in the underlying language building blocks. Conceptual holes exist in a range of technologies and languages for process integration, and these can lead to customer dissatisfaction and to integration projects failing to reach their potential. Standards for process integration share fundamental flaws similar to those of the languages and technologies, and standards are also in direct competition with one another, causing a lack of clarity. Thus the area of greatest risk in a BPM project remains process integration, despite major advancements in the technology base. This research examines some fundamental aspects of communication middleware and how these fundamental building blocks of integration can be brought to the process modelling layer in a technology-agnostic manner. In this way, process modelling can be conceptually complete without becoming stuck in a particular middleware technology. Coloured Petri nets are used to define a formal semantics for the fundamental aspects of communication middleware. They provide the means to define and model the dynamic aspects of various integration middleware. Process integration patterns are used as a tool to codify common problems to be solved. Object Role Modelling, a formal modelling technique, was used to define the syntax of the proposed process integration language. This thesis provides several contributions to the field of process integration. It proposes a framework defining the key notions of integration middleware. This framework provides a conceptual foundation upon which a process integration language can be built. The thesis defines an architecture that allows various forms of middleware to be aggregated and reasoned about at the process layer. It provides a comprehensive set of process integration patterns, which constitute a benchmark for the kinds of problems a process integration language must support. The thesis proposes a process integration modelling language and a partial implementation that is able to enact the language. A process integration pilot project in a German hospital is briefly described at the end of the thesis. The pilot is based on ideas in this thesis.
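
As a toy, middleware-agnostic illustration of the kind of fundamental building block the thesis formalises, the sketch below models an asynchronous channel that decouples sending and receiving process instances and correlates messages to long-running conversations. It is invented for illustration only and does not reproduce the Coloured Petri net semantics defined in the thesis.

    # Toy sketch of one integration building block: an asynchronous channel that
    # decouples sender and receiver and correlates messages to conversations.
    # Illustrative only; not the formal semantics developed in the thesis.

    class Channel:
        def __init__(self):
            self._queue = []

        def send(self, correlation_id, payload):
            # Messages carry a correlation id so that long-running conversations
            # spanning many process instances can be reasoned about at the process layer.
            self._queue.append((correlation_id, payload))

        def receive(self, correlation_id):
            for i, (cid, payload) in enumerate(self._queue):
                if cid == correlation_id:
                    del self._queue[i]
                    return payload
            return None  # nothing pending for this conversation yet

    channel = Channel()
    channel.send("order-42", {"step": "request quote"})
    print(channel.receive("order-42"))   # the receiving process instance picks it up later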

Relevance: 100.00%

Publisher:

Abstract:

Power system dynamic analysis and security assessment are becoming more significant today due to increases in size and complexity arising from restructuring, emerging new uncertainties, and the integration of renewable energy sources, distributed generation, and microgrids. Precise modelling of all contributing elements/devices, understanding their interactions in detail, and observing hidden dynamics using existing analysis tools/theorems are difficult, and even impossible. In this chapter, the power system is considered as a continuum, and the propagating electromechanical waves initiated by faults and other random events are studied to provide a new scheme for stability investigation of a large-dimensional system. For this purpose, the electrical indices (such as rotor angle and bus voltage) measured at different points across the network following a fault are used, and the behaviour of the propagated waves through the lines, nodes, and buses is analysed. The impact of weak transmission links on a progressive electromechanical wave is addressed using an energy function concept. It is also emphasised that determining the severity of a disturbance/contingency accurately, without considering the related electromechanical waves, hidden dynamics, and their properties, is not sufficiently secure. Considering these phenomena requires heavy and time-consuming calculation, which is not suitable for online stability assessment problems. However, using a continuum model for a power system reduces the burden of complex calculations.
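
Written schematically, the continuum view replaces the discrete swing equations with a damped wave equation for the spatially distributed rotor angle. The display below is only an indicative form of the commonly quoted linearised continuum analogue, with $m(x)$ the distributed inertia, $d(x)$ the damping, $b(x)$ the effective line susceptance and $P(x,t)$ the net injected power; it is not necessarily the exact formulation used in the chapter:

    m(x)\,\frac{\partial^2 \delta(x,t)}{\partial t^2} + d(x)\,\frac{\partial \delta(x,t)}{\partial t} = P(x,t) + \nabla \cdot \big( b(x)\,\nabla \delta(x,t) \big)

In this picture a fault appears as a local perturbation of $P(x,t)$ whose effect propagates through the network as a travelling electromechanical wave, and weak transmission links correspond to regions of low $b(x)$ that impede or reflect the wave.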

Relevance: 100.00%

Publisher:

Abstract:

This paper describes a formulation for the free vibration of joined conical-cylindrical shells with uniform thickness using the transfer influence coefficient method to identify structural characteristics. These characteristics are important for developing models for structural health monitoring. The method was developed based on the successive transmission of dynamic influence coefficients, which are defined as the relationships between the displacement and force vectors at arbitrary nodal circles of the system. The two edges of the shell, having arbitrary boundary conditions, are supported by several elastic springs with meridional/axial, circumferential, radial and rotational stiffness, respectively. The governing equations of vibration of a conical shell, including a cylindrical shell, are written as a coupled set of first-order differential equations using the transfer matrix of the shell. Once the transfer matrix of a single component has been determined, the matrix of the entire structure is obtained as the product of each component matrix and the joining matrix. The natural frequencies and modes of vibration are calculated numerically for joined conical-cylindrical shells. The validity of the present method is demonstrated through simple numerical examples and through comparison with the results of previous researchers.
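
The assembly step described above (the overall matrix as a product of component and joining matrices, with natural frequencies located where a boundary-condition determinant changes sign) can be sketched as follows. The 2x2 matrices are toy placeholders, not the shell field transfer matrices derived in the paper.

    import numpy as np

    # Illustrative sketch only: build an overall transfer matrix as the product of
    # component and joining matrices, then sweep frequency for sign changes of the
    # boundary-condition term. The 2x2 matrices below are toy stand-ins.

    def component_matrix(omega):
        # toy stand-in for a conical or cylindrical shell segment
        return np.array([[np.cos(omega), np.sin(omega) / max(omega, 1e-9)],
                         [-omega * np.sin(omega), np.cos(omega)]])

    def joining_matrix():
        return np.eye(2)  # rigid joint in this toy example

    def characteristic_value(omega):
        T = component_matrix(omega) @ joining_matrix() @ component_matrix(omega)
        return T[0, 1]  # element selected by the (toy) boundary conditions

    omegas = np.linspace(0.1, 10.0, 2000)
    vals = [characteristic_value(w) for w in omegas]
    roots = [0.5 * (a + b) for a, b, va, vb in zip(omegas, omegas[1:], vals, vals[1:])
             if va * vb < 0]   # sign changes bracket approximate natural frequencies
    print(roots)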

Relevance: 100.00%

Publisher:

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we explored the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors, grouped by their regulatory role, and the corresponding promoter strength. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. Although the observations were specific to σ70, such suggestive results nevertheless strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance, in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in the full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system, the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes; demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity; and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
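
As a minimal illustration of the spectrum kernel idea referred to above, the sketch below compares two sequences through the inner product of their k-mer count vectors. The sequences and the choice k = 3 are hypothetical, and the actual feature encoding and SVM configuration used in the thesis may differ.

    from collections import Counter

    # Minimal sketch of a k-mer spectrum kernel: two sequences are compared via the
    # inner product of their k-mer count vectors. Illustration only; not the exact
    # feature encoding or classifier configuration used in the thesis.

    def kmer_counts(seq, k):
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    def spectrum_kernel(seq_a, seq_b, k=3):
        counts_a, counts_b = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
        shared = set(counts_a) & set(counts_b)
        return sum(counts_a[kmer] * counts_b[kmer] for kmer in shared)

    site_1 = "TGTGATCTAGATCACA"   # hypothetical CRP-like binding site
    site_2 = "TGTGACGTAGGTCACA"
    print(spectrum_kernel(site_1, site_2, k=3))

In an SVM setting, this kernel value (or its normalised form) would populate the Gram matrix used to separate true binding sites from background sequence.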

Relevance: 100.00%

Publisher:

Abstract:

The three-component reaction-diffusion system introduced in [C. P. Schenk et al., Phys. Rev. Lett., 78 (1997), pp. 3781–3784] has become a paradigm model in pattern formation. It exhibits a rich variety of dynamics of fronts, pulses, and spots. The front and pulse interactions range in type from weak, in which the localized structures interact only through their exponentially small tails, to strong interactions, in which they annihilate or collide and in which all components are far from equilibrium in the domains between the localized structures. Intermediate to these two extremes sits the semistrong interaction regime, in which the activator component of the front is near equilibrium in the intervals between adjacent fronts but both inhibitor components are far from equilibrium there, and hence their concentration profiles drive the front evolution. In this paper, we focus on dynamically evolving N-front solutions in the semistrong regime. The primary result is the use of a renormalization group method to rigorously derive the system of N coupled ODEs that governs the positions of the fronts. The operators associated with the linearization about the N-front solutions have N small eigenvalues, and the N-front solutions may be decomposed into a component in the space spanned by the associated eigenfunctions and a component projected onto the complement of this space. This decomposition is carried out iteratively at a sequence of times. The former projections yield the ODEs for the front positions, while the latter projections are associated with remainders that we show stay small in a suitable norm during each iteration of the renormalization group method. Our results also help extend the application of the renormalization group method from the weak interaction regime for which it was initially developed to the semistrong interaction regime. The second set of results that we present is a detailed analysis of this system of ODEs, providing a classification of the possible front interactions in the cases of $N=1,2,3,4$, as well as how front solutions interact with the stationary pulse solutions studied earlier in [A. Doelman, P. van Heijster, and T. J. Kaper, J. Dynam. Differential Equations, 21 (2009), pp. 73–115; P. van Heijster, A. Doelman, and T. J. Kaper, Phys. D, 237 (2008), pp. 3335–3368]. Moreover, we present some results on the general case of N-front interactions.
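
For orientation, a schematic (unscaled) form of the Schenk et al. model couples one activator $U$ to two inhibitors $V$ and $W$; the display below is indicative only, and the coefficients and the precise semistrong scaling used in the analysis are those given in the cited papers:

    U_t = U_{xx} + U - U^3 - \varepsilon(\alpha V + \beta W + \gamma), \qquad
    \tau V_t = D_V V_{xx} + U - V, \qquad
    \theta W_t = D_W W_{xx} + U - W,

with the inhibitor diffusivities $D_V, D_W$ taken large relative to that of the activator in the semistrong regime, so that $V$ and $W$ remain far from equilibrium between fronts and drive the front motion described above.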

Relevance: 100.00%

Publisher:

Abstract:

The convergence of corporate social responsibility (CSR) and corporate governance (CG) has changed the corporate accountability mechanism. It has given rise to a socially responsible ‘corporate self-regulation’, a synthesis of governance and responsibility, in the companies of strong economies. However, unlike in strong economies, this convergence has not been visible in the companies of weak economies, where civil society groups are unorganised, regulatory agencies are either ineffective or corrupt, and the media and non-governmental organisations do not mirror the corporate conscience. Using the case of Bangladesh, this article investigates the convergence between CSR and CG in the self-regulation of companies in a less vigilant environment.

Relevance: 100.00%

Publisher:

Abstract:

Introduction: Eccentric exercise (EE) is a commonly used treatment for Achilles tendinopathy. While vibrations in the 8–12 Hz frequency range generated during eccentric muscle actions have been put forward as a potential mechanism for the beneficial effect of EE, the optimal loading parameters required to expedite recovery are currently unknown. Alfredson's original protocol employed 90 repetitions of eccentric loading; however, abbreviated protocols consisting of fewer repetitions (typically 45) have been developed, albeit with less beneficial effect. Given that 8–12 Hz vibrations generated during isometric muscle actions have previously been shown to increase with fatigue, this research evaluated the effect of exercise repetition on motor output vibrations generated during EE by investigating the frequency characteristics of the ground reaction force (GRF) recorded throughout the 90 repetitions of Alfredson's protocol. Methods: Nine healthy adult males performed six sets (15 repetitions per set) of eccentric ankle exercise. GRF was recorded at a frequency of 1000 Hz throughout the exercise protocol. The frequency power spectrum of the resultant GRF was calculated and normalised to total power. Relative spectral power was summed over 1 Hz windows within the frequency range 7.5–11.5 Hz. The effect of each additional exercise set (15 repetitions) on the relative power within each window was investigated using a general linear modelling approach. Results: The magnitude of peak relative power within the 7.5–11.5 Hz bandwidth increased across the six exercise sets, from 0.03 in exercise set one to 0.12 in exercise set six (P < 0.05). Following the 4th set of exercise, the frequency at which peak relative power occurred shifted from 9 to 10 Hz. Discussion: This study has demonstrated that successive repetitions of eccentric loading over six exercise sets result in an increase in the amplitude of motor output vibrations in the 7.5–11.5 Hz bandwidth, with an increase in the frequency of these vibrations occurring after the 4th set (60th repetition). These findings are consistent with findings from previous studies of muscle fatigue. Assuming that the magnitude and frequency of these vibrations represent important stimuli for tendon remodelling, as hypothesised within the literature, the findings of this study question the role of abbreviated EE protocols and raise the question: can EE protocols for tendinopathy be optimised by performing eccentric loading to fatigue?
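
The spectral analysis described in the Methods can be sketched as below. The synthetic signal, the simulated 9 Hz component and the exact windowing details are assumptions made for illustration, not the study's actual processing pipeline.

    import numpy as np
    from scipy.signal import periodogram

    # Sketch, under assumed details: resultant GRF sampled at 1000 Hz, power
    # spectrum normalised to total power, relative power summed over 1 Hz windows
    # spanning 7.5-11.5 Hz. The GRF trace below is synthetic.

    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    grf = 1.0 + 0.05 * np.sin(2 * np.pi * 9.0 * t) + 0.02 * np.random.randn(t.size)

    freqs, power = periodogram(grf, fs=fs)
    rel_power = power / power.sum()                    # normalise to total power

    band_power = {}
    for lo in np.arange(7.5, 11.5, 1.0):               # windows 7.5-8.5, ..., 10.5-11.5 Hz
        mask = (freqs >= lo) & (freqs < lo + 1.0)
        band_power[(lo, lo + 1.0)] = rel_power[mask].sum()
    print(band_power)

Repeating this per exercise set and comparing the window sums (e.g. in a general linear model) mirrors the kind of set-by-set analysis reported above.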

Relevance: 100.00%

Publisher:

Abstract:

The natural and built environment has been shown to affect its users in both psychological and physiological ways. But can it affect the sociological aspects of human processes and actions? The activation of the public realm can be shown to reduce socially dysfunctional behaviour through the simple occupation of the space, and through a number of other key variables in its design. In order to explore this further, we must study how public space is being used in terms of social interaction, leading to a set of design ideals through which the social activation of public space can be achieved. Observations of differing social contexts were undertaken in order to solidify key ideas and design principles for the activation of public space. Three sites were selected, each containing different amounts of vegetation and opportunity for occupation. These were then analysed through the lens of levels of social interaction. In this way it becomes evident how users interact with and within their social environments. Through the analysis of the chosen sites, it has become evident that levels of interaction between users, whether for transitory or occupational purposes, rise directly with vegetation and opportunity for occupation. With this in mind, it can be concluded that, through design, public space can allow for and create greater opportunity for social interaction.

Relevance: 100.00%

Publisher:

Abstract:

This thesis is a qualitative study aimed at better capturing the complexity of conflict in family businesses. An inductive content analysis revealed two important issues: the dynamics of intergenerational conflicts and the process of conflict escalation. The results demonstrated that conflicts are more likely to be intergenerational than intra-generational due to the role of senior members in daily business operations, generational differences, and a perception gap that exists between generations concerning each other's competencies in running the business. Furthermore, the set of factors contributing to conflict escalation relates to how family members handle the conflict, how they manage their emotions, and how well they are able to avoid the involvement of non-family employees. These findings provide a foundation for taking preventative action, implementing strategies for managing conflicts, or devising effective solutions for resolving conflicts before they become more destructive.

Relevance: 100.00%

Publisher:

Abstract:

Background and purpose: The purpose of the work presented in this paper was to determine whether patient positioning and delivery errors could be detected using electronic portal images of intensity-modulated radiotherapy (IMRT). Patients and methods: We carried out a series of controlled experiments delivering an IMRT beam to a humanoid phantom using both the dynamic and the multiple static field methods of delivery. The beams were imaged, the images calibrated to remove the IMRT fluence variation, and then compared with calibrated images of the reference beams delivered without any delivery or position errors. The first set of experiments involved translating the position of the phantom both laterally and in the superior/inferior direction by distances of 1, 2, 5 and 10 mm. The phantom was also rotated by 1° and 2°. For the second set of measurements, the phantom position was kept fixed and delivery errors were introduced into the beam. The delivery errors took the form of leaf position and segment intensity errors. Results: The method was able to detect shifts in the phantom position of 1 mm, leaf position errors of 2 mm, and dosimetry errors of 10% on a single segment of a 15-segment step-and-shoot IMRT delivery (significantly less than 1% of the total dose). Conclusions: The results of this work have shown that imaging the IMRT beam and calibrating the images to remove the intensity modulations could be a useful tool for verifying both the patient position and the delivery of the beam.
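
The comparison step described above can be illustrated schematically as follows. The calibration model (dividing out a planned fluence map), the 5% tolerance and the synthetic images are assumptions made for this sketch; they are not the calibration procedure or clinical data used in the study.

    import numpy as np

    # Toy sketch: calibrate measured and reference portal images by dividing out
    # the planned IMRT fluence, then flag pixels whose calibrated values deviate
    # from the reference by more than a tolerance. All arrays here are synthetic.

    def calibrate(portal_image, planned_fluence):
        return portal_image / np.clip(planned_fluence, 1e-6, None)

    def flag_errors(measured, reference, planned_fluence, tolerance=0.05):
        calib_meas = calibrate(measured, planned_fluence)
        calib_ref = calibrate(reference, planned_fluence)
        relative_diff = (calib_meas - calib_ref) / np.clip(calib_ref, 1e-6, None)
        return np.abs(relative_diff) > tolerance        # boolean map of suspect pixels

    planned = np.random.uniform(0.2, 1.0, (64, 64))
    reference = planned.copy()                          # error-free delivery
    measured = planned.copy()
    measured[30:34, 30:34] *= 1.10                      # simulate a 10% segment error
    print(flag_errors(measured, reference, planned).sum(), "pixels flagged")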

Relevance: 100.00%

Publisher:

Abstract:

During the last several decades, the quality of natural resources and their services has been exposed to significant degradation from increased urban populations, combined with the sprawl of settlements, the development of transportation networks and industrial activities (Dorsey, 2003; Pauleit et al., 2005). As a result of this environmental degradation, a sustainable framework for urban development is required to ensure the resilience of natural resources and ecosystems. Sustainable urban development refers to the management of cities with adequate infrastructure to support the needs of their populations for present and future generations, as well as to maintain the sustainability of their ecosystems (UNEP/IETC, 2002; Yigitcanlar, 2010). One of the important strategic approaches for planning sustainable cities is 'ecological planning'. Ecological planning is a multi-dimensional concept that aims to preserve biodiversity richness and ecosystem productivity through the sustainable management of natural resources (Barnes et al., 2005). As stated by Baldwin (1985, p. 4), ecological planning is the initiation and operation of activities to direct and control the acquisition, transformation, disruption and disposal of resources in a manner capable of sustaining human activities with a minimum disruption of ecosystem processes. Therefore, ecological planning is a powerful method for creating sustainable urban ecosystems. In order to explore the city as an ecosystem and investigate the interaction between the urban ecosystem and human activities, a holistic urban ecosystem sustainability assessment approach is required. Urban ecosystem sustainability assessment serves as a tool that helps policy- and decision-makers to improve their actions towards sustainable urban development. There are several methods used in urban ecosystem sustainability assessment, among which sustainability indicators and composite indices are the most commonly used tools for assessing progress towards sustainable land use and urban management. Currently, a variety of composite indices are available to measure sustainability at the local, national and international levels. However, the main conclusion drawn from the literature review is that they are too broad to be applied in assessing local and micro-level sustainability, and that no benchmark value exists for most of the indicators due to limited data availability and non-comparable data across countries. Mayer (2008, p. 280) underlines this by stating that "as different as the indices may seem, many of them incorporate the same underlying data because of the small number of available sustainability datasets". Mori and Christodoulou (2011) also argue that this relative evaluation and comparison brings along biased assessments, as data only exist for some entities, which also means excluding many nations from evaluation and comparison. Thus, there is a need for an accurate and comprehensive micro-level urban ecosystem sustainability assessment method. In order to develop such a model, it is practical to adopt an approach that utilises indicators for collecting data, designates certain threshold values or ranges, performs a comparative sustainability assessment via indices at the micro-level, and aggregates these assessment findings to the local level.
Hereby, through this approach and model, it is possible to produce sufficient and reliable data to enable comparison at the local level, and to provide useful results that inform the local planning, conservation and development decision-making process in order to secure sustainable ecosystems and urban futures. To advance research in this area, this study investigated the environmental impacts of an existing urban context by using a composite index, with the aim of identifying the interaction between urban ecosystems and human activities in the context of environmental sustainability. In this respect, this study developed a new comprehensive urban ecosystem sustainability assessment tool entitled the 'Micro-level Urban-ecosystem Sustainability IndeX' (MUSIX). The MUSIX model is an indicator-based indexing model that investigates the factors affecting urban sustainability in a local context. The model outputs provide local and micro-level sustainability reporting guidance to support policy-making concerning environmental issues. A multi-method research approach, based on both quantitative and qualitative analysis, was employed in the construction of the MUSIX model. First, qualitative research was conducted through an interpretive and critical literature review to develop the theoretical framework and select the indicators. Afterwards, quantitative research was conducted through statistical and spatial analyses for data collection, processing and model application. The MUSIX model was tested in four pilot study sites selected from the Gold Coast City, Queensland, Australia. The model results reported the sustainability performance of the current urban settings with reference to six main issues of urban development: (1) hydrology, (2) ecology, (3) pollution, (4) location, (5) design, and (6) efficiency. For each category, a set of core indicators was assigned, intended to: (1) benchmark the current situation, strengths and weaknesses; (2) evaluate the efficiency of implemented plans; and (3) measure the progress towards sustainable development. While the indicator set of the model provided specific information about the environmental impacts in the area at the parcel scale, the composite index score provided general information about the sustainability of the area at the neighbourhood scale. Finally, in light of the model findings, integrated ecological planning strategies were developed to guide the preparation and assessment of development and local area plans in conjunction with the Gold Coast Planning Scheme, which establishes regulatory provisions to achieve ecological sustainability through the formulation of place codes, development codes, constraint codes and other assessment criteria that provide guidance for best-practice development solutions.
These strategies can be summarised as follows:
• Establishing hydrological conservation through sustainable stormwater management, in order to preserve the Earth's water cycle and aquatic ecosystems;
• Providing ecological conservation through sustainable ecosystem management, in order to protect biological diversity and maintain the integrity of natural ecosystems;
• Improving environmental quality through pollution prevention regulations and policies, in order to promote high-quality water resources, clean air and enhanced ecosystem health;
• Creating sustainable mobility and accessibility through better local services and walkable neighbourhoods, in order to promote safe environments and healthy communities;
• Designing sustainable urban environments through climate-responsive design, in order to increase the efficient use of solar energy and provide thermal comfort; and
• Using renewable resources through creating efficient communities, in order to provide long-term management of natural resources for the sustainability of future generations.
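
The parcel-to-neighbourhood aggregation logic described above can be sketched in simplified form as below. The indicator names, benchmark ranges and equal weighting are hypothetical placeholders, not the actual MUSIX indicators or thresholds.

    # Schematic sketch of indicator-to-index aggregation: each indicator is scored
    # against an assumed benchmark range at the parcel scale, and parcel scores are
    # averaged into a neighbourhood-level composite index. All values are hypothetical.

    BENCHMARKS = {          # (worst, best) values assumed for illustration only
        "impervious_surface_pct": (100.0, 0.0),
        "tree_canopy_pct": (0.0, 60.0),
        "stormwater_reuse_pct": (0.0, 100.0),
    }

    def score_indicator(name, value):
        worst, best = BENCHMARKS[name]
        score = (value - worst) / (best - worst)      # 0 = worst, 1 = best
        return min(max(score, 0.0), 1.0)

    def parcel_index(indicator_values):
        return sum(score_indicator(n, v) for n, v in indicator_values.items()) / len(indicator_values)

    def neighbourhood_index(parcels):
        return sum(parcel_index(p) for p in parcels) / len(parcels)

    parcels = [
        {"impervious_surface_pct": 70, "tree_canopy_pct": 15, "stormwater_reuse_pct": 10},
        {"impervious_surface_pct": 40, "tree_canopy_pct": 35, "stormwater_reuse_pct": 30},
    ]
    print(round(neighbourhood_index(parcels), 3))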