962 results for Multiple abstraction levels


Relevance: 90.00%

Abstract:

Abstraction plays an essential role in the way the agents plan their behaviours, especially to reduce the computational complexity of planning in large domains. However, the effects of abstraction in the inverse process – plan recognition – are unclear. In this paper, we present a method for recognising the agent’s behaviour in noisy and uncertain domains, and across multiple levels of abstraction. We use the concept of abstract Markov policies in abstract probabilistic planning as the model of the agent’s behaviours and employ probabilistic inference in Dynamic Bayesian Networks (DBN) to infer the correct policy from a sequence of observations. When the states are fully observable, we show that for a broad and often-used class of abstract policies, the complexity of policy recognition scales well with the number of abstraction levels in the policy hierarchy. For the partially observable case, we derive an efficient hybrid inference scheme on the corresponding DBN to overcome the exponential complexity.
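The filtering idea behind policy recognition can be illustrated with a toy Bayesian update over candidate policies. The two policies, their observation likelihoods, and the flat structure below are invented for illustration and are far simpler than the abstract Markov policies and hierarchical DBNs of the paper:

```python
# Minimal sketch: infer which (hypothetical) policy an agent is following
# from a sequence of observed states, via repeated Bayesian updates.
# The policy names and emission probabilities are illustrative assumptions.

def recognise_policy(observations, policies, prior):
    """Update a posterior over candidate policies after each observation."""
    posterior = dict(prior)
    for obs in observations:
        # Multiply in the likelihood of the observation under each policy ...
        for name, emission in policies.items():
            posterior[name] *= emission.get(obs, 1e-9)
        # ... then renormalise so the posterior sums to one.
        total = sum(posterior.values())
        posterior = {name: p / total for name, p in posterior.items()}
    return posterior

# Two hypothetical policies with different observation likelihoods.
policies = {
    "go_to_kitchen": {"hall": 0.5, "kitchen": 0.4, "office": 0.1},
    "go_to_office":  {"hall": 0.5, "kitchen": 0.1, "office": 0.4},
}
prior = {"go_to_kitchen": 0.5, "go_to_office": 0.5}
posterior = recognise_policy(["hall", "kitchen", "kitchen"], policies, prior)
# A kitchen-heavy observation sequence should favour "go_to_kitchen".
```

The real method extends this idea across abstraction levels, where the posterior ranges over abstract policies at every level of the hierarchy rather than a flat set.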

Relevance: 90.00%

Abstract:

Aquatic ecosystems are confronted with multiple stress factors. Current approaches to assess the risk of anthropogenic stressors to aquatic ecosystems are developed for single stressors and determine stressor effects primarily as a function of stressor properties. The cumulative impact of several stressors, however, may differ markedly from the impact of the single stressors and can result in nonlinear effects and ecological surprises. To meet the challenge of diagnosing and predicting multiple stressor impacts, assessment strategies should focus on properties of the biological receptors rather than on stressor properties. This change of paradigm is required because (i) multiple stressors affect multiple biological targets at multiple organizational levels, (ii) biological receptors differ in their sensitivities, vulnerabilities, and response dynamics to the individual stressors, and (iii) biological receptors function as networks, so that actions of stressors at disparate sites within the network can lead, via indirect or cascading effects, to unexpected outcomes.

Relevance: 90.00%

Abstract:

In the near future, wireless sensor networks (WSNs) will experience broad, large-scale deployment (millions of nodes nationwide), with multiple information sources per node and very specific signal processing requirements. In parallel, the broad deployment of WSNs facilitates the definition and execution of ambitious studies with large input data sets and high computational complexity. These computation resources, very often heterogeneous and driven on demand, can only be provided by high-performance Data Centers (DCs). The high economic and environmental impact of energy consumption in DCs requires aggressive energy optimization policies; the need for such policies has been identified, but effective policies have not yet been proposed. In this context, this paper presents the following ongoing research lines and results obtained. In the field of WSNs: energy optimization of the processing nodes at different abstraction levels, including reconfigurable application-specific architectures, efficient customization of the memory hierarchy, energy-aware management of the wireless interface, and design automation for signal processing applications. In the field of DCs: energy-optimal workload assignment policies in heterogeneous DCs, resource management policies with energy consciousness, and efficient cooling mechanisms, all cooperating to minimize the electricity bill of the DCs that process the data provided by the WSNs.
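The workload assignment idea for heterogeneous DCs can be sketched as a greedy policy that places each job on the server with the cheapest marginal energy cost. The server parameters, job names, and the greedy rule itself are illustrative assumptions, not the policies developed in the paper:

```python
# Illustrative energy-aware workload assignment for a heterogeneous data
# centre: each job goes to the feasible server whose incremental energy
# cost is lowest. All parameters below are invented for the example.

def assign_jobs(jobs, servers):
    """Greedily place each (job, demand) pair on the cheapest feasible server."""
    placement = {}
    load = {name: 0.0 for name in servers}
    for job, demand in jobs:
        # Marginal energy = demand * energy-per-unit-work of the server,
        # restricted to servers with spare capacity for this job.
        candidates = [
            (params["joules_per_unit"] * demand, name)
            for name, params in servers.items()
            if load[name] + demand <= params["capacity"]
        ]
        cost, best = min(candidates)
        placement[job] = best
        load[best] += demand
    return placement

servers = {
    "low_power_arm": {"joules_per_unit": 1.0, "capacity": 10},
    "fast_xeon":     {"joules_per_unit": 3.0, "capacity": 40},
}
jobs = [("sensor_batch_1", 6), ("sensor_batch_2", 6), ("sensor_batch_3", 6)]
placement = assign_jobs(jobs, servers)
```

The first batch fits on the efficient low-power server; once its capacity is exhausted, subsequent batches overflow to the faster but more energy-hungry machine.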


Relevance: 90.00%

Abstract:

In the last decade, multi-sensor data fusion has become a broadly demanded discipline for achieving advanced solutions applicable to many real-world situations, either civil or military. In Defence, accurate detection of all target objects is fundamental to maintaining situational awareness, locating threats on the battlefield, and identifying and strategically protecting own forces. Civil applications, such as traffic monitoring, have similar requirements in terms of object detection and reliable identification of incidents in order to ensure the safety of road users. With an appropriate data fusion technique, such systems can automatically exploit all relevant information from multiple sources to meet, for instance, mission needs or to assess daily supervision operations. This paper focuses on the application of data fusion to active vehicle monitoring in a particular area of high-density traffic, and on how it is redirecting the research activities being carried out in the computer vision, signal processing and machine learning fields towards improving the effectiveness of detection and tracking in ground surveillance scenarios in general. Specifically, our system proposes fusion at the feature level of data extracted from a video camera and a laser scanner. In addition, this paper presents a stochastic tracking approach that introduces particle filters into the model to deal with uncertainty due to occlusions and to improve the previous detection output. This computer vision tracker has been shown to contribute to detecting objects even under poor visual information. Finally, in the same way that humans analyse both temporal and spatial relations among items in a scene to assign them a meaning, once the target objects have been correctly detected and tracked, it is desirable that machines provide a trustworthy description of what is happening in the scene under surveillance.
Accomplishing such an ambitious task requires a machine learning-based hierarchical architecture able to extract and analyse behaviours at different abstraction levels. A real experimental testbed has been implemented for the evaluation of the proposed modular system: a closed circuit where real traffic situations can be simulated. First results have shown the strength of the proposed system.
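The predict-update-resample loop of the kind of particle filter the tracker builds on can be shown in one dimension. The motion model, noise levels, and measurement sequence below are invented for the sketch and much simpler than the occlusion-aware tracker described above:

```python
# Minimal one-dimensional particle filter: particles carry position
# hypotheses, are weighted by a Gaussian measurement likelihood, and are
# resampled in proportion to their weights. All numbers are illustrative.
import math
import random

def particle_filter_step(particles, measurement, motion=1.0, noise=0.5):
    # Predict: move every particle by the motion model plus process noise.
    moved = [p + motion + random.gauss(0.0, noise) for p in particles]
    # Update: weight particles by how well they explain the measurement.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * noise ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [0.0] * 200
for z in [1.0, 2.0, 3.0, 4.0]:  # measurements of a target moving ~1 unit/step
    particles = particle_filter_step(particles, z)
estimate = sum(particles) / len(particles)  # posterior mean position
```

During an occlusion the update step would simply be skipped, letting the motion model carry the particle cloud forward until measurements return, which is the behaviour that lets such trackers survive poor visual information.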

Relevance: 90.00%

Abstract:

Classification is the most basic method for organizing resources in the physical space, cyber space, socio space and mental space. Creating a unified model that can effectively manage resources in different spaces is a challenge. The Resource Space Model (RSM) manages versatile resources within a multi-dimensional classification space, and supports generalization and specialization on multi-dimensional classifications. This paper introduces the basic concepts of RSM and proposes the Probabilistic Resource Space Model, P-RSM, to deal with uncertainty in managing various resources in different spaces of the cyber-physical society. P-RSM's normal forms, operations and integrity constraints are developed to support effective management of the resource space. Characteristics of the P-RSM are analyzed through experiments. This model also enables various services to be described, discovered and composed across multiple dimensions and abstraction levels, with normal form and integrity guarantees. Some extensions and applications of the P-RSM are introduced.
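The core idea of a multi-dimensional classification space can be illustrated with a toy lookup: resources are indexed by coordinates on independent dimensions, and a query fixes some dimensions while leaving others free. The dimensions and resource names below are invented; RSM and P-RSM are of course far richer (normal forms, operations, probabilistic membership):

```python
# Toy multi-dimensional resource space: each resource has a coordinate on
# every classification dimension; a query is a partial coordinate.
# Dimension and resource names are illustrative assumptions.

def query(space, **fixed):
    """Return resources whose coordinates match every fixed dimension."""
    return sorted(
        name for name, coords in space.items()
        if all(coords.get(dim) == val for dim, val in fixed.items())
    )

space = {
    "sensor_manual.pdf": {"topic": "hardware",  "medium": "text",  "level": "intro"},
    "dbn_lecture.mp4":   {"topic": "inference", "medium": "video", "level": "advanced"},
    "rsm_tutorial.html": {"topic": "modelling", "medium": "text",  "level": "intro"},
}
# Fix two dimensions, leave "topic" free: a slice through the space.
intro_texts = query(space, medium="text", level="intro")
```

In the probabilistic extension, a resource's membership in a class would be a probability rather than the exact coordinate match used here.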

Relevance: 90.00%

Abstract:

In an overcapacity world, where customers can choose from many similar products to satisfy their needs, enterprises are looking for new approaches and tools that can help them not only to maintain, but also to increase, their competitive edge. Innovation, flexibility, quality, and service excellence are required to, at the very least, survive the ongoing transition that industry is experiencing from mass production to mass customization. To help these enterprises, this research develops a Supply Chain Capability Maturity Model named S(CM)2, intended to model, analyze, and improve the supply chain management operations of an enterprise. The model provides a clear roadmap for enterprise improvement, covering multiple views and abstraction levels of the supply chain, and provides tools to aid the firm in making improvements. The principal research tool applied is the Delphi method, which systematically gathered the knowledge and experience of eighty-eight experts in Mexico. The model is validated using a case study and interviews with experts in supply chain management. The resulting contribution is a holistic model of the supply chain that integrates multiple perspectives and provides a systematic procedure for improving a company's supply chain operations.


Relevance: 80.00%

Abstract:

The Queensland University of Technology (QUT) allows the presentation of theses for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of ten published/submitted papers and book chapters, of which nine have been published and one is under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of investigating multilevel topologies for high-quality and high-power applications, with specific emphasis on renewable energy systems. The rapid evolution of renewable energy within the last several years has resulted in the design of efficient power converters suitable for medium- and high-power applications such as wind turbine and photovoltaic (PV) systems. Today, the industrial trend is moving away from heavy and bulky passive components towards power converter systems that use more and more semiconductor elements controlled by powerful processor systems. However, it is hard to connect traditional converters to high- and medium-voltage grids, as a single power switch cannot withstand high voltages. For these reasons, a new family of multilevel inverters has appeared as a solution for working with higher voltage levels. Besides this important feature, multilevel converters have the capability to generate stepped waveforms. Consequently, in comparison with conventional two-level inverters, they present lower switching losses, lower voltage stress across loads, lower electromagnetic interference (EMI) and higher quality output waveforms. These properties enable the connection of renewable energy sources directly to the grid without using expensive, bulky, heavy line transformers. Additionally, they minimize the size of the passive filter and increase the durability of electrical devices.
However, multilevel converters have only been utilised in very particular applications, mainly due to the structural limitations, high cost and complexity of the multilevel converter system and its control. New developments in the fields of power semiconductor switches and processors will favour multilevel converters in many other fields of application. The main application for the multilevel converter presented in this work is the front-end power converter in renewable energy systems. Diode-clamped and cascade converters are the most common types of multilevel converters widely used in different renewable energy system applications. However, some drawbacks – such as capacitor voltage imbalance, number of components, and complexity of the control system – still exist, and these are investigated in the framework of this thesis. Various simulations are undertaken using software simulation tools to study different cases. The feasibility of the developments is underlined with a series of experimental results. This thesis is divided into two main sections. The first section focuses on solving the capacitor voltage imbalance for a wide range of applications, and on decreasing the complexity of the control strategy on the inverter side. The idea of using sharing switches at the output structure of the DC-DC front-end converters is proposed to balance the series DC link capacitors. A new family of multi-output DC-DC converters is proposed for renewable energy systems connected to the DC link voltage of diode-clamped converters. The main objective of this type of converter is the sharing of the total output voltage into several series voltage levels using sharing switches. This solves the problems associated with capacitor voltage imbalance in diode-clamped multilevel converters.
These converters adjust the variable and unregulated DC voltage generated by renewable energy systems (such as PV) to the desirable series multiple voltage levels at the inverter DC side. A multi-output boost (MOB) converter, with one inductor and series output voltage, is presented. This converter is suitable for renewable energy systems based on diode-clamped converters because it boosts the low output voltage and provides the series capacitor at the output side. A simple control strategy using cross voltage control with internal current loop is presented to obtain the desired voltage levels at the output voltage. The proposed topology and control strategy are validated by simulation and hardware results. Using the idea of voltage sharing switches, the circuit structure of different topologies of multi-output DC-DC converters – or multi-output voltage sharing (MOVS) converters – have been proposed. In order to verify the feasibility of this topology and its application, steady state and dynamic analyses have been carried out. Simulation and experiments using the proposed control strategy have verified the mathematical analysis. The second part of this thesis addresses the second problem of multilevel converters: the need to improve their quality with minimum cost and complexity. This is related to utilising asymmetrical multilevel topologies instead of conventional multilevel converters; this can increase the quality of output waveforms with a minimum number of components. It also allows for a reduction in the cost and complexity of systems while maintaining the same output quality, or for an increase in the quality while maintaining the same cost and complexity. Therefore, the asymmetrical configuration for two common types of multilevel converters – diode-clamped and cascade converters – is investigated. 
Also, as well as addressing the maximisation of the output voltage resolution, some technical issues – such as adjacent switching vectors – should be taken into account in asymmetrical multilevel configurations to keep the total harmonic distortion (THD) and switching losses to a minimum. Thus, the asymmetrical diode-clamped converter is proposed. An appropriate asymmetrical DC link arrangement is presented for four-level diode-clamped converters by keeping adjacent switching vectors. In this way, five-level inverter performance is achieved with the same level of complexity as the four-level inverter. Dealing with the capacitor voltage imbalance problem in asymmetrical diode-clamped converters has inspired the proposal of two different DC-DC topologies with a suitable control strategy. A Triple-Output Boost (TOB) converter and a Boost 3-Output Voltage Sharing (Boost-3OVS) converter connected to the four-level diode-clamped converter are proposed to arrange the proposed asymmetrical DC link for high modulation indices and unity power factor. Cascade converters have shown their abilities and strengths in medium- and high-power applications. Using asymmetrical H-bridge inverters, more voltage levels can be generated in the output voltage with the same number of components as symmetrical converters. The concept of cascading multilevel H-bridge cells is used to propose a fifteen-level cascade inverter using a four-level symmetrical H-bridge diode-clamped converter, cascaded with classical two-level H-bridge inverters. A DC voltage ratio of the cells is presented to obtain the maximum number of voltage levels in the output voltage, with adjacent switching vectors between all possible voltage levels; this can minimize the switching losses. This structure can save five isolated DC sources and twelve switches in comparison to conventional cascade converters with series two-level H-bridge inverters.
To increase the quality of the presented hybrid topology with a minimum number of components, a new cascade inverter is verified by cascading an asymmetrical four-level H-bridge diode-clamped inverter. An inverter with nineteen-level performance was achieved. This synthesizes more voltage levels with lower voltage and current THD than a symmetrical diode-clamped inverter with the same configuration and an equivalent number of power components. Two different predictive current control methods for switching state selection are proposed to minimise either the losses or the voltage THD of hybrid converters. High voltage spikes at switching time in the experimental results, and investigation of the diode-clamped inverter structure, raised another problem associated with high-level, high-voltage multilevel converters. Power switching components with fast switching, combined with hard-switched converters, produce high di/dt during turn-off time. Thus, the stray inductance of interconnections becomes an important issue and raises overvoltage and EMI issues correlated with the number of components. A planar busbar is a good candidate to reduce interconnection inductance in high-power inverters compared with cables. The effect of different transient current loops on the busbar physical structure of high-voltage, high-level diode-clamped converters is highlighted. Design considerations for a proper planar busbar are also presented to optimise the overall design of diode-clamped converters.
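The level-count advantage of asymmetrical cascade topologies described in this thesis can be illustrated by enumerating the attainable output voltages of series-connected H-bridge cells: each cell contributes -V, 0, or +V, so the output is the sum over cells. The voltage ratios below are illustrative, not the thesis's designs:

```python
# Sketch: count the distinct output levels of cascaded H-bridge cells.
# Each cell switches among -V, 0, +V; the converter output is the sum.
# Cell voltage ratios are invented for the example.
from itertools import product

def output_levels(cell_voltages):
    """Distinct output voltages attainable by series H-bridge cells."""
    return sorted({
        sum(state * v for state, v in zip(states, cell_voltages))
        for states in product((-1, 0, 1), repeat=len(cell_voltages))
    })

symmetric  = output_levels([1, 1])  # two equal cells
asymmetric = output_levels([1, 3])  # an asymmetrical (e.g. 1:3) ratio
```

With equal cells some switch combinations produce the same voltage, so two cells give only five levels; the 1:3 ratio makes every sum distinct and yields nine levels from the same hardware, which is exactly the quality-per-component argument made above.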

Relevance: 80.00%

Abstract:

This thesis employs the theoretical fusion of disciplinary knowledge, interlacing an analysis from both functional and interpretive frameworks and applies these paradigms to three concepts—organisational identity, the balanced scorecard performance measurement system, and control. As an applied thesis, this study highlights how particular public sector organisations are using a range of multi-disciplinary forms of knowledge constructed for their needs to achieve practical outcomes. Practical evidence of this study is not bound by a single disciplinary field or the concerns raised by academics about the rigorous application of academic knowledge. The study’s value lies in its ability to explore how current communication and accounting knowledge is being used for practical purposes in organisational life. The main focus of this thesis is on identities in an organisational communication context. In exploring the theoretical and practical challenges, the research questions for this thesis were formulated as: 1. Is it possible to effectively control identities in organisations by the use of an integrated performance measurement system—the balanced scorecard—and if so, how? 2. What is the relationship between identities and an integrated performance measurement system—the balanced scorecard—in the identity construction process? Identities in the organisational context have been extensively discussed in graphic design, corporate communication and marketing, strategic management, organisational behaviour, and social psychology literatures. Corporate identity is the self-presentation of the personality of an organisation (Van Riel, 1995; Van Riel & Balmer, 1997), and organisational identity is the statement of central characteristics described by members (Albert & Whetten, 2003). In this study, identity management is positioned as a strategically complex task, embracing not only logo and name, but also multiple dimensions, levels and facets of organisational life. 
Responding to the collaborative efforts of researchers and practitioners in identity conceptualisation and methodological approaches, this dissertation argues that analysis can be achieved through the use of an integrated framework of identity products, patternings and processes (Cornelissen, Haslam, & Balmer, 2007), transforming conceptualisations of corporate identity, organisational identity and identification studies. Likewise, the performance measurement literature from the accounting field now emphasises the importance of ‘soft’ non-financial measures in gauging performance—potentially allowing the monitoring and regulation of ‘collective’ identities (Cornelissen et al., 2007). The balanced scorecard (BSC) (Kaplan & Norton, 1996a), as the selected integrated performance measurement system, quantifies organisational performance under the four perspectives of finance, customer, internal process, and learning and growth. Broadening the traditional performance measurement boundary, the BSC transforms how organisations perceived themselves (Vaivio, 2007). The rhetorical and communicative value of the BSC has also been emphasised in organisational self-understanding (Malina, Nørreklit, & Selto, 2007; Malmi, 2001; Norreklit, 2000, 2003). Thus, this study establishes a theoretical connection between the controlling effects of the BSC and organisational identity construction. Common to both literatures, the aspects of control became the focus of this dissertation, as ‘the exercise or act of achieving a goal’ (Tompkins & Cheney, 1985, p. 180). This study explores not only traditional technical and bureaucratic control (Edwards, 1981), but also concertive control (Tompkins & Cheney, 1985), shifting the locus of control to employees who make their own decisions towards desired organisational premises (Simon, 1976). 
The controlling effects on collective identities are explored through the lens of the rhetorical frames mobilised through the power of organisational enthymemes (Tompkins & Cheney, 1985) and identification processes (Ashforth, Harrison, & Corley, 2008). In operationalising the concept of control, two guiding questions were developed to support the research questions: 1.1 How does the use of the balanced scorecard monitor identities in public sector organisations? 1.2 How does the use of the balanced scorecard regulate identities in public sector organisations? This study adopts qualitative multiple case studies using ethnographic techniques. Data were gathered from interviews of 41 managers, organisational documents, and participant observation from 2003 to 2008, to inform an understanding of organisational practices and members’ perceptions in the five cases of two public sector organisations in Australia. Drawing on the functional and interpretive paradigms, the effective design and use of the systems, as well as the understanding of shared meanings of identities and identifications are simultaneously recognised. The analytical structure guided by the ‘bracketing’ (Lewis & Grimes, 1999) and ‘interplay’ strategies (Schultz & Hatch, 1996) preserved, connected and contrasted the unique findings from the multi-paradigms. The ‘temporal bracketing’ strategy (Langley, 1999) from the process view supports the comparative exploration of the analysis over the periods under study. The findings suggest that the effective use of the BSC can monitor and regulate identity products, patternings and processes. In monitoring identities, the flexible BSC framework allowed the case study organisations to monitor various aspects of finance, customer, improvement and organisational capability that included identity dimensions. Such inclusion legitimises identity management as organisational performance. 
In regulating identities, the use of the BSC created a mechanism to form collective identities by articulating various perspectives and causal linkages, and through the cascading and alignment of multiple scorecards. The BSC—directly reflecting organisationally valued premises and legitimised symbols—acted as an identity product of communication, visual symbols and behavioural guidance. The selective promotion of the BSC measures filtered organisational focus to shape unique identity multiplicity and characteristics within the cases. Further, the use of the BSC facilitated the assimilation of multiple identities by controlling the direction and strength of identifications, engaging different groups of members. More specifically, the tight authority of the BSC framework and systems are explained both by technical and bureaucratic controls, while subtle communication of organisational premises and information filtering is achieved through concertive control. This study confirms that these macro top-down controls mediated the sensebreaking and sensegiving process of organisational identification, supporting research by Ashforth, Harrison and Corley (2008). This study pays attention to members’ power of self-regulation, filling minor premises of the derived logic of their organisation through the playing out of organisational enthymemes (Tompkins & Cheney, 1985). Members are then encouraged to make their own decisions towards the organisational premises embedded in the BSC, through the micro bottom-up identification processes including: enacting organisationally valued identities; sensemaking; and the construction of identity narratives aligned with those organisationally valued premises. Within the process, the self-referential effect of communication encouraged members to believe the organisational messages embedded in the BSC in transforming collective and individual identities. 
Therefore, communication through the use of the BSC continued the self-producing of normative performance mechanisms, established meanings of identities, and enabled members’ self-regulation in identity construction. Further, this research establishes the relationship between identity and the use of the BSC in terms of identity multiplicity and attributes. The BSC framework constrained and enabled case study organisations and members to monitor and regulate identity multiplicity across a number of dimensions, levels and facets. The use of the BSC constantly heightened the identity attributes of distinctiveness, relativity, visibility, fluidity and manageability in identity construction over time. Overall, this research explains the reciprocal controlling relationships of multiple structures in organisations to achieve a goal. It bridges the gap among corporate and organisational identity theories by adopting Cornelissen, Haslam and Balmer’s (2007) integrated identity framework, and reduces the gap in understanding between identity and performance measurement studies. Parallel review of the process of monitoring and regulating identities from both literatures synthesised the theoretical strengths of both to conceptualise and operationalise identities. This study extends the discussion on positioning identity, culture, commitment, and image and reputation measures in integrated performance measurement systems as organisational capital. Further, this study applies understanding of the multiple forms of control (Edwards, 1979; Tompkins & Cheney, 1985), emphasising the power of organisational members in identification processes, using the notion of rhetorical organisational enthymemes. This highlights the value of the collaborative theoretical power of identity, communication and performance measurement frameworks. 
These case studies provide practical insights about the public sector where existing bureaucracy and desired organisational identity directions are competing within a large organisational setting. Further research on personal identity and simple control in organisations that fully cascade the BSC down to individual members would provide enriched data. The extended application of the conceptual framework to other public and private sector organisations with a longitudinal view will also contribute to further theory building.

Relevance: 80.00%

Abstract:

We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which 'classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are mapped to well-known security design principles such as 'assigning the least privilege' and 'reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
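The hierarchical aggregation idea, low-level class metrics rolled up into readability/writability scores and then into one index, can be sketched as follows. The metric names, weights, and averaging scheme are illustrative assumptions, not the model's actual definitions:

```python
# Hedged sketch of hierarchical metric aggregation: per-class metrics
# (all normalised to 0 = worst, 1 = best) combine into a class score,
# and class scores average into a single program security index.
# Metric names and the equal-weight scheme are invented for illustration.

def security_index(class_metrics):
    """Aggregate per-class security metrics into one index in [0, 1]."""
    def class_score(m):
        # Lower exposure of classified reads/writes pushes the score up,
        # as does stronger encapsulation, in this toy scheme.
        readability = 1.0 - m["classified_reads_exposed"]
        writability = 1.0 - m["classified_writes_exposed"]
        return (m["encapsulation"] + readability + writability) / 3.0
    scores = [class_score(m) for m in class_metrics.values()]
    return sum(scores) / len(scores)

# Hypothetical metrics for two classes of one program version.
program_v1 = {
    "Account": {"encapsulation": 0.9, "classified_reads_exposed": 0.2,
                "classified_writes_exposed": 0.1},
    "Logger":  {"encapsulation": 0.5, "classified_reads_exposed": 0.6,
                "classified_writes_exposed": 0.4},
}
index = security_index(program_v1)
```

The point of collapsing to a single number, as in the model, is comparability: two versions of the same program can be ranked even when their individual metrics move in different directions.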

Relevância:

80.00%

Publicador:

Resumo:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers that generally represent the above 60 year old age group. Thus, despite the fact that half of the seriously injured population comes from the 30 year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validations using long bones and appropriate reference standards are required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone–soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are more prone to motion artefacts due to random movements of the subject’s limbs.
One of the artefacts observed is the step artefact, believed to result from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple segmentation methods for MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmenting them using the multilevel threshold method. A surface geometric comparison was conducted between CT based, MRI based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using identical protocols were compared by means of SNR and contrast-to-noise ratio (CNR) of muscle, bone marrow and bone.
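The multilevel thresholding step can be sketched as follows. The region boundaries and threshold values here are illustrative placeholders; in the study the per-region thresholds came from a dedicated threshold selection method:

```python
import numpy as np

def multilevel_segment(volume, boundaries, thresholds):
    """Segment a CT/MRI long-bone volume region by region.

    volume      : 3D intensity array (slices, rows, cols), slices along the bone axis
    boundaries  : slice indices separating e.g. proximal / diaphyseal / distal regions
    thresholds  : one intensity threshold per region (len(boundaries) + 1 values)
    """
    mask = np.zeros(volume.shape, dtype=bool)
    # split the slice indices into contiguous anatomical regions
    regions = np.split(np.arange(volume.shape[0]), boundaries)
    for region, thr in zip(regions, thresholds):
        # apply the region-specific threshold to those slices only
        mask[region] = volume[region] >= thr
    return mask
```

A single-level variant is simply the special case of one region and one threshold; the multilevel form exists because cortical bone intensity differs between the diaphysis and the articular ends.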
In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner. The step was corrected using an aligning method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multi-threshold approach in combination with the threshold selection method can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI based models exhibited 0.23 mm average deviation in comparison to the 0.18 mm average deviation of CT based models; the differences were not statistically significant. 3T MRI improved the contrast at the bone–muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard CT imaging.
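The ICP-based alignment used to correct the step artefact can be sketched as a minimal rigid-registration loop: brute-force nearest-neighbour correspondences followed by a Kabsch least-squares fit. This is a generic illustration of the algorithm, not the thesis implementation:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping point set A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def icp(source, target, iterations=20):
    """Iteratively align a displaced surface segment onto the reference surface."""
    src = source.copy()
    for _ in range(iterations):
        # brute-force nearest-neighbour correspondences (fine for small clouds)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
        R, t = best_rigid_transform(src, target[d2.argmin(axis=1)])
        src = src @ R.T + t
    return src
```

In the step-artefact setting, `source` would be the vertices of the displaced portion of the bone surface and `target` the adjoining, correctly positioned portion.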

Relevância:

80.00%

Publicador:

Resumo:

There is a wide variety of drivers for business process modelling initiatives, ranging from business evolution and process optimisation, through compliance checking and process certification, to process enactment. That, in turn, results in models that differ in content because they serve different purposes. In particular, processes are modelled at different abstraction levels and assume different perspectives. Vertical alignment of process models aims at handling these deviations. While the advantages of such an alignment for inter-model analysis and change propagation are beyond question, a number of challenges still have to be addressed. In this paper, we discuss three main challenges for vertical alignment in detail. Against this background, the potential application of techniques from the field of process integration is critically assessed. Based thereon, we identify specific research questions that guide the design of a framework for model alignment.

Relevância:

80.00%

Publicador:

Resumo:

Australia’s governance arrangements for NRM have evolved considerably over the last thirty years, and the impact of changes in governance on NRM planning and delivery requires assessment. We undertake a multi-method program evaluation using adaptive governance principles as an analytical frame and apply this to Queensland to assess the impacts of governance change on NRM planning and governance outcomes. Data informing our analysis include: 1) a systematic review of sixteen audits/evaluations of Australian NRM over a fifteen-year period; 2) a review of Queensland’s first generation NRM Plans; and 3) outputs from a Queensland workshop on NRM planning. NRM has progressed from a bottom-up grassroots movement into a collaborative regional NRM model that has been centralised by the Australian Government. We found that while some adaptive governance challenges have been addressed, others remain unresolved. Results show that collaboration and elements of multi-level governance under the regional model were positive moves, but also that NRM arrangements contained structural deficiencies across multiple governance levels in relation to public involvement in decision-making and knowledge production for problem responsiveness. These problems for adaptive governance have been exacerbated since 2008. We conclude that the adaptive governance framework for NRM needs urgent attention so that important environmental management problems can be addressed.

Relevância:

80.00%

Publicador:

Resumo:

This is a narrative about the way in which a category of crime-to-be-combated is constructed through the discipline of criminology and the agents of discipline in criminal justice. The aim was to examine organized crime through the eyes of those whose job it is to fight it (and define it), and in doing so investigate the ways social problems surface as sites for state intervention. A genealogy of organized crime within criminological thought was completed, demonstrating that there are a range of different ways organized crime has been constructed within the social scientific discipline, and each of these were influenced by the social context, political winds and intellectual climate of the time. Following this first finding, in-depth qualitative interviews were conducted with individuals who had worked at the apex of the policing of organized crime in Australia, in order to trace their understandings of organized crime across recent history. It was found that organized crime can be understood as an object of the discourse of the politics of law and order, the discourse of international securitization, new public management in policing business, and involves the forging of outlaw identities. Therefore, there are multiple meanings of organized crime that have arisen from an interconnected set of social, political, moral and bureaucratic discourses. The institutional response to organized crime, including law and policing, was subsequently examined. An extensive legislative framework has been enacted at multiple jurisdictional levels, and the problem of organized crime was found to be deserving of unique institutional powers and configurations to deal with it. The social problem of organized crime, as constituted by the discourses mapped out in this research, has led to a new generation of increasingly preemptive and punitive laws, and the creation of new state agencies with amplified powers. 
That is, the response to organized crime, with a focus on criminalization and enforcement, has been driven and shaped by the four discourses and the way in which the phenomenon is constructed within them. An appreciation of the nexus between the emergence of the social problem, and the formation of institutions in response to it, is important in developing a more complete understanding of the various dimensions of organized crime.