211 results for imaging gene X environment :framework
Abstract:
The Texas Department of Transportation (TxDOT) is concerned about the widening gap between preservation needs and available funding. Funding levels are not adequate to meet the preservation needs of the roadway network; therefore, projects listed in the 4-Year Pavement Management Plan must be ranked to determine which projects should be funded now and which can be postponed until a later year. Currently, each district uses locally developed methods to prioritize projects. These ranking methods have relied on less formal qualitative assessments based on engineers' subjective judgment. It is important for TxDOT to have a 4-Year Pavement Management Plan that uses a transparent, rational project ranking process. The objective of this study is to develop a conceptual framework that describes the development of the 4-Year Pavement Management Plan. The framework can be divided into three steps: (1) a network-level project screening process, (2) a project-level project ranking process, and (3) an economic analysis. A rational pavement management procedure and a project ranking method accepted by districts and the TxDOT administration will maximize efficiency in budget allocations and will potentially help improve pavement condition. As part of the implementation of the 4-Year Pavement Management Plan, the Network-Level Project Screening (NLPS) tool, including the candidate project identification algorithm and the preliminary project ranking matrix, was developed. The NLPS has been used by the Austin District Pavement Engineer (DPE) to evaluate PMIS (Pavement Management Information System) data and to prepare a preliminary list of candidate projects for further evaluation.
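The screening-then-ranking flow described above can be sketched in code. The trigger threshold, field names, and tie-breaking rule below are hypothetical illustrations, not TxDOT's actual NLPS algorithm or PMIS values.

```python
# Hypothetical sketch of a network-level screening step: flag pavement
# sections whose condition score falls below a trigger value, then rank
# candidates worst-first, breaking ties by traffic exposure.

sections = [
    {"id": "US-183-01", "condition_score": 62, "adt": 41000},
    {"id": "FM-969-04", "condition_score": 78, "adt": 9000},
    {"id": "SH-71-02",  "condition_score": 55, "adt": 30000},
]

TRIGGER = 70  # illustrative threshold, not a TxDOT value

def screen_candidates(sections):
    """Network-level screening: keep only sections below the trigger score."""
    return [s for s in sections if s["condition_score"] < TRIGGER]

def rank(candidates):
    """Project-level ranking: worst condition first; ties broken by higher ADT."""
    return sorted(candidates, key=lambda s: (s["condition_score"], -s["adt"]))

ranked = rank(screen_candidates(sections))
print([s["id"] for s in ranked])   # ['SH-71-02', 'US-183-01']
```

A real implementation would draw condition scores and traffic volumes from PMIS data rather than literals, and feed the ranked list into the economic-analysis step.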
Abstract:
The low resolution of images has been one of the major limitations in recognising humans from a distance using their biometric traits, such as face and iris. Super-resolution has been employed to improve the resolution and the recognition performance simultaneously; however, the majority of techniques operate in the pixel domain, such that the biometric feature vectors are extracted from a super-resolved input image. Feature-domain super-resolution has been proposed for face and iris, and has been shown to further improve recognition performance by directly super-resolving the features used for recognition. However, current feature-domain super-resolution approaches are limited to simple linear features such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which are not the most discriminant features for biometrics. Gabor-based features have been shown to be among the most discriminant features for biometrics, including face and iris. This paper proposes a framework for conducting super-resolution in the non-linear Gabor feature domain to further improve the recognition performance of biometric systems. Experiments have confirmed the validity of the proposed approach, demonstrating superior performance to existing linear approaches for both face and iris biometrics.
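As a rough illustration of the Gabor feature domain the paper operates in, the sketch below builds a small bank of Gabor filters and stacks their responses into a feature vector. All parameter values (kernel size, wavelength, orientations) are arbitrary choices for illustration, not those of the proposed framework.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, wavelength=4.0, theta=0.0, sigma=3.0):
    """Real part of a Gabor filter: a sinusoid modulated by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the filter orientation theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

def gabor_features(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Stack filter responses over several orientations into one feature vector."""
    responses = [convolve2d(image, gabor_kernel(theta=t), mode="same")
                 for t in thetas]
    return np.concatenate([r.ravel() for r in responses])

face = np.random.rand(32, 32)   # stand-in for a low-resolution face image
features = gabor_features(face)
print(features.shape)           # (4096,) = 4 orientations x 32*32 pixels
```

The paper's contribution is to super-resolve vectors like `features` directly, rather than super-resolving the pixel image first and extracting features afterwards.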
Abstract:
The purpose of this paper is to suggest a framework for studying the influence of 'Work motivation' on 'Project success' from team members' perspective. Results from a literature review of 1,345 articles on 'Project success' and 1,063 articles on 'Work motivation' appearing in peer-reviewed journals between 1985 and 2005 are presented. We then propose a framework for studying the impact of 'Work motivation' on 'Project success' by incorporating the key constructs pertaining to these variables derived from the literature review.
Abstract:
This report discusses the findings of a case study into "Road Construction Safety" undertaken as part of the retrospective analysis component of Sustainable Built Environment National Research Centre (SBEnrc) Project 2.7, Leveraging R&D investment for the Australian Built Environment. The Queensland Department of Transport and Main Roads (QTMR) has taken a leadership role in developing a safer working environment for road construction workers. In past decades, a range of initiatives has been introduced to contribute to improved performance in this area. Several initiatives have been undertaken by QTMR as part of their overarching commitment to safety. Three such initiatives form the basis for this case study investigation, in order to better illustrate the nature of R&D investment and its impact on day-to-day operations and the supply chain. These are the development and implementation of: (i) the Mechanical Traffic Aid; (ii) the Thermal Imaging Camera; and (iii) the Trailer-based CCTV camera. This case study should be read in conjunction with Part 1 of this suite of reports.
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the above-60-year age group. Thus, despite the fact that half of the seriously injured population comes from the 30-year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validations using long bones and appropriate reference standards are required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones involves very long scanning times, the acquired images are more prone to motion artefacts caused by random movements of the subject's limbs.
One of the artefacts observed is the step artefact, believed to occur from the random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for the reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve on the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of the 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated from mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method. A surface geometric comparison was conducted between CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and the contrast-to-noise ratio (CNR) of muscle, bone marrow and bone.
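The multilevel thresholding idea, one threshold per anatomical region, can be sketched as follows. The region boundaries and threshold values here are made-up illustrative numbers, not the study's calculated levels.

```python
import numpy as np

def multilevel_segment(volume, region_bounds, levels):
    """Apply a region-specific threshold to proximal/diaphyseal/distal blocks.

    volume: 3D image array, long axis first.
    region_bounds: slice indices splitting the volume along its long axis.
    levels: one intensity threshold per region (hypothetical values here).
    """
    mask = np.zeros(volume.shape, dtype=bool)
    starts = [0] + list(region_bounds)
    ends = list(region_bounds) + [volume.shape[0]]
    for start, end, level in zip(starts, ends, levels):
        # Voxels at or above the region's threshold are labelled as bone.
        mask[start:end] = volume[start:end] >= level
    return mask

ct = np.random.rand(30, 64, 64) * 2000   # synthetic CT-like volume
mask = multilevel_segment(ct, region_bounds=(10, 20), levels=(700, 900, 750))
print(mask.shape)   # (30, 64, 64)
```

Single-level thresholding is the special case of one region and one level; the study's point is that per-region levels cope better with intensity variation along the bone.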
In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner. The step was corrected using an aligning method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multi-threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm. The corresponding value for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm, compared to the 0.18 mm average deviation of CT-based models. The differences were not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by the poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
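The ICP-based alignment used to correct the step artefact rests on repeatedly matching nearest points and solving for the best rigid transform. A minimal sketch, assuming brute-force nearest-neighbour matching and the Kabsch (SVD) solution rather than the study's actual implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iterations=20):
    """Match each source point to its nearest target point, then solve for
    the rigid transform aligning the pairs; repeat until convergence."""
    current = src.copy()
    for _ in range(iterations):
        # Brute-force nearest neighbours (fine for small point sets).
        d = np.linalg.norm(current[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(current, matched)
        current = current @ R.T + t
    return current

# Toy check: re-align a slightly rotated and shifted copy of a point cloud.
rng = np.random.default_rng(0)
target = rng.random((50, 3))
angle = 0.1
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
source = target @ Rz.T + 0.05
aligned = icp(source, target)
```

On real bone surfaces the point sets are large and only partially overlapping, so production ICP uses spatial indexing and outlier rejection; the core match-then-solve loop is the same.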
Abstract:
The next generation of SOA needs to scale for flexible service consumption, beyond organizational boundaries and current B2B applications, into communities, eco-systems, and business networks. In these wider and, ultimately, global settings, new capabilities are needed so that business partners can efficiently and reliably enable, adapt, and expose services where they can be discovered, ordered, consumed, metered, and paid for, through new applications and opportunities driven by third parties in the global "village". This trend is already underway, in different ways, across various early-adopter market segments. For the small and medium enterprise segment, Google, Intuit-Microsoft, and others have launched appstores, through which an open-ended array of hosted applications is sourced from the development community and procured as marketplace commodities. In the corporate sector, the marketplace model and business network hubs are being put in place on top of connectivity and network orchestration investments to capitalize services as tradable assets, as seen in banking/finance (e.g., the American Express Intelligent Marketplace), logistics (e.g., the E2open hub), and the public sector (e.g., UK DirectGov whole-of-government citizen services delivery).
Abstract:
The emergence of semantic technologies to deal with the underlying meaning of things, instead of a purely syntactical representation, has led to new developments in various fields, including business process modeling. Inspired by artificial intelligence research, technologies for semantic Web services have been proposed and extended to process modeling. However, the applicability of semantic Web services to semantic business processes is limited, because business processes encompass wider business requirements than Web services do. In particular, processes are concerned with the composition of tasks, that is, the order in which activities are carried out, regardless of their implementation details; the resources assigned to carry out tasks, such as machinery, people, and goods; data exchange; and security and compliance concerns.
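The process concerns listed above (task composition, assigned resources, and data exchange) can be captured in a minimal data structure. The sketch below is a hypothetical illustration, not a notation defined in the text or any standard.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One activity in a process, with its concerns beyond implementation."""
    name: str
    resources: list = field(default_factory=list)  # machinery, people, goods
    inputs: list = field(default_factory=list)     # data consumed
    outputs: list = field(default_factory=list)    # data produced

@dataclass
class Process:
    tasks: list  # composition: the order in which activities are carried out

    def resource_demand(self):
        """All resources the process needs across its tasks."""
        return {r for t in self.tasks for r in t.resources}

order_fulfilment = Process(tasks=[
    Task("receive order", resources=["clerk"], outputs=["order form"]),
    Task("pick goods", resources=["warehouse staff", "forklift"],
         inputs=["order form"], outputs=["shipment"]),
])
print(sorted(order_fulfilment.resource_demand()))
# ['clerk', 'forklift', 'warehouse staff']
```

A Web-service description would cover little beyond each task's inputs and outputs; the resource and ordering concerns are exactly what makes business processes a wider modelling problem.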
Abstract:
A distributed fuzzy system is a real-time fuzzy system in which the input, output and computation may be located on different networked computing nodes. The ability of a distributed software application, such as a distributed fuzzy system, to adapt to changes in the computing network at runtime can provide real-time performance improvement and fault tolerance. This paper introduces an Adaptable Mobile Component Framework (AMCF) that provides a distributed dataflow-based platform with a fine-grained level of runtime reconfigurability. The execution location of small fragments (possibly as few as a handful of machine-code instructions) of an AMCF application can be moved between different computing nodes at runtime. A case study is included that demonstrates the applicability of the AMCF to a distributed fuzzy system scenario involving multiple physical agents (such as autonomous robots). Using the AMCF, fuzzy systems can now be developed such that they are distributed automatically across multiple computing nodes and adapt to runtime changes in the networked computing environment. This provides the opportunity to improve the performance of fuzzy systems deployed in scenarios where the computing environment is resource-constrained and volatile, such as multiple autonomous robots, smart environments and sensor networks.
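The core AMCF idea, small relocatable fragments of a dataflow application, can be illustrated with a toy sketch. The `Fragment` class, node names, and membership function below are hypothetical illustrations, not the actual AMCF API.

```python
# Toy illustration: dataflow fragments of a fuzzy system whose execution
# location can be reassigned at runtime without changing their behaviour.

class Fragment:
    """A small unit of computation pinned to a (named) computing node."""
    def __init__(self, name, func, host="node-A"):
        self.name, self.func, self.host = name, func, host

    def move_to(self, host):
        # In a real system this would migrate code and state over the network.
        self.host = host

    def run(self, *args):
        return self.func(*args)

def near_membership(distance):
    """Triangular membership for 'obstacle is near' (hypothetical shape)."""
    return max(0.0, min(1.0, (2.0 - distance) / 2.0))

def slow_down_rule(near_degree):
    """Rule strength maps directly to a speed-reduction factor."""
    return 1.0 - 0.8 * near_degree

fuzzify = Fragment("fuzzify", near_membership)            # runs on node-A
rule = Fragment("rule", slow_down_rule, host="node-B")    # runs on node-B

speed_factor = rule.run(fuzzify.run(0.5))  # sensor reads 0.5 m to obstacle
rule.move_to("node-C")                     # runtime relocation, result unchanged
print(round(speed_factor, 2))              # 0.4
```

The framework's contribution is making such relocation automatic and fine-grained; the fuzzy logic itself is unchanged wherever the fragments execute.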
Abstract:
Design-build (DB) is a generic form of construction procurement and, rather than representing a single system, it has evolved in practice into a variety of forms, each of which is similar to, yet different from, the others. Although the importance of selecting an appropriate DB variant has been widely accepted, difficulties occur in practice due to the multiplicity of terms and concepts used. What is needed is a taxonomy or framework within which the individual variants can be placed and their relative attributes identified and understood. Through a comprehensive literature review and content analysis, this paper establishes a systematic classification framework for DB variants based on their operational attributes. In addition to providing much-needed support for decision-making, this classification framework provides clients/owners with perspectives from which to understand and examine different categories of DB variants operationally.
Abstract:
This thesis investigates the radically uncertain formal, business, and industrial environment of current entertainment creators. It researches how a novel communication technology, the Internet, leads to novel entertainment forms; how these lead to novel kinds of businesses that lead to novel industries; and in what way established entertainment forms, businesses, and industries are part of that process. This last aspect is addressed by focusing on one exemplary established form: movies. Using a transdisciplinary approach and a combination of historical analysis, industry interviews, and an innovative mode of ‘immersive’ textual analysis, a coherent and comprehensive conceptual framework for the creation of and research into a specific emerging entertainment form is proposed. That form, products based on it, and the conceptual framework describing it are all referred to as Entertainment Architecture (‘entarch,’ for short). The thesis characterises this novel form as Internet-native transmedia entertainment, meaning it fully utilises the unique communicative characteristics of the Internet, and is spread across media. The thesis isolates four constitutive elements within Entertainment Architecture: story, play, ‘dance,’ and ‘glue.’ That is, entarch tells a story; offers playful interaction; invites social interaction between producer and consumer, and amongst consumers (‘dance’); and all components of it can be spread across many media, but are so well interconnected and mutually dependent that they are perceived as one product instead of many (‘glue’). This sets entarch apart from current media franchises like Star Wars or Halo, which are perceived as many products spread across many media. Entarch thus embraces the communicative behaviour of Internet-native consumers instead of forcing them to desist from it, it harnesses the strengths of various media while avoiding some of their weaknesses, and it can sustain viable businesses.
The entarch framework is an innovative contribution to scholarship that allows researchers to investigate this emerging entertainment form in a structured way. The thesis demonstrates this by using it to survey business models appropriate to the entarch environment. The framework can also be used by entertainment creators, exemplified in the thesis by moviemakers, to delimit the room for manoeuvre available to them in a changing environment.
Abstract:
In the last few decades, the focus on building healthy communities has grown significantly (Ashton, 2009). There is growing evidence that new approaches to planning are required to address the challenges faced by contemporary communities. These approaches need to be based on timely access to local information and collaborative planning processes (Murray, 2006; Scotch & Parmanto, 2006; Ashton, 2009; Kazda et al., 2009). However, there is little research to inform the methods that can support this type of responsive, local, collaborative and consultative health planning (Northridge et al., 2003). Some research justifies the use of decision support systems (DSS) as a tool to support planning for healthy communities. DSS have been found to increase collaboration between stakeholders and communities, improve the accuracy and quality of the decision-making process, and improve the availability of data and information for health decision-makers (Nobre et al., 1997; Cromley & McLafferty, 2002; Waring et al., 2005). Geographic information systems (GIS) have been suggested as an innovative method by which to implement DSS because they promote new ways of thinking about evidence and facilitate a broader understanding of communities. Furthermore, literature has indicated that online environments can have a positive impact on decision-making by enabling access to information by a broader audience (Kingston et al., 2001). However, only limited research has examined the implementation and impact of online DSS in the health planning field. Previous studies have emphasised the lack of effective information management systems and an absence of frameworks to guide the way in which information is used to promote informed decisions in health planning. It has become imperative to develop innovative approaches, frameworks and methods to support health planning. 
Thus, to address these identified gaps in the knowledge, this study aims to develop a conceptual planning framework for creating healthy communities and to examine the impact of DSS in the Logan Beaudesert area. Specifically, the study aims to identify the key elements and domains of information that are needed to develop healthy communities, to develop a conceptual planning framework for creating healthy communities, to collaboratively develop and implement an online GIS-based health DSS (i.e., HDSS), and to examine the impact of the HDSS on local decision-making processes. The study is based on a real-world case study of a community-based initiative that was established to improve public health outcomes and promote new ways of addressing chronic disease. The study involved the development of an online GIS-based health decision support system (HDSS), which was applied in the Logan Beaudesert region of Queensland, Australia. A planning framework was developed to account for the way in which information could be organised to contribute to a healthy community. The decision support system was developed within a unique settings-based initiative, the Logan Beaudesert Health Coalition (LBHC), designed to plan for and improve the health capacity of the Logan Beaudesert area. This setting provided a suitable platform on which to apply a participatory research design to the development and implementation of the HDSS. The HDSS was therefore a pilot study that examined the impact of this collaborative process, and of the subsequent implementation of the HDSS, on the way decision-making was perceived across the LBHC. Methodologically, a comprehensive planning framework for creating healthy communities was first developed based on a systematic literature review. A mixed-methods design was then used, with data collected through both qualitative and quantitative methods.
Specifically, data were collected by adopting a participatory action research (PAR) approach (i.e., a PAR intervention) that informed the development and conceptualisation of the HDSS. A pre- and post-design was then used to determine the impact of the HDSS on decision-making. The findings of this study revealed a meaningful framework for organising information to guide planning for healthy communities. This conceptual framework provided a comprehensive system within which to organise existing data. The PAR process was useful in engaging stakeholders and decision-makers in the development and implementation of the online GIS-based DSS. Through three PAR cycles, this study resulted in heightened awareness of online GIS-based DSS and openness to their implementation. It resulted in the development of a tailored system (i.e., the HDSS) that addressed the local information and planning needs of the LBHC. In addition, the implementation of the DSS resulted in improved decision-making and greater satisfaction with decisions within the LBHC. For example, the study illustrated the culture in which decisions were made before and after the PAR intervention and what improvements were observed after the application of the HDSS. In general, the findings indicated that decision-making processes were not merely better informed (a consequence of using the HDSS tool); the process also enhanced the overall sense of 'collaboration' in health planning practice. For example, it was found that the PAR intervention had a positive impact on the way decisions were made. The study revealed important features of the HDSS development and implementation process that will contribute to future research. The overall findings thus suggest that the HDSS is an effective tool that could play an important role in significantly improving health planning practice in the future.
Abstract:
Process-aware information systems, ranging from generic workflow systems to dedicated enterprise information systems, use work-lists to offer so-called work items to users. In real scenarios, users can be confronted with a very large number of work items that stem from multiple cases of different processes. In this jungle of work items, users may find it hard to choose the right item to work on next. The system cannot autonomously decide which is the right work item, since the decision is also dependent on conditions that are somehow outside the system. For instance, what is “best” for an organisation should be mediated with what is “best” for its employees. Current work-list handlers show work items as a simple sorted list and therefore do not provide much decision support for choosing the right work item. Since the work-list handler is the dominant interface between the system and its users, it is worthwhile to provide an intuitive graphical interface that uses contextual information about work items and users to provide suggestions about prioritisation of work items. This paper uses the so-called map metaphor to visualise work items and resources (e.g., users) in a sophisticated manner. Moreover, based on distance notions, the work-list handler can suggest the next work item by considering different perspectives. For example, urgent work items of a type that suits the user may be highlighted. The underlying map and distance notions may be of a geographical nature (e.g., a map of a city or office building), but may also be based on process designs, organisational structures, social networks, due dates, calendars, etc. The framework proposed in this paper is generic and can be applied to any process-aware information system. 
Moreover, in order to show its practical feasibility, the paper discusses a full-fledged implementation developed in the context of the open-source workflow environment YAWL, together with two real examples stemming from two very different scenarios. The results of an initial usability evaluation of the implementation are also presented, which provide a first indication of the validity of the approach.
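The distance-based suggestion mechanism in the preceding abstract can be sketched as a weighted combination of several distance notions (geographic, urgency, suitability). The item data, weights, and scoring function below are hypothetical illustrations, not values or code from the YAWL implementation.

```python
import math

# Each work item carries contextual attributes for the different perspectives.
work_items = [
    {"id": "approve-claim", "pos": (2, 3), "hours_to_deadline": 4,  "skill_match": 0.9},
    {"id": "check-invoice", "pos": (8, 1), "hours_to_deadline": 48, "skill_match": 0.6},
    {"id": "call-client",   "pos": (1, 1), "hours_to_deadline": 12, "skill_match": 0.3},
]

def geographic_distance(user_pos, item):
    """Map metaphor: how far the item is from the user on the underlying map."""
    return math.dist(user_pos, item["pos"])

def urgency_distance(item):
    return item["hours_to_deadline"]   # closer deadline = smaller distance

def suitability_distance(item):
    return 1.0 - item["skill_match"]   # better skill match = smaller distance

def suggest(user_pos, items, weights=(1.0, 0.5, 2.0)):
    """Suggest the work item with the smallest weighted combined distance."""
    w_geo, w_urg, w_fit = weights
    def score(item):
        return (w_geo * geographic_distance(user_pos, item)
                + w_urg * urgency_distance(item)
                + w_fit * suitability_distance(item))
    return min(items, key=score)

print(suggest((0, 0), work_items)["id"])   # approve-claim
```

Swapping the distance functions (process design, organisational structure, social network, calendar) changes the perspective without changing the suggestion mechanism, which is the genericity the paper claims.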
Abstract:
Information and communication technology (ICT) systems are almost ubiquitous in the modern world. It is hard to identify any industry, or for that matter any part of society, that is not in some way dependent on these systems and their continued secure operation. The security of information infrastructures, on both an organisational and a societal level, is therefore of critical importance. Information security risk assessment is an essential part of ensuring that these systems are appropriately protected and positioned to deal with a rapidly changing threat environment. The complexity of these systems and their inter-dependencies, however, introduces a similar complexity to the information security risk assessment task. This complexity suggests that information security risk assessment cannot, optimally, be undertaken manually. Information security risk assessment for individual components of the information infrastructure can be aided by the use of a software tool: a type of simulation that concentrates on modelling failure rather than normal operation. Avoiding the modelling of the operational system once again reduces the complexity of the assessment task. The use of such a tool provides the opportunity to reuse information in many different ways by developing a repository of relevant information to aid both risk assessment and management, and governance and compliance activities. Widespread use of such a tool allows the risk models developed for individual information infrastructure components to be connected, in order to develop a model of information security exposures across the entire information infrastructure. In this thesis, conceptual and practical aspects of risk and its underlying epistemology are analysed to produce a model suitable for application to information security risk assessment.
Based on this work, prototype software has been developed to explore these concepts for information security risk assessment. Initial work has been carried out to investigate the use of this software for information security compliance and governance activities. Finally, an initial concept for extending the use of this approach across an information infrastructure is presented.
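The idea of connecting component-level risk models across an infrastructure can be illustrated with a toy exposure calculation. The component data, exposure formula, and propagation rule below are hypothetical illustrations, not the thesis's actual model or prototype.

```python
# Toy model: each component has a failure likelihood and an impact; declared
# dependencies let one component's failure contribute to another's exposure.

components = {
    "web-server": {"likelihood": 0.10, "impact": 5, "depends_on": ["database"]},
    "database":   {"likelihood": 0.05, "impact": 9, "depends_on": []},
}

def total_exposure(name):
    """Direct exposure plus exposure inherited from failing dependencies."""
    c = components[name]
    exposure = c["likelihood"] * c["impact"]
    for dep in c["depends_on"]:
        # A failing dependency also takes this component down; a fuller model
        # would recurse through transitive dependencies and avoid cycles.
        exposure += components[dep]["likelihood"] * c["impact"]
    return exposure

print(total_exposure("web-server"))   # 0.10*5 + 0.05*5 = 0.75
print(total_exposure("database"))     # 0.05*9 = 0.45
```

Connecting models this way is what lets per-component assessments aggregate into an infrastructure-wide exposure picture, which is the extension the final paragraph proposes.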