Abstract:
A Geant4-based simulation tool has been developed to perform Monte Carlo modelling of a 6 MV Varian™ iX Clinac. The computer-aided design interface of Geant4 was used to accurately model the LINAC components, including the Millennium multi-leaf collimators (MLCs). The simulation tool was verified via simulation of standard commissioning dosimetry data acquired with an ionisation chamber in a water phantom. Verification of the MLC model was achieved by simulation of leaf leakage measurements performed using Gafchromic™ film in a solid water phantom. An absolute dose calibration capability was added by including a virtual monitor chamber in the simulation. Furthermore, a DICOM-RT interface was integrated with the application to allow the simulation of radiotherapy treatment plans. The ability of the simulation tool to accurately model leaf movements and doses at each control point was verified by simulation of a widely used intensity-modulated radiation therapy (IMRT) quality assurance (QA) technique, the chair test.
Abstract:
Video surveillance technology, based on closed-circuit television (CCTV) cameras, is one of the fastest growing markets in the field of security technologies. However, existing video surveillance systems are still not at a stage where they can be used for crime prevention. The systems rely heavily on human observers and are therefore limited by factors such as fatigue and monitoring capability over long periods of time. To overcome this limitation, it is necessary to have “intelligent” processes which are able to highlight the salient data and filter out normal conditions that do not pose a threat to security. In order to create such intelligent systems, an understanding of human behaviour, specifically suspicious behaviour, is required. One of the challenges in achieving this is that human behaviour can only be understood correctly in the context in which it appears. Although context has been exploited in the general computer vision domain, it has not been widely used in the automatic suspicious behaviour detection domain. It is therefore essential that context be formulated, stored and used by the system in order to understand human behaviour. Finally, since surveillance systems can be modelled as large-scale data stream systems, it is difficult to have a complete knowledge base. In this case, the systems need not only to continuously update their knowledge but also to retrieve the extracted information which is related to the given context. To address these issues, a context-based approach for detecting suspicious behaviour is proposed. In this approach, contextual information is exploited in order to make better detections. The proposed approach utilises a data stream clustering algorithm to discover the behaviour classes and their frequencies of occurrence from the incoming behaviour instances. Contextual information is then used in addition to the above information to detect suspicious behaviour. The proposed approach is able to detect observed, unobserved and contextual suspicious behaviour. Two case studies using video feeds taken from the CAVIAR dataset and the Z-Block building, Queensland University of Technology, are presented in order to test the proposed approach. These experiments show that, by using information about context, the proposed system is able to make more accurate detections, especially of those behaviours which are suspicious only in some contexts while being normal in others. Moreover, this information gives critical feedback to system designers for refining the system. Finally, the proposed modified CluStream algorithm enables the system both to continuously update its knowledge and to effectively retrieve the information learned in a given context. The outcomes from this research are: (a) a context-based framework for automatically detecting suspicious behaviour which can be used by an intelligent video surveillance system in making decisions; (b) a modified CluStream data stream clustering algorithm which continuously updates the system knowledge and is able to retrieve contextually related information effectively; and (c) an update-describe approach which extends the capability of existing human local motion features, called interest-point-based features, to the data stream environment.
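The clustering machinery described above can be made concrete with a small sketch. The following Python fragment implements a CluStream-style micro-cluster update of the kind the thesis builds on; it is a minimal illustration under assumed names and parameters, not the proposed modified CluStream algorithm itself:

```python
import numpy as np

class MicroCluster:
    """Cluster feature vector (N, LS, SS, last update time), CluStream-style."""
    def __init__(self, point, t):
        self.n = 1
        self.ls = point.astype(float).copy()   # linear sum of points
        self.ss = point.astype(float) ** 2     # element-wise squared sum
        self.t = t                             # time of last update

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # RMS deviation of absorbed points from the centroid
        var = self.ss / self.n - self.centroid() ** 2
        return float(np.sqrt(max(var.sum(), 1e-12)))

    def absorb(self, point, t):
        self.n += 1
        self.ls += point
        self.ss += point ** 2
        self.t = t

def update(clusters, point, t, max_clusters=50, boundary=2.0):
    """Absorb a behaviour instance into the nearest micro-cluster if it falls
    within `boundary` times that cluster's radius; otherwise open a new
    cluster, dropping the least recently updated one at capacity
    (a simplified stand-in for CluStream's merge/delete policy)."""
    if clusters:
        d = [np.linalg.norm(c.centroid() - point) for c in clusters]
        i = int(np.argmin(d))
        if d[i] <= boundary * max(clusters[i].radius(), 1e-6):
            clusters[i].absorb(point, t)
            return
    if len(clusters) >= max_clusters:
        clusters.remove(min(clusters, key=lambda c: c.t))
    clusters.append(MicroCluster(point.astype(float), t))
```

Each cluster's count `n` serves as the occurrence frequency of a behaviour class, and an instance falling outside every cluster boundary is a natural candidate for unobserved (potentially suspicious) behaviour.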
Abstract:
The need to develop effective and efficient training programs has been recognised by all sectors engaged in training. In responding to this need, focus has been directed to developing good competency statements and performance indicators to measure the outcomes. Very little has been done to understand how competency statements are translated into good performance. To conceptualise this translation process, a representational model based on an information processing paradigm is proposed and discussed. It is argued that learners’ prior knowledge and the effectiveness of the instructional material are two variables that have a significant bearing on how effectively competency knowledge is translated into outcomes. To contextualise the model, examples from apprentice training are used.
Abstract:
Forest policy and forestry management in Tasmania have undergone a number of changes in the last thirty years, many explicitly aimed at improving industry sustainability, job security, and forest biodiversity conservation. Yet forestry remains a contentious issue in Tasmania, due to a number of interacting factors, the most significant of which is the prevalence of a ‘command and control’ governance approach by policymakers and managers. New approaches such as multiple-stakeholder decision-making, adaptive management, and direct public participation in policymaking are needed. Such an approach has been attempted in Canada in the last decade, through the Canadian Model Forest Program, and may be suitable for Tasmania. This paper seeks to describe what the Canadian Model Forest approach is, how it may be implemented in Tasmania, and what role it may play in the shift to a new forestry paradigm. Until such a paradigm shift occurs, contentions and confrontations are likely to continue.
Abstract:
We present a hierarchical model for assessing an object-oriented program’s security. Security is quantified using structural properties of the program code to identify the ways in which ‘classified’ data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as ‘assigning the least privilege’ and ‘reducing the size of the attack surface’. Finally, the entire program’s security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
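As a rough illustration of how such a hierarchy might be computed, the sketch below rolls hypothetical class-level readability and writability ratios up into a single program-level index. The metric definitions and names are illustrative stand-ins, not the paper's actual formulas:

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    """Per-class counts extracted from bytecode (hypothetical schema)."""
    classified_attrs: int   # attributes holding classified data
    public_accessors: int   # public methods exposing classified attributes
    public_mutators: int    # public methods modifying classified attributes

def readability(c: ClassMetrics) -> float:
    # fraction of classified state exposed through public accessors
    return c.public_accessors / max(c.classified_attrs, 1)

def writability(c: ClassMetrics) -> float:
    # fraction of classified state open to modification
    return c.public_mutators / max(c.classified_attrs, 1)

def security_index(classes) -> float:
    """Program-level index in [0, 1]; higher means more secure. A simple
    complement-of-mean roll-up, standing in for the paper's mapping to
    least-privilege and attack-surface principles."""
    if not classes:
        return 1.0
    exposure = sum(readability(c) + writability(c) for c in classes)
    return max(0.0, 1.0 - exposure / (2 * len(classes)))

# two versions of the same program can then be compared by their indices
print(security_index([ClassMetrics(4, 1, 0), ClassMetrics(2, 2, 2)]))
```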
Abstract:
The functional properties of cartilaginous tissues are determined predominantly by the content, distribution, and organization of proteoglycan and collagen in the extracellular matrix. Extracellular matrix accumulates in tissue-engineered cartilage constructs through the metabolism and transport of matrix molecules, processes that are modulated by physical and chemical factors. Constructs incubated under free-swelling conditions with freely permeable or highly permeable membranes exhibit symmetric surface regions of soft tissue. The variation in tissue properties with depth from the surfaces suggests the hypothesis that the transport processes mediated by the boundary conditions govern the distribution of proteoglycan in such constructs. A continuum model (DiMicco and Sah in Transport in Porous Media 50:57-73, 2003) was extended to test the effects of membrane permeability and perfusion on proteoglycan accumulation in tissue-engineered cartilage. The concentrations of soluble, bound, and degraded proteoglycan were analyzed as functions of time, space, and non-dimensional parameters for several experimental configurations. The results of the model suggest that the boundary condition at the membrane surface and the rate of perfusion, described by non-dimensional parameters, are important determinants of the pattern of proteoglycan accumulation. With perfusion, the proteoglycan profile is skewed, and decreases or increases in magnitude depending on the level of flow-based stimulation. Use of a semi-permeable membrane, with or without unidirectional flow, may lead to tissues with depth-increasing proteoglycan content, resembling native articular cartilage.
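A minimal numerical sketch of this kind of transport-reaction model is given below: soluble proteoglycan is synthesised, diffuses, optionally advects under perfusion, and binds into the matrix, with membrane permeability entering through the boundary condition. All parameter values are placeholders rather than those of the DiMicco-Sah formulation:

```python
import numpy as np

nx, L = 101, 2e-3                     # grid points, construct thickness [m]
dx = L / (nx - 1)
D, v = 1e-11, 0.0                     # diffusivity [m^2/s], perfusion [m/s]
ks, kb = 1e-4, 1e-3                   # synthesis and binding rates
k_mem = 1.0                           # 1 = freely permeable, 0 = sealed
dt = 0.2 * dx ** 2 / D                # stable explicit time step
s, b = np.zeros(nx), np.zeros(nx)     # soluble and bound proteoglycan

for _ in range(50_000):
    ds = np.zeros(nx)
    # interior points: diffusion plus (optional) advection under perfusion
    ds[1:-1] = (D * (s[2:] - 2 * s[1:-1] + s[:-2]) / dx ** 2
                - v * (s[2:] - s[:-2]) / (2 * dx))
    ds += ks - kb * s                 # synthesis minus binding
    b += dt * kb * s                  # bound proteoglycan accumulates
    s += dt * ds
    s[0] = (1 - k_mem) * s[1]         # membrane: sink if fully permeable,
    s[-1] = (1 - k_mem) * s[-2]       # zero-flux if sealed

# with k_mem = 1 the bound profile b(x) dips near both surfaces (soft
# surface regions); setting v > 0 skews the profile, as in the paper.
```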
Abstract:
Estimating and predicting the degradation processes of engineering assets is crucial for reducing costs and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided while an asset is still in a good health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models that describe asset degradation processes largely depend on assumptions such as discrete time, discrete states, linearity and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research also develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies. Optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies in MATLAB were performed, and case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry were also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the gearbox accelerated life test better than linear and Gaussian state space models. Furthermore, both the simulation and case studies show that the prediction algorithm based on the Gamma-based state space model can accurately identify the mean value and confidence interval of asset remaining useful lives. In addition, the simulation study shows that the proposed POSMDP-based maintenance strategy optimisation method is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that, by simultaneously optimising the next maintenance activity and the waiting time until it, the proposed method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method.
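The flavour of the proposed Monte Carlo approach can be sketched briefly: a hidden state with gamma-distributed increments (monotone and non-Gaussian), a noisy observation of it, a bootstrap particle filter, and first-passage simulation for the remaining useful life. The sketch below uses assumed parameters and is not the thesis's estimation algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

a, b, sigma = 2.0, 0.05, 0.1          # gamma shape/scale per step, obs. noise
threshold = 5.0                       # failure threshold on the hidden state

def propagate(x):
    # monotone degradation: gamma-distributed increments
    return x + rng.gamma(a, b, size=x.shape)

def rul_estimate(observations, n=2000, horizon=500):
    """Bootstrap particle filter over noisy CM readings, then Monte Carlo
    first-passage simulation giving the RUL mean and a 90% interval."""
    x = np.zeros(n)
    for y in observations:
        x = propagate(x)                              # predict
        w = np.exp(-0.5 * ((y - x) / sigma) ** 2)     # weight by likelihood
        x = rng.choice(x, size=n, p=w / w.sum())      # resample
    rul = np.full(n, float(horizon))
    pending = np.ones(n, dtype=bool)
    for t in range(1, horizon + 1):
        x = propagate(x)
        hit = pending & (x >= threshold)              # first threshold crossing
        rul[hit] = t
        pending &= ~hit
    return rul.mean(), np.percentile(rul, [5, 95])

# synthetic degradation indicator readings standing in for real CM data
obs = np.cumsum(rng.gamma(a, b, 20)) + rng.normal(0, sigma, 20)
print(rul_estimate(obs))
```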
Abstract:
This paper presents a material model to simulate load-induced cracking in reinforced concrete (RC) elements in the ABAQUS finite element package. Two numerical material models are used and combined to simulate the complete stress-strain behaviour of concrete under compression and tension, including damage properties. Both numerical techniques used in the present material model are capable of developing the stress-strain curves, including strain-softening regimes, using only the ultimate compressive strength of concrete, which is easily and practically obtainable for many existing RC structures or those to be built. The method proposed in this paper is therefore valuable for assessing existing RC structures in the absence of more detailed test results. The numerical models are slightly modified from their original versions to be compatible with the damaged plasticity model used in ABAQUS. The model is validated against experimental results for RC beam elements reported in the literature. The results indicate good agreement with load-displacement curves and observed crack patterns.
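As an illustration of deriving a complete compressive stress-strain curve, softening branch included, from the ultimate strength alone, the sketch below uses the Carreira-Chu closed form; the paper combines two (possibly different) numerical models, so this stands in for the idea rather than reproducing it:

```python
import numpy as np

def compression_curve(fc, eps_peak=0.002, n=200):
    """fc: ultimate compressive strength [MPa]. Returns strain and stress
    arrays covering the ascending branch and the softening regime."""
    beta = (fc / 32.4) ** 3 + 1.55          # Carreira-Chu shape parameter
    eps = np.linspace(1e-6, 4 * eps_peak, n)
    x = eps / eps_peak
    sigma = fc * beta * x / (beta - 1 + x ** beta)
    return eps, sigma

eps, sig = compression_curve(40.0)          # e.g. 40 MPa concrete
# sigma peaks at fc near eps_peak and softens beyond: the kind of curve
# used to populate concrete damaged plasticity tables in ABAQUS.
```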
Abstract:
The traditional Vector Space Model (VSM) is not able to represent both the structure and the content of XML documents. This paper introduces a novel method of representing XML documents in a Tensor Space Model (TSM) and then utilizing it for clustering. Empirical analysis shows that the proposed method is scalable to large datasets; moreover, the factorized matrices produced by the proposed method help to improve the quality of the clusters through the enriched document representation of both structure and content information.
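One plausible reading of the representation is sketched below: each XML document becomes a (structure path x term) matrix, the collection forms a 3-way tensor, and factorizing an unfolding yields an enriched document embedding for clustering. The construction and the use of a truncated SVD here are assumptions, not the paper's exact decomposition:

```python
import numpy as np

# toy documents as {(structure path, term): frequency} maps (hypothetical)
docs = [
    {("/book/title", "xml"): 2, ("/book/chapter", "cluster"): 1},
    {("/article/abstract", "tensor"): 3, ("/article/title", "xml"): 1},
    {("/book/title", "tensor"): 1, ("/book/chapter", "cluster"): 2},
]
paths = sorted({p for d in docs for (p, _) in d})
terms = sorted({t for d in docs for (_, t) in d})

# 3-way tensor: documents x structure paths x terms
T = np.zeros((len(docs), len(paths), len(terms)))
for i, d in enumerate(docs):
    for (p, t), c in d.items():
        T[i, paths.index(p), terms.index(t)] = c

# mode-1 unfolding followed by a truncated SVD, standing in for the tensor
# factorization; rows of `embedding` jointly encode structure and content
X = T.reshape(len(docs), -1)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
embedding = U[:, :k] * s[:k]          # enriched document representation
# `embedding` can now be fed to any clustering algorithm (e.g. k-means)
```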
Abstract:
This paper proposes a theoretical model for e-Government in Malaysia and addresses issues involved in its implementation. It presents three possible models: the Framework for Electronic Government (Grant & Chau, 2005), the Three Pillars Framework (Georgescu, 2007) and the Grid-Group Theory from cultural studies (Douglas, 1996), and integrates and adapts them to the specific needs of the Malaysian environment.
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, caused by abnormalities in the properties of the tear film, is one of the most commonly reported eye health problems. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for assessing tear film quality, used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity. Improving techniques for assessing tear film quality is therefore of clinical significance, and is the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and its reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected at the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time series estimate of TFSQ from the video recording. The routines extract a maximized area of analysis from each frame of the video recording, within which a TFSQ metric is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact-lens-wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, the HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation,
while the LSI appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural blinking conditions and suppressed blinking conditions, closely followed by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during this clinical study, was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric circle pattern into a quasi-straight-line image from which a block statistic is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase gave some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens wearers, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
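A gradient-based orientation-consistency metric of the kind described above can be sketched compactly using the structure tensor; the function below is an illustrative stand-in for the thesis's Gabor and Gaussian gradient metrics, with assumed parameter values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tfsq_coherence(frame, sigma_d=1.0, sigma_w=5.0):
    """Local-orientation coherence of the reflected Placido pattern, in [0, 1].
    Values near 1 indicate a consistent ring pattern (smooth tear film);
    disruption of the tear surface lowers the score."""
    f = frame.astype(float)
    gy, gx = np.gradient(gaussian_filter(f, sigma_d))   # smoothed gradients
    # windowed structure tensor components
    jxx = gaussian_filter(gx * gx, sigma_w)
    jyy = gaussian_filter(gy * gy, sigma_w)
    jxy = gaussian_filter(gx * gy, sigma_w)
    # coherence = eigenvalue gap over eigenvalue sum of the structure tensor
    gap = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    return float((gap / (jxx + jyy + 1e-12)).mean())
```

Applied frame by frame within the maximized area of analysis, such a function yields the TFSQ time series that the thesis goes on to model.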
Abstract:
This study investigates a way to systematically integrate information literacy (IL) into an undergraduate academic programme and develops a model for integrating information literacy across higher education curricula. Curricular integration of information literacy in this study means weaving information literacy into an academic curriculum; in the associated literature it is also referred to as the information literacy embedding approach or the intra-curricular approach. The key findings identified from this study are presented in four categories: the characteristics of IL integration; the key stakeholders in IL integration; IL curricular design strategies; and the process of IL curricular integration. Three key characteristics of the curricular integration of IL are identified: collaboration and negotiation, contextualisation, and ongoing interaction with information. The key stakeholders in the curricular integration of IL are the librarians, the course coordinators and lecturers, the heads of faculties or departments, and the students. Strategies for IL curricular design include the use of IL policies and standards, the combination of face-to-face and online teaching (an emerging trend), and the use of IL assessment tools, which play an important role in IL integration. IL can be integrated into the intended curriculum (what an institution expects its students to learn), the offered curriculum (what the teachers teach) and the received curriculum (what students actually learn). IL integration is a process of negotiation, collaboration and the implementation of the intended curriculum. IL can be integrated at different curriculum levels: institutional, faculty, departmental, course and class. Based on these key findings, an IL curricular integration model is developed. The model integrates curriculum, pedagogy and learning theories, IL theories, IL guidelines and the collaboration of multiple partners, and provides a practical approach to integrating IL into multiple courses across an academic degree. The development of the model was based on the IL integration experiences of various disciplines in three universities and the implementation experience of an engineering programme at a fourth university; thus it may be of interest to other disciplines. The model has the potential to enhance IL teaching and learning, curricular development and the implementation of graduate attributes in higher education. Sociocultural theories are applied to the research process and IL curricular design of this study. Sociocultural theories describe learning as being embedded within social events and occurring as learners interact with other people, objects, and events in a collaborative environment. They are applied to explore how academic staff and librarians experience the curricular integration of IL; they also support collaboration in the curricular integration of IL and the development of an IL integration model. This study consists of two phases. Phase I (2007) was the interview phase, in which both academic staff and librarians at three IL-active universities were interviewed. During this phase, attention was paid specifically to the practical process of curricular integration of IL and IL activity design. Phase II, the development phase (2007-2008), was conducted at a fourth university. This phase explored the systematic integration of IL into an engineering degree from Year 1 to Year 4.
Learning theories such as sociocultural theories, Bloom’s Taxonomy and IL theories are used in IL curricular development. Based on the findings from both phases, an IL integration model was developed. The findings and the model contribute to IL education, research and curricular development in higher education. The sociocultural approach adopted in this study also extends the application of sociocultural theories to the IL integration process and curricular design in higher education.
Abstract:
We present a spatiotemporal mathematical model of chlamydial infection, host immune response and spatial movement of infectious particles. The resulting partial differential equations model both the dynamics of the infection and changes in infection profile observed spatially along the length of the host genital tract. This model advances previous chlamydia modelling by incorporating spatial change, which we also demonstrate to be essential when the timescale for movement of infectious particles is equal to, or shorter than, the developmental cycle timescale. Numerical solutions and model analysis are carried out, and we present a hypothesis regarding the potential for treatment and prevention of infection by increasing chlamydial particle motility.
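The structure of such a spatiotemporal model can be illustrated with a simple finite-difference sketch: extracellular particles diffuse along the tract and infect healthy epithelium, and infected cells release new particles at the end of the developmental cycle. The equations and parameters below are illustrative placeholders, not the paper's model:

```python
import numpy as np

nx, Lx = 200, 10.0                     # grid points, tract length [cm]
dx = Lx / nx
D, beta, p, tau, d = 1e-3, 0.5, 5.0, 2.0, 1.0
dt = min(0.2 * dx ** 2 / D, 0.05)      # respect diffusion/reaction scales
E = np.zeros(nx); E[:5] = 1.0          # initial inoculum at one end
H = np.ones(nx)                        # healthy epithelial cell fraction
I = np.zeros(nx)                       # infected cell fraction

for _ in range(20_000):
    lap = np.empty(nx)
    lap[1:-1] = (E[2:] - 2 * E[1:-1] + E[:-2]) / dx ** 2
    lap[0] = 2 * (E[1] - E[0]) / dx ** 2      # no-flux (Neumann) ends
    lap[-1] = 2 * (E[-2] - E[-1]) / dx ** 2
    new_inf = beta * E * H                    # infection of healthy cells
    # particles: diffusion + release by infected cells - decay - uptake
    E += dt * (D * lap + p * I / tau - d * E - new_inf)
    H -= dt * new_inf
    # developmental cycle folded into a first-order completion rate 1/tau
    I += dt * (new_inf - I / tau)

# raising D (particle motility) spreads the infection front faster but
# dilutes the local particle load: the trade-off behind the abstract's
# motility-based treatment hypothesis.
```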
Abstract:
Distributed pipeline asset systems are crucial to society. The deterioration of these assets and the optimal allocation of limited budgets for their maintenance are crucial challenges for water utility managers. Decision makers should be assisted with optimal solutions to select the best maintenance plan given the available resources and management strategies. Much research effort has been dedicated to the development of optimal strategies for the maintenance of water pipes. Most of these maintenance strategies are intended for scheduling individual water pipes; optimal scheduling of replacement jobs for groups of pipes or other linear assets has so far received little attention in the literature. In practice, replacement planners commonly group two or three pipes manually into one replacement job using ambiguous criteria. This is obviously not the best grouping and may not be cost-effective, especially when the total cost can run to several million dollars. In this paper, an optimal group scheduling scheme with three decision criteria for distributed pipeline asset maintenance is proposed, and a Maintenance Grouping Optimization (MGO) model with multiple criteria is developed. An immediate challenge of such modelling is the scalability of the vast combinatorial solution space. To address this issue, a modified genetic algorithm is developed together with a Judgment Matrix, which corresponds to the various combinations of pipe replacement schedules. An industrial case study based on a section of a real water distribution network was conducted to test the new model. The results of the case study show that the new schedule generated a significant cost reduction compared with the schedule without pipe grouping.
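A stripped-down version of the genetic-algorithm grouping idea is sketched below: each gene is the year in which a pipe is replaced, and jobs on adjacent pipes scheduled in the same year share one mobilisation cost, so grouping is rewarded. The encoding, cost terms and operators are assumptions; the MGO model's three decision criteria and Judgment Matrix are richer than this:

```python
import random

random.seed(1)
N_PIPES, YEARS = 30, 10
RISK = [random.uniform(1.0, 5.0) for _ in range(N_PIPES)]  # risk per year deferred
SETUP, PIPE_COST = 50.0, 10.0

def cost(plan):
    """Total cost = deferred-risk penalty + replacement cost + one setup
    (mobilisation) per contiguous run of pipes replaced in the same year."""
    c = sum(RISK[i] * plan[i] for i in range(N_PIPES))
    c += PIPE_COST * N_PIPES
    for i in range(N_PIPES):
        if i == 0 or plan[i] != plan[i - 1]:
            c += SETUP
    return c

def evolve(pop_size=60, gens=300, pm=0.05):
    pop = [[random.randrange(YEARS) for _ in range(N_PIPES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_PIPES)
            child = a[:cut] + b[cut:]              # one-point crossover
            for j in range(N_PIPES):
                if random.random() < pm:
                    child[j] = random.randrange(YEARS)   # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
print(cost(best), best)   # grouped schedule and its total cost
```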