908 results for Restricted Lie algebras
Abstract:
In recent years, unmanned aerial vehicles (UAVs) have been widely used in combat, and their potential applications in civil and commercial roles are also receiving considerable attention from industry and the research community. There are numerous published reports of UAVs used in Earth science missions [1], fire-fighting [2], and border security [3] trials, with other speculative deployments including applications in agriculture, communications, and traffic monitoring. However, none of these UAVs can demonstrate a level of safety equivalent to manned aircraft, particularly in the case of an engine failure, which would require an emergency or forced landing. This is arguably the main factor that has prevented these UAV trials from becoming full-scale commercial operations, and that has restricted civilian UAV operations to segregated airspace.
Abstract:
This study explores young people's creative practice through the use of Information and Communications Technologies (ICTs) in one particular learning area: Drama. The study focuses on school-based contexts and the impact of ICT-based interventions within two drama education case studies. The first pilot study involved the use of online spaces to complement a co-curricular performance project. The second focus case was a curriculum-based project in which online spaces and digital technologies were used to create a cyberdrama. Each case documents the activity systems, participant experiences and meaning making in specific institutional and technological contexts. The nature of creative practice and learning is analysed using frameworks drawn from Vygotsky's socio-historical theory (including his work on creativity) and from activity theory. Case study analysis revealed the nature of the contradictions encountered, which required an analysis of institutional constraints and the dynamics of power. Cyberdrama offers young people opportunities to explore drama through new modes, and the use of ICTs can be seen as contributing different tools, spaces and communities for creative activity. Engaging in creative practice using ICTs requires a focus on a range of cultural tools and social practices beyond the purely technological. Cybernetic creative practice requires flexibility in the negotiation of tool use and subjects, and a system that responds to feedback and can adapt. Classroom-based dramatic practice may allow for the negotiation of power and tool use in the development of collaborative works of the imagination. However, creative practice using ICTs in schools is typically restricted by authoritative power structures and access issues. The research identified participant engagement and meaning making emerging from different factors, with some students showing preferences for embodied creative practice in Drama that did not involve ICTs.
The findings of the study suggest that ICT-based interventions need to focus not only on different applications for the technology but also on embodied experience, the negotiation of power, identity and human interactions.
Abstract:
In recent months, extremes of Australia's weather have killed a significant number of people and caused millions of dollars in losses. Unlike a manned aircraft or helicopter, which have restricted air time, a UAS or a group of UAS could provide 24-hour coverage of a disaster area and be instrumented with infrared cameras to locate distressed people and relay information to emergency services. The solar-powered UAV is capable of carrying a 0.25 kg payload consuming 0.5 W and flying continuously at low altitude for 24 hours, collecting data and building a spatial distribution. This system, named Green Falcon, is fully autonomous in navigation and power generation. Equipped with solar cells covering its wing, it retrieves energy from the sun to supply power to the propulsion system and the control electronics, and charges the battery with the surplus energy. During the night, the only energy available comes from the battery, which discharges slowly until the next morning, when a new cycle starts. The prototype airplane was exhibited at the Melbourne Museum from November 2009 to February 2010.
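The day/night energy cycle described above can be sketched as a simple energy balance. All figures below (peak solar output, load, battery capacity) are assumptions for illustration, not Green Falcon specifications; only the 0.5 W payload draw comes from the abstract.

```python
import math

# Illustrative 24-hour energy-balance sketch for a solar-electric UAV.
# All numbers are assumptions, not Green Falcon specifications.
SOLAR_PEAK_W = 90.0      # assumed peak output of the wing-mounted cells
LOAD_W = 40.0            # assumed propulsion + avionics draw
PAYLOAD_W = 0.5          # payload draw stated in the abstract
BATTERY_WH = 400.0       # assumed battery capacity
DT_H = 0.25              # simulation step, hours

def solar_power(hour):
    """Assumed solar input: half-sine between 06:00 and 18:00, zero at night."""
    if 6.0 <= hour <= 18.0:
        return SOLAR_PEAK_W * math.sin(math.pi * (hour - 6.0) / 12.0)
    return 0.0

def simulate_day(start_charge_wh=BATTERY_WH):
    """Step through 24 h; surplus solar charges the battery, deficits drain it."""
    charge = start_charge_wh
    hour = 0.0
    while hour < 24.0:
        net_w = solar_power(hour) - (LOAD_W + PAYLOAD_W)
        charge = min(BATTERY_WH, max(0.0, charge + net_w * DT_H))
        hour += DT_H
    return charge

print(f"battery at end of cycle: {simulate_day():.1f} Wh")
```

Under these assumed numbers the battery survives the night with margin to spare; in practice, continuous flight hinges on exactly this balance between daytime surplus and overnight drain.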
Abstract:
Silhouettes are common features used by many applications in computer vision. For many of these algorithms to perform optimally, accurately segmenting the objects of interest from the background to extract the silhouettes is essential. Motion segmentation is a popular technique to segment moving objects from the background; however, such algorithms can be prone to poor segmentation, particularly in noisy or low-contrast conditions. In this paper, the work of [3], combining motion detection with graph cuts, is extended into two novel implementations that aim to allow greater uncertainty in the output of the motion segmentation, providing a less restricted input to the graph cut algorithm. The proposed algorithms are evaluated on a portion of the ETISEO dataset using hand-segmented ground truth data, and an improvement in performance over motion segmentation alone and the baseline system of [3] is shown.
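The exact formulation of [3] is not reproduced in this abstract. The general idea of feeding soft (uncertain) motion output into a graph cut can be sketched by turning per-pixel motion probabilities into negative-log-likelihood unary costs, which a max-flow solver would then combine with pairwise smoothness terms; this is a generic sketch, not the paper's method.

```python
import math

def unary_costs(p_motion, eps=1e-6):
    """Map a soft motion probability in [0,1] to (foreground, background)
    unary costs as negative log-likelihoods. Probabilities near 0.5 give
    nearly equal costs, leaving the final label to the smoothness term
    of the graph cut rather than to a hard motion-mask threshold."""
    p = min(1.0 - eps, max(eps, p_motion))
    return -math.log(p), -math.log(1.0 - p)

# A confident motion pixel is cheap to label foreground...
fg, bg = unary_costs(0.95)
print(fg < bg)   # True
# ...while an uncertain pixel (p = 0.5) has exactly equal costs.
fg, bg = unary_costs(0.5)
print(fg == bg)  # True
```

This is precisely where a soft input is "less restricted" than a binary mask: uncertain pixels carry little unary evidence, so the smoothness prior can override noisy motion detections.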
Abstract:
Despite the general evolution and broadening of the scope of the concept of infrastructure in many other sectors, the energy sector has maintained the same narrow boundaries for over 80 years. Energy infrastructure is still generally restricted in meaning to the transmission and distribution networks of electricity and, to some extent, gas. This is especially true in the urban development context. This early 20th century system is struggling to meet community expectations that the industry itself created and fostered for many decades. The relentless growth in demand and changing political, economic and environmental challenges require a shift from the traditional ‘predict and provide’ approach to infrastructure which is no longer economically or environmentally viable. Market deregulation and a raft of demand and supply side management strategies have failed to curb society’s addiction to the commodity of electricity. None of these responses has addressed the fundamental problem. This chapter presents an argument for the need for a new paradigm. Going beyond peripheral energy efficiency measures and the substitution of fossil fuels with renewables, it outlines a new approach to the provision of energy services in the context of 21st century urban environments.
Abstract:
This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001, Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ±0.1 m.s-1). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001, Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49m, while 86.5% of static points were within 1.5m of the actual geodetic point (mean error: 1.08 ± 0.34m, range 0.69-2.10m).
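The "change in GPS position over time" method described above can be sketched as follows. This is an illustrative reimplementation rather than the thesis's processing code, and the coordinates used in the example are made up.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon fixes (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def speeds_from_fixes(fixes):
    """Speeds (m/s) between consecutive (time_s, lat, lon) GPS fixes:
    the change-in-position-over-time method, as opposed to Doppler shift."""
    return [haversine_m(la0, lo0, la1, lo1) / (t1 - t0)
            for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:])]

# Two fixes 0.001 degrees of latitude (~111 m) apart, 10 s apart -> ~11.1 m/s.
print(speeds_from_fixes([(0.0, -27.470, 153.020), (10.0, -27.469, 153.020)]))
```

The Doppler-shift speeds reported by the receiver avoid the position-differencing step entirely, which is consistent with the lower errors the study found for that method.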
Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, the use of a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group-level speed was highly predicted using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds.
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492m) was completed without pacing. Goals for the Intervention trial were based on findings from study two using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption with only one runner showing a change of more than 10%. Group level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy which was gauged by a low Root Mean Square error across subsections and gradients. 
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, while there was a much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will however require further investigation to further improve the effectiveness of the suggested strategy.
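The weighted factor combining prior and current gradients is not specified in the abstract. A minimal sketch of the idea, assuming an exponentially smoothed gradient feeding an illustrative linear speed model (both the smoothing weight and the coefficients are assumptions, not values from the study), might look like this:

```python
def effective_gradient(gradients, alpha=0.3):
    """Exponentially smooth a per-section gradient profile so each value
    carries memory of prior sections. alpha is an assumed smoothing
    weight, not a coefficient reported by the study."""
    eff, prev = [], gradients[0]
    for g in gradients:
        prev = alpha * prev + (1.0 - alpha) * g
        eff.append(prev)
    return eff

def predicted_speed(eff_gradients, flat_speed=4.0, sensitivity=0.12):
    """Illustrative linear model: speed falls by `sensitivity` m/s per
    percent of effective uphill gradient (assumed coefficients)."""
    return [flat_speed - sensitivity * g for g in eff_gradients]

# Level -> uphill -> level: predicted speed stays depressed on the level
# sections just after the climb, mirroring the carry-over effect reported.
profile = [0, 0, 6, 6, 0, 0]  # section gradients, percent
speeds = predicted_speed(effective_gradient(profile))
print([round(s, 2) for s in speeds])
```

The point of the sketch is qualitative: because the effective gradient decays rather than resets, level sections immediately after a hill are predicted slower than level sections further away, which is the behaviour the study observed.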
Abstract:
Throughout history, developments in medicine have aimed to improve patient quality of life and reduce the trauma associated with surgical treatment. Surgical access to internal organs and bodily structures has traditionally been via large incisions. Endoscopic surgery presents a technique for surgical access via small (10 mm) incisions by utilising a scope and camera for visualisation of the operative site. Endoscopy presents enormous benefits for patients in terms of lower postoperative discomfort, and reduced recovery and hospitalisation time. Since the first gall bladder extraction operation was performed in France in 1987, endoscopic surgery has been embraced by the international medical community. With the adoption of the new technique, problems never previously encountered in open surgery were revealed. One such problem is that the removal of large tissue specimens and organs is restricted by the small incision size. Instruments have been developed to address this problem; however, none of the devices provides a totally satisfactory solution. They have a number of critical weaknesses:
- The size of the access incision has to be enlarged, thereby compromising the entire endoscopic approach to surgery.
- The physical quality of the specimen extracted is very poor and is not suitable for the necessary postoperative pathological examinations.
- The safety of both the patient and the physician is jeopardised.
The problem of tissue and organ extraction at endoscopy is investigated and addressed. In addition to background information covering endoscopic surgery, this thesis describes the entire approach to the design problem and the steps taken before arriving at the final solution. This thesis contributes to the body of knowledge associated with the development of endoscopic surgical instruments. A new product capable of extracting large tissue specimens and organs in endoscopy is the final outcome of the research.
Abstract:
Reliability analysis has several important engineering applications. Designers and operators of equipment are often interested in the probability of the equipment operating successfully to a given age - this probability is known as the equipment's reliability at that age. Reliability information is also important to those charged with maintaining an item of equipment, as it enables them to model and evaluate alternative maintenance policies for the equipment. In each case, information on failures and survivals of a typical sample of items is used to estimate the required probabilities as a function of the item's age, this process being one of many applications of the statistical techniques known as distribution fitting. In most engineering applications, the estimation procedure must deal with samples containing survivors (suspensions or censorings); this thesis focuses on several graphical estimation methods that are widely used for analysing such samples. Although these methods have been current for many years, they share a common shortcoming: none of them is continuously sensitive to changes in the ages of the suspensions, and we show that the resulting reliability estimates are therefore more pessimistic than necessary. We use a simple example to show that the existing graphical methods take no account of any service recorded by suspensions beyond their respective previous failures, and that this behaviour is inconsistent with one's intuitive expectations. In the course of this thesis, we demonstrate that the existing methods are only justified under restricted conditions. We present several improved methods and demonstrate that each of them overcomes the problem described above, while reducing to one of the existing methods where this is justified. Each of the improved methods thus provides a realistic set of reliability estimates for general (unrestricted) censored samples. Several related variations on these improved methods are also presented and justified. 
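As a point of reference for the graphical methods discussed, the standard Kaplan-Meier (product-limit) reliability estimate for a censored sample can be sketched as follows. Note that this is the conventional estimator, not one of the thesis's improved methods; indeed, shifting a suspension's age anywhere between two adjacent failures leaves the estimate unchanged, which illustrates exactly the insensitivity to suspension ages that the thesis criticises.

```python
def kaplan_meier(samples):
    """Product-limit reliability estimate from (age, failed) pairs, where
    failed=False marks a suspension (censored item). Returns a list of
    (failure_age, estimated reliability just after that age)."""
    at_risk = len(samples)
    reliability, curve = 1.0, []
    for age, failed in sorted(samples):
        if failed:
            reliability *= 1.0 - 1.0 / at_risk
            curve.append((age, reliability))
        at_risk -= 1  # failures and suspensions both leave the risk set
    return curve

# Failures at 10, 30 and 45 h; suspensions (still surviving) at 20 and 40 h.
data = [(10, True), (20, False), (30, True), (40, False), (45, True)]
for age, r in kaplan_meier(data):
    print(f"R({age}) = {r:.3f}")
```

Moving the suspension at 20 h to, say, 29 h produces the identical curve: the estimator only uses the ordering of suspensions relative to failures, not their exact ages, so service accumulated beyond the previous failure is ignored.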
Abstract:
During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis from the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. 
The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections has been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all the 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.
Abstract:
Artificial neural network (ANN) learning methods provide a robust and non-linear approach to approximating the target function for many classification, regression and clustering problems. ANNs have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge. The arguments are the poor comprehensibility of the learned ANN, and the inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explanation of the decision process in ANNs in the form of symbolic rules (predicate rules with variables); and (2) provision of explanatory capability by mapping the general conceptual knowledge that is learned by the neural networks into a knowledge base to be used in a rule-based reasoning system. A multi-stage methodology GYAN is developed and evaluated for the task of extracting knowledge from the trained ANNs. The extracted knowledge is represented in the form of restricted first-order logic rules, and subsequently allows user interaction by interfacing with a knowledge based reasoner. The performance of GYAN is demonstrated using a number of real world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived describing the overall behaviour of the ANN with high accuracy and fidelity, and (2) a concise explanation is given (in terms of rules, facts and predicates activated in a reasoning episode) as to why a particular instance is being classified into a certain category.
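The decompositional idea underlying rule extraction can be illustrated for a single trained unit with binary inputs; this is a toy sketch of the basic principle only, not the multi-stage GYAN methodology or its restricted first-order rules.

```python
from itertools import product

def extract_rule(weights, bias, names):
    """Enumerate all binary input combinations of a single trained unit and
    collect those that fire (weighted sum + bias > 0) as a DNF rule.
    Feasible only for small input counts; real extraction methods prune
    this exponential search."""
    terms = []
    for inputs in product([0, 1], repeat=len(weights)):
        if sum(w * x for w, x in zip(weights, inputs)) + bias > 0:
            literals = [n if x else f"NOT {n}" for n, x in zip(names, inputs)]
            terms.append(" AND ".join(literals))
    return " OR ".join(f"({t})" for t in terms)

# A unit with hypothetical trained weights approximating A AND B:
print(extract_rule([1.0, 1.0], -1.5, ["A", "B"]))  # (A AND B)
```

The extracted expression is an exact symbolic interpretation of that one unit's behaviour over binary inputs, which is the "fidelity" notion the thesis evaluates at the level of whole networks.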
Abstract:
Literally, the word compliance suggests conformity in fulfilling official requirements. The thesis presents the results of the analysis and design of a class of protocols called compliant cryptologic protocols (CCP). The thesis presents a notion for compliance in cryptosystems that is conducive as a cryptologic goal. CCP are employed in security systems used by at least two mutually mistrusting sets of entities. The individuals in the sets of entities only trust the design of the security system and any trusted third party the security system may include. Such a security system can be thought of as a broker between the mistrusting sets of entities. In order to provide confidence in operation for the mistrusting sets of entities, CCP must provide compliance verification mechanisms. These mechanisms are employed either by all the entities or a set of authorised entities in the system to verify the compliance of the behaviour of various participating entities with the rules of the system. It is often stated that confidentiality, integrity and authentication are the primary interests of cryptology. It is evident from the literature that authentication mechanisms employ confidentiality and integrity services to achieve their goal. Therefore, the fundamental services that any cryptographic algorithm may provide are confidentiality and integrity only. Since controlling the behaviour of the entities is not a feasible cryptologic goal, the verification of the confidentiality of any data is a futile cryptologic exercise. For example, there exists no cryptologic mechanism that would prevent an entity from willingly or unwillingly exposing its private key corresponding to a certified public key. The confidentiality of the data can only be assumed. Therefore, any verification in cryptologic protocols must take the form of integrity verification mechanisms. Thus, compliance verification must take the form of integrity verification in cryptologic protocols.
A definition of compliance that is conducive as a cryptologic goal is presented as a guarantee on the confidentiality and integrity services. The definitions are employed to provide a classification mechanism for various message formats in a cryptologic protocol. The classification assists in the characterisation of protocols, which helps provide a focus for the goals of the research. The resulting concrete goal of the research is the study of those protocols that employ message formats to provide restricted confidentiality and universal integrity services to selected data. The thesis proposes an informal technique to understand, analyse and synthesise the integrity goals of a protocol system. The thesis contains a study of key recovery, electronic cash, peer-review, electronic auction, and electronic voting protocols. All these protocols contain message formats that provide restricted confidentiality and universal integrity services to selected data. The study of key recovery systems aims to achieve robust key recovery relying only on the certification procedure and without the need for tamper-resistant system modules. The result of this study is a new technique for the design of key recovery systems called hybrid key escrow. The thesis identifies a class of compliant cryptologic protocols called secure selection protocols (SSP). The uniqueness of this class of protocols is the similarity in the goals of the member protocols, namely peer-review, electronic auction and electronic voting. The problem statement describing the goals of these protocols contains a tuple, (I, D), where I usually refers to an identity of a participant and D usually refers to the data selected by the participant. SSP are interested in providing a confidentiality service to the tuple for hiding the relationship between I and D, and an integrity service to the tuple after its formation to prevent the modification of the tuple.
The thesis provides a schema to solve the instances of SSP by employing electronic cash technology. The thesis makes a distinction between electronic cash technology and electronic payment technology. It treats electronic cash technology as a certification mechanism that allows the participants to obtain a certificate on their public key, without revealing the certificate or the public key to the certifier. The thesis abstracts the certificate and the public key as a data structure called an anonymous token. It proposes design schemes for the peer-review, e-auction and e-voting protocols by employing the schema with the anonymous token abstraction. The thesis concludes by providing a variety of problem statements for future research that would further enrich the literature.
Abstract:
One approach to reducing the yield losses caused by banana viral diseases is the use of genetic engineering and pathogen-derived resistance strategies to generate resistant cultivars. The development of transgenic virus resistance requires an efficient banana transformation method, particularly for commercially important 'Cavendish' type cultivars such as 'Grand Nain'. Prior to this study, only two examples of the stable transformation of banana had been reported, both of which demonstrated the principle of transformation but did not characterise transgenic plants in terms of the efficiency at which individual transgenic lines were generated, relative activities of promoters in stably transformed plants, and the stability of transgene expression. The aim of this study was to develop more efficient transformation methods for banana, assess the activity of some commonly used and also novel promoters in stably transformed plants, and transform banana with genes that could potentially confer resistance to banana bunchy top nanovirus (BBTV) and banana bract mosaic potyvirus (BBrMV). A regeneration system using immature male flowers as the explant was established. The frequency of somatic embryogenesis in male flower explants was influenced by the season in which the inflorescences were harvested. Further, the media requirements of various banana cultivars in respect to the 2,4-D concentration in the initiation media also differed. Following the optimisation of these and other parameters, embryogenic cell suspensions of several banana (Musa spp.) cultivars including 'Grand Nain' (AAA), 'Williams' (AAA), 'SH-3362' (AA), 'Goldfinger' (AAAB) and 'Bluggoe' (ABB) were successfully generated. Highly efficient transformation methods were developed for both 'Bluggoe' and 'Grand Nain'; this is the first report of microprojectile bombardment transformation of the commercially important 'Grand Nain' cultivar. 
Following bombardment of embryogenic suspension cells, regeneration was monitored from single transformed cells to whole plants using a reporter gene encoding the green fluorescent protein (gfp). Selection with kanamycin enabled the regeneration of a greater number of plants than with geneticin, while still preventing the regeneration of non-transformed plants. Southern hybridisation confirmed the neomycin phosphotransferase gene (npt II) was stably integrated into the banana genome and that multiple transgenic lines were derived from single bombardments. The activity, stability and tissue specificity of the cauliflower mosaic virus 35S (CaMV 35S) and maize polyubiquitin-1 (Ubi-1) promoters were examined. In stably transformed banana, the Ubi-1 promoter provided approximately six-fold higher β-glucuronidase (GUS) activity than the CaMV 35S promoter, and both promoters remained active in glasshouse-grown plants for the six months they were observed. The intergenic regions of BBTV DNA-1 to -6 were isolated and fused to either the uidA (GUS) or gfp reporter genes to assess their promoter activities. BBTV promoter activity was detected in banana embryogenic cells using the gfp reporter gene. Promoters derived from BBTV DNA-4 and -5 generated the highest levels of transient activity, which were greater than that generated by the maize Ubi-1 promoter. In transgenic banana plants, the activity of the BBTV DNA-6 promoter (BT6.1) was restricted to the phloem of leaves and roots, stomata and root meristems. The activity of the BT6.1 promoter was enhanced by the inclusion of intron-containing fragments derived from the maize Ubi-1, rice Act-1, and sugarcane rbcS 5' untranslated regions in GUS reporter gene constructs. In transient assays in banana, the rice Act-1 and maize Ubi-1 introns provided the most significant enhancement, increasing expression levels 300-fold and 100-fold, respectively. The sugarcane rbcS intron increased expression about 10-fold.
In stably transformed banana plants, the maize Ubi-1 intron enhanced BT6.1 promoter activity to levels similar to that of the CaMV 35S promoter, but did not appear to alter the tissue specificity of the promoter. Both 'Grand Nain' and 'Bluggoe' were transformed with constructs that could potentially confer resistance to BBTV and BBrMV, including constructs containing BBTV DNA-1 major and internal genes, BBTV DNA-5 gene, and the BBrMV coat protein-coding region all under the control of the Ubi-1 promoter, while the BT6 promoter was used to drive the npt II selectable marker gene. At least 30 transgenic lines containing each construct were identified and replicates of each line are currently being generated by micropropagation in preparation for virus challenge.
Abstract:
Developmental progression and differentiation of distinct cell types depend on the regulation of gene expression in space and time. Tools that allow spatial and temporal control of gene expression are crucial for the accurate elucidation of gene function. Most systems to manipulate gene expression allow control of only one factor, space or time, and currently available systems that control both temporal and spatial expression of genes have their limitations. We have developed a versatile two-component system that overcomes these limitations, providing reliable, conditional gene activation in restricted tissues or cell types. This system allows conditional tissue-specific ectopic gene expression and provides a tool for conditional cell type- or tissue-specific complementation of mutants. The chimeric transcription factor XVE, in conjunction with Gateway recombination cloning technology, was used to generate a tractable system that can efficiently and faithfully activate target genes in a variety of cell types. Six promoters/enhancers, each with different tissue specificities (including vascular tissue, trichomes, root, and reproductive cell types), were used in activation constructs to generate different expression patterns of XVE. Conditional transactivation of reporter genes was achieved in a predictable, tissue-specific pattern of expression, following the insertion of the activator or the responder T-DNA in a wide variety of positions in the genome. Expression patterns were faithfully replicated in independent transgenic plant lines. Results demonstrate that we can also induce mutant phenotypes using conditional ectopic gene expression. One of these mutant phenotypes could not have been identified using noninducible ectopic gene expression approaches.
Abstract:
Establishing a nationwide Electronic Health Record (EHR) system has become a primary objective for many countries around the world, including Australia, in order to improve the quality of healthcare while at the same time decreasing its cost. Doing so will require federating the large number of patient data repositories currently in use throughout the country. However, implementation of EHR systems is being hindered by several obstacles, among them concerns about data privacy and trustworthiness. Current IT solutions fail to satisfy patients’ privacy desires and do not provide a trustworthiness measure for medical data. This thesis starts with the observation that existing EHR system proposals suffer from five serious shortcomings that affect patients’ privacy and safety, and medical practitioners’ trust in EHR data: accuracy and privacy concerns over linking patients’ existing medical records; the inability of patients to have control over who accesses their private data; the inability to protect against inferences about patients’ sensitive data; the lack of a mechanism for evaluating the trustworthiness of medical data; and the failure of current healthcare workflow processes to capture and enforce patients’ privacy desires. Following an action research method, this thesis addresses the above shortcomings by firstly proposing an architecture for linking electronic medical records in an accurate and private way, where patients are given control over what information can be revealed about them. This is accomplished by extending the structure and protocols introduced in federated identity management to link a patient’s EHR to his existing medical records by using pseudonym identifiers. Secondly, a privacy-aware access control model is developed to satisfy patients’ privacy requirements.
The model is developed by integrating three standard access control models in a way that gives patients access control over their private data and ensures that legitimate uses of EHRs are not hindered. Thirdly, a probabilistic approach for detecting and restricting inference channels resulting from publicly-available medical data is developed to guard against indirect accesses to a patient’s private data. This approach is based upon a Bayesian network and the causal probabilistic relations that exist between medical data fields. The resulting definitions and algorithms show how an inference channel can be detected and restricted to satisfy patients’ expressed privacy goals. Fourthly, a medical data trustworthiness assessment model is developed to evaluate the quality of medical data by assessing the trustworthiness of its sources (e.g. a healthcare provider or medical practitioner). In this model, Beta and Dirichlet reputation systems are used to collect reputation scores about medical data sources and these are used to compute the trustworthiness of medical data via subjective logic. Finally, an extension is made to healthcare workflow management processes to capture and enforce patients’ privacy policies. This is accomplished by developing a conceptual model that introduces new workflow notions to make the workflow management system aware of a patient’s privacy requirements. These extensions are then implemented in the YAWL workflow management system.
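The trustworthiness assessment described above rests on standard Beta reputation and subjective-logic constructions (Dirichlet reputation generalises the Beta case to multi-valued ratings). The sketch below is an illustrative approximation of that idea, not the thesis's actual implementation; the function name, the base rate of 0.5, and the example feedback counts are assumptions introduced here.

```python
# Illustrative sketch: map Beta reputation evidence about a medical data
# source (counts of positive and negative feedback) to a subjective-logic
# opinion (belief, disbelief, uncertainty) and an expected trustworthiness.
# Names and the base-rate value are hypothetical, not from the thesis.

def beta_opinion(positive, negative, base_rate=0.5):
    """Return (belief, disbelief, uncertainty, expected_trust) for a source
    with `positive` and `negative` feedback reports, under a uniform prior."""
    denom = positive + negative + 2.0      # the +2 comes from the Beta(1, 1) prior
    belief = positive / denom
    disbelief = negative / denom
    uncertainty = 2.0 / denom              # shrinks as evidence accumulates
    expected_trust = belief + base_rate * uncertainty
    return belief, disbelief, uncertainty, expected_trust

# A source with 8 positive and 2 negative reports:
b, d, u, trust = beta_opinion(8, 2)
print(round(trust, 2))  # 0.75, equivalently (r + 1) / (r + s + 2)
```

With no evidence at all the opinion degenerates to pure uncertainty and the expected trust falls back to the base rate, which is why new, unrated sources are treated as neither trusted nor distrusted.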
Abstract:
The Rudd Labor Government rode to power in Australia on the promise of an 'education revolution'. The term 'education revolution' carries all the obligatory marketing metaphors that an aspirant government might want recognised by the general public on the eve of coming to power; however, in revolutionary terms it fades into insignificance in comparison to the real revolution in Australian education. This revolution, simply put, is the elevation of Indigenous Knowledge Systems in Australian universities. In the forty-three years since the nation-setting Referendum of 1967, a generation has made a beachhead on the educational landscape. Now a further generation, having made it into the field of higher degrees, yearns for the ways and means to authentically marshal Indigenous knowledge. The Institute of Koorie Education at Deakin has for over twenty years not only witnessed this transition but has also been a leader in the field. With the appointment of two Chairs of Indigenous Knowledge Systems to build on its already established research profile, the Institute has moved towards what is the 'real revolution' in education: the elevation of Indigenous Knowledge as a legitimate knowledge system. This paper lays out the Institute of Koorie Education’s Research Plan and the basis of an argument put to the academy that will be the driver for this pursuit.