Abstract:
Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph relating the error of prediction to the sample size is provided, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method adds one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set so as to maximise the determinant of the variance-covariance matrix of the predictions.
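To make the augmentation idea concrete, here is a minimal sketch (not from the paper) of one-at-a-time augmentation, assuming a Gaussian-process emulator; the toy function, candidate set, kernel and sample sizes are all illustrative assumptions.

```python
# Sketch: one-at-a-time augmentation of a computer experiment by adding the
# candidate point with maximum prediction variance (assumes a GP emulator;
# toy_model is a made-up stand-in, not the fire model from the paper).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_model(x):                      # hypothetical stand-in for the simulator
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(10, 2))    # initial design
y = toy_model(X)
candidates = rng.uniform(0, 1, size=(500, 2))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
for _ in range(5):                     # add five runs sequentially
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    best = np.argmax(std)              # the point we know least about
    x_new = candidates[best:best + 1]
    X = np.vstack([X, x_new])
    y = np.append(y, toy_model(x_new))
```

The determinant-based alternative mentioned above would instead score whole candidate subsets by the determinant of their predictive covariance rather than adding single points greedily.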
Abstract:
The world is rapidly ageing. Against this backdrop, increasing incidences of dementia are being reported worldwide, with Alzheimer's disease (AD) being the most common form of dementia in the elderly. It is estimated that AD affects almost 4 million people in the US and costs the US economy more than 65 million dollars annually. There is currently no cure for AD, but various therapeutic agents have been employed in attempts to slow the progression of the illness, one of which is oestrogen. Over recent decades, scientists have focused mainly on the roles of oestrogen in the prevention and treatment of AD. Newer evidence suggests that testosterone might also be involved in the pathogenesis of AD. Although the exact mechanisms by which androgens might affect AD are still largely unknown, it is known that testosterone can act directly via androgen receptor-dependent mechanisms or indirectly, by conversion to oestrogen, to exert this effect. Clinical trials need to be conducted to ascertain the putative role of androgen replacement in Alzheimer's disease.
Abstract:
Since the 1980s, when the concept of innovation systems (IS) was first presented (Freeman, 2004), a large body of work has been done on IS. IS is a framework consisting of elements related to innovation activities, such as innovation actors and institutional environments, and the relationships between those elements (Lundvall, 1992; Nelson, 1993). Studies on NIS/RIS aim to understand the structures and dynamics of IS (Lundvall, 1992; Nelson, 1993), mainly through case studies and comparative case studies (Archibugi, 1996; MacDowall, 1984; Mowery, 1998; Radosevic, 2000). Research on IS has extended from the national level (NIS) to the regional level (RIS) (Cooke, Uranga, & Etxebarria, 1997; Cooke, Uranga, & Etxebarria, 1998), and from developed economies to developing economies. RIS is vital, especially for a large and diverse country (Edquist, 2004) like China. More recently, building on the NIS literature, Furman, Porter and Stern (2002) introduced the framework of national innovation capacity (NIC), which employs a quantitative approach to understanding the degree to which elements of NIS affect innovation capacity. Regional innovation capacity (RIC) is the adaptation of NIC at the regional level. Although regional-level research is important, there is limited work on RIC, and even less in transitional economies, which differ from developed economies. To better understand RIC in transitional countries, this thesis conducted a study of 30 administrative regions in Mainland China between 1991 and 2005. To establish the key factors driving RIC in China, the study explored the impact of three elements in the innovation system: (a) innovation actors, (b) innovation inputs, and (c) international and domestic innovation system interactions.
Abstract:
Many software applications extend their functionality by dynamically loading executable components into their allocated address space. Such components, exemplified by browser plugins and other software add-ons, not only enable reusability, but also promote programming simplicity, as they reside in the same address space as their host application, supporting easy sharing of complex data structures and pointers. However, such components are also often of unknown provenance and quality and may be riddled with accidental bugs or, in some cases, deliberately malicious code. Statistics show that such component failures account for a high percentage of software crashes and vulnerabilities. Enabling isolation of such fine-grained components is therefore necessary to increase the stability, security and resilience of computer programs. This thesis addresses this issue by showing how host applications can create isolation domains for individual components, while preserving the benefits of a single address space, via a new architecture for software isolation called LibVM. Towards this end, we define a specification outlining the functional requirements for LibVM, identify the conditions under which these functional requirements can be met, define an abstract Application Programming Interface (API) that encompasses the general problem of isolating shared libraries, thus separating policy from mechanism, and prove its practicality with two concrete implementations based on hardware virtualization and system call interposition, respectively. The results demonstrate that hardware isolation minimises the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. This thesis concludes that not only is it feasible to create such isolation domains for individual components, but that this should also be a fundamental, operating-system-supported abstraction, which would lead to more stable and secure applications.
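The abstract names the LibVM API but does not spell out its operations; the sketch below is purely hypothetical, showing the shape such an interface might take and how it separates policy (what the host isolates and shares) from mechanism (how the boundary is enforced). None of these names come from the thesis.

```python
# Hypothetical sketch only: the thesis defines a LibVM API, but the abstract
# gives no signatures, so the names below are illustrative, not the real API.
from abc import ABC, abstractmethod

class IsolationDomain(ABC):
    """One untrusted component running in its own isolation domain,
    while still appearing to share the host's address space."""

    @abstractmethod
    def load(self, library_path: str) -> None:
        """Map the component's code into the domain."""

    @abstractmethod
    def call(self, symbol: str, *args):
        """Invoke an exported entry point; arguments are marshalled
        across the isolation boundary by whatever mechanism is in use."""

    @abstractmethod
    def share(self, buffer: bytearray) -> int:
        """Grant the component access to one specific host buffer and
        return a handle valid inside the domain."""

# Policy (what to isolate, what to share) stays with the host application;
# the mechanism behind the interface could be hardware virtualization or
# system call interposition, matching the two implementations above.
class VirtualizedDomain(IsolationDomain):
    ...  # backed by a hardware-virtualized sandbox (implementation elided)
```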
Abstract:
Global Navigation Satellite Systems (GNSS)-based observation systems can provide high-precision positioning and navigation solutions in real time, on the order of sub-centimetre if we make use of carrier phase measurements in the differential mode and deal well with all the bias and noise terms. However, these carrier phase measurements are ambiguous due to unknown, integer numbers of cycles. One key challenge in the differential carrier phase mode is to fix the integer ambiguities correctly. On the other hand, in safety-of-life or liability-critical applications, such as vehicle safety positioning and aviation, not only is high accuracy required, but reliability is also important. This PhD research studies how to achieve high reliability for ambiguity resolution (AR) in a multi-GNSS environment. GNSS ambiguity estimation and validation problems are the focus of the research effort. In particular, we study the case of multiple constellations, covering the initial to full operation of the foreseeable Galileo, GLONASS, Compass and QZSS navigation systems from the next few years to the end of the decade. Since real observation data are only available from the GPS and GLONASS systems, a simulation method named Virtual Galileo Constellation (VGC) is applied to generate observational data for another constellation in the data analysis. In addition, both full ambiguity resolution (FAR) and partial ambiguity resolution (PAR) algorithms are used in processing single- and dual-constellation data. Firstly, a brief overview of related work on AR methods and reliability theory is given. Next, a modified inverse integer Cholesky decorrelation method and its AR performance are presented. Subsequently, a new measure of decorrelation performance called the orthogonality defect is introduced and compared with other measures. Furthermore, a new AR scheme that considers the ambiguity validation requirement in controlling the search space size is proposed to improve search efficiency. With respect to the reliability of AR, we also discuss the computation of the ambiguity success rate (ASR) and confirm that the success rate computed with the integer bootstrapping method is quite a sharp approximation to the actual integer least-squares (ILS) success rate. The advantages of multi-GNSS constellations are examined in terms of the PAR technique with a predefined ASR. Finally, a novel satellite selection algorithm for reliable ambiguity resolution, called SARA, is developed. In summary, the study demonstrates that when the ASR is close to one, the reliability of AR can be guaranteed and ambiguity validation is effective. The work then focuses on new strategies to improve the ASR, including a partial ambiguity resolution procedure with a predefined success rate and a novel satellite selection strategy with a high success rate. The proposed strategies bring the significant benefits of multi-GNSS signals to real-time, high-precision and high-reliability positioning services.
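For the ASR discussion, a minimal sketch of the integer-bootstrapping success rate may help; it follows the standard formula from the GNSS literature, with made-up conditional standard deviations standing in for real decorrelated ambiguity statistics.

```python
# Sketch: integer-bootstrapping ambiguity success rate (ASR), the quantity the
# abstract uses as a sharp approximation to the ILS success rate. Inputs are
# the conditional standard deviations from the LtDL factorisation of the
# decorrelated ambiguity covariance matrix; the values below are made up.
import numpy as np
from scipy.stats import norm

def bootstrap_success_rate(cond_std):
    """P_s = prod_i (2*Phi(1/(2*sigma_{i|I})) - 1)."""
    cond_std = np.asarray(cond_std)
    return np.prod(2.0 * norm.cdf(1.0 / (2.0 * cond_std)) - 1.0)

sigmas = [0.05, 0.08, 0.12]   # hypothetical conditional std devs (cycles)
print(f"ASR = {bootstrap_success_rate(sigmas):.4f}")
# When the ASR is close to one, fixing all ambiguities is reliable; a PAR
# scheme instead fixes only the subset whose ASR meets a preset threshold.
```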
Abstract:
In Queensland, there is little research that speaks to the historical experiences of schooling. Aboriginal education remains a part of the silenced history of Aboriginal people. This thesis presents stories of schooling from Aboriginal people across three generations of adult storytellers. Elders, grandparents, and young parents involved in an early childhood urban playgroup were included, and stories from the children attending the playgroup were also welcomed. The research methodology involved narrative storywork, which is culturally appropriate because Aboriginal stories connect the past with the present. The conceptual framework for the research draws on decolonising theory. Typically, reports of Aboriginal schooling and outcomes position Aboriginal families and children within a deficit discourse. The issues and challenges faced by urban Murri families who have young children or children in school are largely unknown. This research allowed Aboriginal families to participate in an engaged dialogue about their childhoods and offered opportunities to tell their stories of education. Key research questions were: What was the reality of school for different generations of Indigenous people? What beliefs and values are held about mainstream education for Indigenous children? What ideas are communicated about school across generations? Narratives from five elders, five grandparents, and five (urban) mothers of young Indigenous children are presented. The elders offer testimony on their recollected experiences of schooling in a mission, a Yumba school (fringe-dwellers’ camp), and country schools. Their stories also speak to the need to pass as non-Indigenous and act as “white”. The next generation of storytellers, the grandparents, speak to their lives as “stolen children”. The final storytellers are the Murri parents, who speak to the current and recent past of education, as well as their family experiences as they parent young children who are about to enter school or who are in the early years of school.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a one-bit or one-character change can be significant, so the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility, and is designed to produce features better suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
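As an illustration of the quantization and encoding stage, and of why the learnt thresholds matter, here is a minimal sketch assuming equiprobable (quantile-based) threshold training and Gray-coded binary encoding; the dissertation's actual schemes may differ.

```python
# Sketch of a quantization-and-encoding stage: thresholds are learnt from
# training features (equiprobable bins here, one common choice) and each
# feature is binarised with a Gray code. Details are illustrative only.
import numpy as np

def train_thresholds(features, bits=2):
    """Learn per-dimension quantiles so each quantization bin is equally likely."""
    qs = np.linspace(0, 1, 2 ** bits + 1)[1:-1]
    return np.quantile(features, qs, axis=0)        # shape: (2^bits - 1, dims)

def encode(feature_vec, thresholds, bits=2):
    """Quantise each dimension, then Gray-code so that drifting across one
    threshold flips only one bit (the robustness requirement)."""
    levels = (feature_vec > thresholds).sum(axis=0)  # bin index per dimension
    gray = levels ^ (levels >> 1)
    return np.concatenate([(g >> np.arange(bits - 1, -1, -1)) & 1 for g in gray])

rng = np.random.default_rng(1)
train = rng.normal(size=(1000, 8))                  # stand-in training features
thr = train_thresholds(train)
print(encode(rng.normal(size=8), thr))              # 16-bit hash for 8 features
```

The learnt `thr` array is exactly the kind of side information the abstract flags as a leakage source: anyone holding it learns the feature distribution the hash was trained on.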
Abstract:
This paper presents a nonlinear gust-attenuation controller based on constrained neural-network (NN) theory. The controller aims to achieve sufficient stability and handling quality for a fixed-wing unmanned aerial system (UAS) in a gusty environment when control inputs are subject to constraints. The input constraints emulate situations where aircraft actuators fail, requiring the aircraft to be operated with fail-safe capability. The proposed controller provides gust attenuation and stabilizes the aircraft dynamics in a gusty environment. The flight controller is obtained by solving the Hamilton-Jacobi-Isaacs (HJI) equations using a policy iteration (PI) approach. Performance of the controller is evaluated using a high-fidelity six degree-of-freedom Shadow UAS model. Simulations show that our controller achieves significant performance improvements in a gusty environment, especially in angle-of-attack (AOA), pitch and pitch rate. Comparative studies with proportional-integral-derivative (PID) controllers confirm the efficiency of our controller and verify its suitability for integration into the design of flight control systems for forced landing of UASs.
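The abstract does not give the PI algorithm itself; as a hedged illustration of the iteration structure only, here is a sketch for the linear-quadratic zero-sum special case, where the HJI equation reduces to a game Riccati equation solvable by alternating policy evaluation (a Lyapunov solve) and policy improvement (gain updates). All matrices are made up; the paper's controller uses constrained NNs on nonlinear UAS dynamics.

```python
# Sketch: policy iteration for an HJI equation, linear-quadratic toy case.
# The gust enters as a worst-case disturbance with attenuation level gamma.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # hypothetical stable dynamics
B = np.array([[0.0], [1.0]])               # control input matrix
D = np.array([[0.0], [0.5]])               # gust/disturbance input matrix
Q, R, gamma = np.eye(2), np.eye(1), 5.0    # cost weights, attenuation level

K = np.zeros((1, 2))                       # initial stabilising control gain
L = np.zeros((1, 2))                       # initial disturbance gain
for _ in range(50):
    A_cl = A - B @ K + D @ L
    # Policy evaluation: solve A_cl' P + P A_cl = -(Q + K'RK - gamma^2 L'L)
    Q_total = Q + K.T @ R @ K - gamma**2 * L.T @ L
    P = solve_continuous_lyapunov(A_cl.T, -Q_total)
    # Policy improvement for the controller and the worst-case disturbance
    K = np.linalg.solve(R, B.T @ P)
    L = (1.0 / gamma**2) * D.T @ P
```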
Abstract:
The sheep (Ovis aries) is commonly used as a large animal model in skeletal research. Although the sheep genome has been sequenced, there are still only a limited number of annotated mRNA sequences in public databases. A complementary DNA (cDNA) library was constructed to provide a generic resource for further exploration of genes that are actively expressed in bone cells in sheep. It was anticipated that the cDNA library would provide molecular tools for further research into the processes of fracture repair and bone homeostasis, and add to the existing body of knowledge. One of the hallmarks of cDNA libraries has been the identification of novel genes, and from this library the full open reading frame of the gene C12orf29 was cloned and characterised. This gene codes for a protein of unknown function with a molecular weight of 37 kDa. A literature search showed that no previous studies had been conducted into the biological role of C12orf29, except for some bioinformatics studies that suggested a possible link with cancer. Phylogenetic analyses revealed that C12orf29 has an ancient pedigree, with a homologous gene found in some bacterial taxa. This implies that the gene was present in the last common eukaryotic ancestor, thought to have existed more than 2 billion years ago. This notion is further supported by the fact that the gene is found in taxa belonging to the two major eukaryotic branches, bikonts and unikonts. In the bikont supergroup a C12orf29-like gene was found in the single-celled protist Naegleria gruberi, whereas in the unikont supergroup, encompassing the metazoa, the gene is universal to all chordate and, therefore, vertebrate species. It appears to have been lost in the majority of cnidarian and protostome taxa; however, C12orf29-like genes have been found in the cnidarian freshwater hydra and the protostome Pacific oyster. The experimental data indicate that C12orf29 has a structural role in skeletal development and tissue homeostasis, whereas in silico analysis of the human C12orf29 promoter region suggests that its expression is potentially under the control of the NOTCH, WNT and TGF-β developmental pathways, as well as SOX9 and BAPX1; pathways that are all heavily involved in skeletogenesis. Taken together, this investigation provides strong evidence that C12orf29 has a very important role in the chordate body plan, in early skeletal development and cartilage homeostasis, and also a possible link with spina bifida in humans.
Abstract:
Reliability analysis is crucial to reducing the unexpected downtime and severe failures of engineering assets under ever-tightening maintenance budgets. Hazard-based reliability methods are of particular interest because the hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution is accurate for the population concerned, and that the assumed form of the covariate effects on the hazard is correct. These two assumptions may be difficult to satisfy and therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations imposed by the two assumptions of statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis. Involving condition data, or covariates, is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality, due to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research therefore investigates the incomplete-covariates problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and their effects on reliability analysis results. Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainties into reliability analysis, the developed NNHMs are extended to handle incomplete covariates as an integral part of the model. The extended versions of the NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical approaches to handling incomplete covariates. Another problem in reliability analysis is that the future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. The commonly used covariate extrapolation methods are thus unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, this research projects covariate states. The estimated covariate states and the unknown covariate values in future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill demonstrates that this new multi-step reliability analysis procedure is able to generate more accurate results.
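As a sketch of what an NNHM might look like (the thesis's exact architecture is not given in the abstract), the following model maps asset age and covariates through a small network with a softplus output, so the predicted hazard is non-negative without assuming either a baseline distribution or a covariate link function. Layer sizes and the toy data are illustrative assumptions.

```python
# Sketch: a neural-network hazard model (NNHM) in the spirit described above --
# the hazard is a free nonlinear function of age and covariates rather than a
# statistical baseline multiplied by a covariate term.
import torch
import torch.nn as nn

class NNHazardModel(nn.Module):
    def __init__(self, n_covariates, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + n_covariates, hidden),  # input: age + covariates
            nn.Tanh(),
            nn.Linear(hidden, 1),
            nn.Softplus(),                        # hazard must be positive
        )

    def forward(self, age, covariates):
        return self.net(torch.cat([age, covariates], dim=1))

model = NNHazardModel(n_covariates=3)
age = torch.rand(8, 1) * 1000.0                   # operating hours (toy data)
cov = torch.randn(8, 3)                           # e.g. vibration, temperature
hazard = model(age, cov)                          # instantaneous failure risk
```

Handling incomplete covariates, as the extended NNHMs do, would additionally require an imputation or state-projection step in front of this network.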
Abstract:
Stereo visual odometry has received little investigation in high-altitude applications due to the generally poor performance of rigid stereo rigs at extremely small baseline-to-depth ratios. Without additional sensing, metric scale is considered lost and odometry is seen as effective only for monocular perspectives. This paper presents a novel modification to stereo-based visual odometry that allows accurate, metric pose estimation from high altitudes, even in the presence of poor calibration and without additional sensor inputs. By relaxing the (typically fixed) stereo transform during bundle adjustment and reducing the dependence on the fixed geometry for triangulation, metrically scaled visual odometry can be obtained in situations where high altitude and structural deformation from vibration would cause traditional algorithms to fail. This is achieved through the use of a novel constrained bundle adjustment routine and an accurately scaled pose initializer. We present visual odometry results demonstrating the technique on a short-baseline stereo pair inside a fixed-wing UAV flying at significant height (~30-100 m).
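A minimal sketch of the central idea, under illustrative assumptions (toy pinhole model, translation-only stereo extrinsic, made-up noise and weights, and not the paper's actual routine): the stereo baseline enters the bundle adjustment as a free parameter tied to the calibrated value only through a soft prior, rather than being held fixed.

```python
# Sketch: bundle adjustment with a relaxed stereo transform. The baseline is
# optimised alongside the structure, with a soft prior pulling it toward the
# calibrated value so metric scale remains observable.
import numpy as np
from scipy.optimize import least_squares

def reproject(points, pose_t, fx=500.0):
    """Toy pinhole projection of 3-D points after a pure-translation pose."""
    p = points + pose_t
    return fx * p[:, :2] / p[:, 2:3]

def residuals(x, obs_left, obs_right, baseline_calib, prior_weight=10.0):
    points = x[:-1].reshape(-1, 3)       # structure parameters
    baseline = x[-1]                     # relaxed stereo baseline (metres)
    r_left = (reproject(points, np.zeros(3)) - obs_left).ravel()
    r_right = (reproject(points, np.array([-baseline, 0, 0])) - obs_right).ravel()
    r_prior = prior_weight * (baseline - baseline_calib)  # soft calibration prior
    return np.concatenate([r_left, r_right, [r_prior]])

rng = np.random.default_rng(2)
true_pts = rng.uniform([-5, -5, 40], [5, 5, 80], size=(20, 3))  # high-altitude depths
obs_l = reproject(true_pts, np.zeros(3)) + rng.normal(0, 0.5, (20, 2))
obs_r = reproject(true_pts, np.array([-0.24, 0, 0])) + rng.normal(0, 0.5, (20, 2))
x0 = np.concatenate([true_pts.ravel() * 1.05, [0.25]])          # perturbed start
sol = least_squares(residuals, x0, args=(obs_l, obs_r, 0.25))
print(f"refined baseline: {sol.x[-1]:.4f} m")
```

The prior weight controls the trade-off: a hard constraint reproduces the brittle fixed-rig behaviour, while too weak a prior lets scale drift away.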
Abstract:
Most security models for authenticated key exchange (AKE) do not explicitly model the associated certification system, which includes the certification authority (CA) and its behaviour. However, there are several well-known and realistic attacks on AKE protocols which exploit various forms of malicious key registration and which therefore lie outside the scope of these models. We provide the first systematic analysis of AKE security incorporating certification systems (ASICS). We define a family of security models that, in addition to allowing different sets of standard AKE adversary queries, also permit the adversary to register arbitrary bitstrings as keys. For this model family we prove generic results that enable the design and verification of protocols that achieve security even if some keys have been produced maliciously. Our approach is applicable to a wide range of models and protocols; as a concrete illustration of its power, we apply it to the CMQV protocol in the natural strengthening of the eCK model to the ASICS setting.
Abstract:
After attending this presentation, attendees will gain awareness of: (1) the error and uncertainty associated with applying the Suchey-Brooks (S-B) method of age estimation from the pubic symphysis to a contemporary Australian population; (2) the implications of sexual dimorphism and bilateral asymmetry of the pubic symphysis through preliminary geometric morphometric assessment; and (3) the value of three-dimensional (3D) autopsy data acquisition for creating forensic anthropological standards. This presentation will impact the forensic science community by demonstrating that, in the absence of demographically sound skeletal collections, post-mortem autopsy data provide an exciting platform for constructing large contemporary ‘virtual osteological libraries’ on which forensic anthropological research can be conducted for Australian individuals. More specifically, this study assesses the applicability and accuracy of the S-B method for a contemporary adult population in Queensland, Australia, and, using a geometric morphometric approach, provides insight into the age-related degeneration of the pubic symphysis. Despite the prominent use of the Suchey-Brooks (1990) method of age estimation in forensic anthropological practice, it is subject to intrinsic limitations, with reports of differential inter-population error rates between geographical locations [1-4]. Australian forensic anthropology is constrained by a paucity of population-specific standards due to a lack of repositories of documented skeletons. Consequently, in Australian casework proceedings, standards constructed from predominantly American reference samples are applied to establish a biological profile. In the global era of terrorism and natural disasters, more specific population standards are required to improve the efficiency of medico-legal death investigation in Queensland. The sample comprises multi-slice computed tomography (MSCT) scans of the pubic symphysis (slice thickness: 0.5 mm, overlap: 0.1 mm) of 195 individuals of Caucasian ethnicity aged 15-70 years. Volume rendering reconstruction of the symphyseal surface was conducted in Amira® (v.4.1) and quantitative analyses in Rapidform® XOS. The sample was divided into ten-year age subsets (e.g., 15-24) with a final subset of 65-70 years. Error with respect to the method’s assigned means was analysed on the basis of bias (directionality of error), inaccuracy (magnitude of error) and percentage of correct classification of left and right symphyseal surfaces. Morphometric variables, including surface area, circumference, and maximum height and width of the symphyseal surface, together with micro-architectural assessment of cortical and trabecular bone composition, were quantified using novel automated engineering software capabilities. The results of this study demonstrated correct age classification, using the mean and standard deviations of each phase of the S-B method, of 80.02% and 86.18% in Australian males and females, respectively. Application of the S-B method resulted in positive biases and mean inaccuracies of 7.24 (±6.56) years for individuals less than 55 years of age, compared to negative biases and mean inaccuracies of 5.89 (±3.90) years for individuals greater than 55 years of age. Statistically significant differences between chronological and S-B mean age were demonstrated in 83.33% and 50% of the six age subsets in males and females, respectively.
Asymmetry of the pubic symphysis was a frequent phenomenon, with 53.33% of the Queensland population exhibiting statistically significant (χ², p<0.01) differential phase classification of the left and right surfaces of the same individual. Directionality was found in the bilateral asymmetry, with the right symphyseal faces appearing slightly older on average and providing more accurate estimates using the S-B method [5]. Morphometric analysis verified these findings, with the left surface exhibiting significantly greater circumference and surface area than the right (p<0.05). Morphometric analysis also demonstrated an increase in the maximum height and width of the surface with age, with the most significant changes (p<0.05) occurring between the 25-34 and 55-64 year age subsets. These differences may be attributed to hormonal components linked to menopause in females and a reduction in testosterone in males. Micro-architectural analysis demonstrated degradation of cortical composition with age, with differential bone resorption between the medial, ventral and dorsal surfaces of the pubic symphysis. This study recommends that the S-B method be applied with caution in medico-legal death investigations of unknown skeletal remains in Queensland. Age estimation will always be accompanied by error; this study therefore demonstrates the potential of quantitative morphometric modelling of age-related changes of the pubic symphysis as a tool for methodological refinement, providing a rigorous and robust assessment that removes the subjectivity associated with current pelvic aging methods.
Abstract:
To enhance the therapeutic efficacy and reduce the adverse effects of traditional Chinese medicine, practitioners often prescribe combinations of plant species and/or minerals, called formulae. Unfortunately, the working mechanisms of most of these compounds are difficult to determine and thus remain unknown. In an attempt to address the benefits of formulae using current biomedical approaches, we analyzed the components of Yinchenhao Tang, a classical formula that has been shown to be clinically effective for treating hepatic injury syndrome. The three principal components of Yinchenhao Tang are Artemisia annua L., Gardenia jasminoides Ellis, and Rheum palmatum L., whose major active ingredients are 6,7-dimethylesculetin (D), geniposide (G), and rhein (R), respectively. To determine the mechanisms underlying the efficacy of this formula, we conducted a systematic analysis of the therapeutic effects of the DGR compound using immunohistochemistry, biochemistry, metabolomics, and proteomics. Here, we report that the DGR combination exerts a more robust therapeutic effect than any one or two of the three individual compounds by hitting multiple targets in a rat model of hepatic injury. Thus, DGR synergistically causes intensified dynamic changes in metabolic biomarkers, regulates molecular networks through target proteins, has a synergistic/additive effect, and activates both intrinsic and extrinsic pathways.
Abstract:
Performance in the construction industry is increasingly scrutinized as a result of delays, cost overruns and the poor quality of the industry’s products and services; disputes, conflicts and mismatches of objectives among participants are increasingly contributory factors. Performance measurement approaches have been developed to overcome these problems. However, these approaches focus primarily on objective measures to the exclusion of subjective measures, particularly those concerning contractor satisfaction (Co-S). The contractor satisfaction model (CoSMo) developed here is intended to rectify that situation. Data derived from a questionnaire survey of 75 large contractors in Malaysia in respect of a key project are analysed to identify participant factors and the strength of their relationships with the Co-S dimensions. The results are presented in the form of eight regression equations. The outcome is a tool for use by project participants to better understand how they, and the project, affect contractor satisfaction. The developed model sheds some light on a hitherto unknown aspect of construction management by raising awareness of the importance of major Malaysian construction contractors’ needs in the execution of successful projects.