845 results for outsourcing (make or buy)
Abstract:
The material presented in this thesis may be viewed as comprising two key parts: the first concerns batch cryptography specifically, whilst the second deals with how this form of cryptography may be applied to security-related applications such as electronic cash, in order to improve the efficiency of the protocols. The objective of batch cryptography is to devise more efficient primitive cryptographic protocols. In general, these primitives make use of some property such as homomorphism to perform a computationally expensive operation on a collective input set. The idea is to amortise an expensive operation, such as modular exponentiation, over the input. Most of the research in this field has concentrated on its employment as a batch verifier of digital signatures. It is shown that several new attacks may be launched against these published schemes, exposing some weaknesses. Another common use of batch cryptography is the simultaneous generation of digital signatures. There is significantly less previous work in this area, and the present schemes have only limited use in practical applications. Several new batch signature schemes are introduced that improve upon the existing techniques, and some practical uses are illustrated. Electronic cash is a technology that demands complex protocols in order to furnish several security properties. These typically include anonymity, traceability of a double spender, and off-line payment features. Presently, the most efficient schemes make use of coin divisibility to withdraw one large financial amount that may be progressively spent with one or more merchants. Several new cash schemes are introduced here that make use of batch cryptography to improve the withdrawal, payment, and deposit of electronic coins. The devised schemes apply both the batch signature and batch verification techniques introduced, demonstrating improved performance over contemporary divisibility-based structures. The solutions also provide an alternative paradigm for the construction of electronic cash systems. Whilst electronic cash is used as the vehicle for demonstrating the relevance of batch cryptography to security-related applications, the applicability of the techniques introduced extends well beyond this.
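As a rough illustration of the amortisation idea behind batch verification (a minimal sketch of the general principle, not of the specific schemes devised in the thesis), the Python fragment below batches "textbook" RSA verifications by exploiting the multiplicative homomorphism, replacing one modular exponentiation per signature with one per batch. The toy key sizes and the absence of hashing, padding and per-signature random blinding are deliberate simplifications; such omissions are exactly what the batch-substitution attacks mentioned above exploit.

```python
# Minimal sketch of naive batch verification for "textbook" RSA signatures.
# Parameters are hypothetical and far too small for real use; a practical
# scheme would verify padded hashes and add random per-signature exponents.

from math import prod

# Toy RSA key (illustrative values only).
p, q = 61, 53
n = p * q                              # modulus
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

def sign(m: int) -> int:
    return pow(m, d, n)

def verify_one(m: int, s: int) -> bool:
    # One modular exponentiation per signature.
    return pow(s, e, n) == m % n

def verify_batch(pairs) -> bool:
    # One modular exponentiation for the whole batch, using the
    # homomorphism (prod s_i)^e = prod(s_i^e) = prod m_i (mod n).
    s_prod = prod(s for _, s in pairs) % n
    m_prod = prod(m for m, _ in pairs) % n
    return pow(s_prod, e, n) == m_prod

msgs = [42, 1001, 77, 500]
sigs = [(m, sign(m)) for m in msgs]
assert all(verify_one(m, s) for m, s in sigs)
assert verify_batch(sigs)
```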
Abstract:
This study, to elucidate the role of des(1-3)IGF-I in the maturation of IGF-I, used two strategies. The first was to detect the presence of enzymes in tissues which would act on IGF-I to produce des(1-3)IGF-I, and the second was to detect the potential products of such enzymic activity, namely Gly-Pro-Glu (GPE), Gly-Pro (GP) and des(1-3)IGF-I. No neutral tripeptidyl peptidase (TPP II), which would release the tripeptide GPE from IGF-I, was detected in brain, urine, or red or white blood cells. The TPP-like activity which was detected was attributed to the combined action of a dipeptidyl peptidase (DPP IV) and an aminopeptidase (APA). A true TPP II was, however, detected in platelets. Two purified TPP II enzymes were investigated but they did not release GPE from IGF-I under a variety of conditions. Consequently, TPP II seemed unlikely to participate in the formation of des(1-3)IGF-I. In contrast, an acidic tripeptidyl peptidase activity (TPP I) was detected in brain and colostrum, the former with a pH optimum of 4.5 and the latter 3.8. It seems likely that such an enzyme would participate in the formation of des(1-3)IGF-I in these tissues in vitro, i.e. that des(1-3)IGF-I may have been produced as an artifact in the isolation of IGF-I from brain and colostrum under acidic conditions. This contrasts with suggestions of an in vivo role for des(1-3)IGF-I, as reported by others. The activity of a dipeptidyl peptidase IV (DPP IV) from urine, which should release the dipeptide GP from IGF-I, was assessed under a variety of conditions and with a variety of additives and potential enzyme stimulants, but there was no release of GP. The DPP IV also exhibited a transferase activity with synthetic substrates in the presence of dipeptides, at lower concentrations than previously reported for other acceptors or other proteolytic enzymes. In addition, a low concentration of a product, possibly the tetrapeptide Gly-Pro-Gly-Leu, was detected with the action of the enzyme on IGF-I in the presence of the dipeptide Gly-Leu. As part of attempts to detect tissue production of des(1-3)IGF-I, a monoclonal antibody (MAb) directed towards the GPE- end of IGF-I was produced by immunisation with a 10-mer covalently attached to a carrier protein. By the use of indirect ELISA and inhibitor studies, the MAb was shown to selectively recognise peptides with an N-terminal GPE- sequence, and was applied to the indirect detection of des(1-3)IGF-I. The concentration of GPE in brain, measured by mass spectrometry (MS), was low, and the concentration of total IGF-I (measured by ELISA with a commercial polyclonal antibody [PAb]) was 40 times higher at 50 nmol/kg. This, also, was not consistent with the action of a tripeptidyl peptidase in brain that converted all IGF-I to des(1-3)IGF-I plus GPE. Contrasting ELISA results, using the MAb prepared in this study, suggest an even higher concentration of intact IGF-I of 150 nmol/kg. This would argue against the presence of any des(1-3)IGF-I in brain, but in turn indicates either the presence of other substances containing a GPE amino-terminus or another cross-reacting epitope. Although the results of the specificity studies reported in Chapter 5 would make this latter possibility seem unlikely, it cannot be completely excluded. No GP was detected in brain by MS.
No GPE was detected in colostrum by capillary electrophoresis (CE), but interference from extraneous substances reduced the detectability of GPE by CE, and this approach would require further prior purification and concentration steps. A molecule with a migration time equal to that of the peptide GP was detected in colostrum by CE, but its concentration (~10 μmol/L) was much higher than the IGF-I concentration measured by radio-immunoassay using a PAb (80 nmol/L) or using a MAb (300-400 nmol/L). A DPP IV enzyme was detected in colostrum and this could account for the GP, derived from substrates other than IGF-I. Based on the differential results of the two antibody assays, there was no indication of the presence of des(1-3)IGF-I in brain or colostrum. In the absence of any enzyme activity directed towards the amino terminus of IGF-I and the absence of any potential products, IGF-I therefore does not appear to "mature" via des(1-3)IGF-I in the brain, nor in the neutral colostrum. In spite of these results, which indicate the absence of an enzymic attack on IGF-I and the absence of the expected products in tissues, the possibility that the conversion of IGF-I may occur in neutral conditions in limited amounts cannot be ruled out. It remains possible that in the extracellular environment of the membrane, a complex interaction of IGF-I, binding protein, aminopeptidase(s) and receptor produces des(1-3)IGF-I as a transient product which is bound to the receptor and internalised.
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded. These may represent a zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces the background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time.
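For orientation, the sketch below illustrates the kind of computation an IMCS/path-sampling estimate involves for a simple autologistic model: the mean canonical statistic paired with the interaction parameter is estimated by Gibbs sampling at a grid of parameter values and then numerically integrated, following the path-sampling identity of Gelman & Meng (1998). The two-parameter model, lattice size and Monte Carlo settings are illustrative assumptions, not the thesis's exact three-parameter formulation.

```python
# Hedged sketch of a path-sampling estimate of a log normalizing-constant
# ratio for a simple autologistic model on a binary lattice.
#
# Model (z in {0,1}): p(z | b0, b1) proportional to
#   exp( b0 * sum_i z_i + b1 * sum_{i~j} z_i z_j ).
# Path sampling (Gelman & Meng, 1998), at fixed b0:
#   log C(b1_to) - log C(b1_from) = integral over b1 of E_{b1}[ S(z) ] db1,
# where S(z) = sum_{i~j} z_i z_j is the canonical statistic paired with b1.

import numpy as np

rng = np.random.default_rng(0)

def neighbour_sum(z, i, j):
    """Sum of the four nearest-neighbour values of site (i, j)."""
    n, m = z.shape
    s = 0.0
    if i > 0:
        s += z[i - 1, j]
    if i < n - 1:
        s += z[i + 1, j]
    if j > 0:
        s += z[i, j - 1]
    if j < m - 1:
        s += z[i, j + 1]
    return s

def gibbs_sweep(z, b0, b1):
    """One full-conditional Gibbs update of every site."""
    n, m = z.shape
    for i in range(n):
        for j in range(m):
            eta = b0 + b1 * neighbour_sum(z, i, j)   # conditional log-odds
            z[i, j] = rng.random() < 1.0 / (1.0 + np.exp(-eta))

def pair_statistic(z):
    """Canonical statistic S(z): number of (1,1) nearest-neighbour pairs."""
    return float((z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum())

def log_nc_ratio(b0, b1_from, b1_to, shape=(15, 15), grid=9, burn=100, draws=100):
    """Integrate the Monte Carlo mean of S(z) over b1 (trapezoid rule)."""
    b1_grid = np.linspace(b1_from, b1_to, grid)
    means = []
    for b1 in b1_grid:
        z = (rng.random(shape) < 0.5).astype(float)
        for _ in range(burn):
            gibbs_sweep(z, b0, b1)          # burn-in
        stats = []
        for _ in range(draws):
            gibbs_sweep(z, b0, b1)
            stats.append(pair_statistic(z))
        means.append(np.mean(stats))
    means = np.array(means)
    widths = np.diff(b1_grid)
    return float(np.sum(widths * (means[:-1] + means[1:]) / 2.0))

print(log_nc_ratio(b0=-0.5, b1_from=0.0, b1_to=0.4))
```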
Abstract:
Continuum mechanics provides a mathematical framework for modelling the physical stresses experienced by a material. Recent studies show that physical stresses play an important role in a wide variety of biological processes, including dermal wound healing, soft tissue growth and morphogenesis. Thus, continuum mechanics is a useful mathematical tool for modelling a range of biological phenomena. Unfortunately, classical continuum mechanics is of limited use in biomechanical problems. As cells refashion the fibres that make up a soft tissue, they sometimes alter the tissue's fundamental mechanical structure. Advanced mathematical techniques are needed in order to accurately describe this sort of biological 'plasticity'. A number of such techniques have been proposed by previous researchers. However, models that incorporate biological plasticity tend to be very complicated. Furthermore, these models are often difficult to apply and/or interpret, making them of limited practical use. One alternative approach is to ignore biological plasticity and use classical continuum mechanics. For example, most mechanochemical models of dermal wound healing assume that the skin behaves as a linear viscoelastic solid. Our analysis indicates that this assumption leads to physically unrealistic results. In this thesis we present a novel and practical approach to modelling biological plasticity. Our principal aim is to combine the simplicity of classical linear models with the sophistication of plasticity theory. To achieve this, we perform a careful mathematical analysis of the concept of a 'zero stress state'. This leads us to a formal definition of strain that is appropriate for materials that undergo internal remodelling. Next, we consider the evolution of the zero stress state over time. We develop a novel theory of 'morphoelasticity' that can be used to describe how the zero stress state changes in response to growth and remodelling. Importantly, our work yields an intuitive and internally consistent way of modelling anisotropic growth. Furthermore, we are able to use our theory of morphoelasticity to develop evolution equations for elastic strain. We also present some applications of our theory. For example, we show that morphoelasticity can be used to obtain a constitutive law for a Maxwell viscoelastic fluid that is valid at large deformation gradients. Similarly, we analyse a morphoelastic model of the stress-dependent growth of a tumour spheroid. This work leads to the prediction that a tumour spheroid will always be in a state of radial compression and circumferential tension. Finally, we conclude by presenting a novel mechanochemical model of dermal wound healing that takes into account the plasticity of the healing skin.
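For a concrete anchor, one standard way of formalising a changing zero stress state is shown below purely for orientation; the thesis develops its own strain-based evolution equations, which need not coincide with this form. The deformation gradient is split multiplicatively into an elastic part and a growth/remodelling part, with stress depending only on the elastic part.

```latex
% Multiplicative decomposition (shown for orientation only; symbols generic):
\[
  \mathbf{F} \;=\; \mathbf{F}_e\,\mathbf{F}_g ,
  \qquad
  \boldsymbol{\sigma} \;=\; \boldsymbol{\sigma}(\mathbf{F}_e),
  \qquad
  \dot{\mathbf{F}}_g\,\mathbf{F}_g^{-1} \;=\; \mathbf{G}(\boldsymbol{\sigma},\dots),
\]
% where F_g maps the reference configuration to the evolving zero stress
% state and G is a constitutive law for growth and remodelling.
```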
Abstract:
Every day we hear someone complain that this or that patent should not have been granted. People complain that the patent system is now a threat to existing business and innovation because the patent office grants with alarming regularity patents for inventions that are neither novel nor non-obvious. People argue that the patent office cannot keep up with the job of examining the backlog of hundreds of thousands of patents and that, even if it could, the large volumes of prior art literature that need to be considered each time a patent application is received make the decision as to whether a patent should be granted or not a treacherous one.
Abstract:
Expert knowledge is valuable in many modelling endeavours, particularly where data is not extensive or sufficiently robust. In Bayesian statistics, expert opinion may be formulated as informative priors, to provide an honest reflection of the current state of knowledge, before updating this with new information. Technology is increasingly being exploited to help support the process of eliciting such information. This paper reviews the benefits that have been gained from utilizing technology in this way. These benefits can be structured within a six-step elicitation design framework proposed recently (Low Choy et al., 2009). We assume that the purpose of elicitation is to formulate a Bayesian statistical prior, either to provide a standalone expert-defined model, or for updating with new data within a Bayesian analysis. We also assume that the model has been pre-specified before selecting the software. In this case, technology has the most to offer in targeting what experts know (E2), eliciting and encoding expert opinions (E4), enhancing accuracy (E5), and providing an effective and efficient protocol (E6). Benefits include:
- providing an environment with familiar nuances (to make the expert comfortable) where experts can explore their knowledge from various perspectives (E2);
- automating tedious or repetitive tasks, thereby minimizing calculation errors, as well as encouraging interaction between elicitors and experts (E5);
- cognitive gains by educating users, enabling instant feedback (E2, E4-E5), and providing alternative methods of communicating assessments and feedback information, since experts think and learn differently; and
- ensuring a repeatable and transparent protocol is used (E6).
Abstract:
Conifers are resistant to attack from a large number of potential herbivores or pathogens. Previous molecular and biochemical characterization of selected conifer defence systems supports a model of multigenic, constitutive and induced defences that act on invading insects via physical, chemical, biochemical or ecological (multitrophic) mechanisms. However, the genomic foundation of the complex defence and resistance mechanisms of conifers is largely unknown. As part of a genomics strategy to characterize inducible defences and possible resistance mechanisms of conifers against insect herbivory, we developed a cDNA microarray building upon a new spruce (Picea spp.) expressed sequence tag resource. This first-generation spruce cDNA microarray contains 9720 cDNA elements representing c. 5500 unique genes. We used this array to monitor gene expression in Sitka spruce (Picea sitchensis) bark in response to herbivory by white pine weevils (Pissodes strobi, Curculionidae) or wounding, and in young shoot tips in response to western spruce budworm (Choristoneura occidentalis, Lepidoptera) feeding. Weevils are stem-boring insects that feed on phloem, while budworms are foliage-feeding larvae that consume needles and young shoot tips. Both insect species and the wounding treatment caused substantial changes of the host plant transcriptome, detected in each case by differential gene expression of several thousand array elements at 1 or 2 d after the onset of treatment. Overall, there was considerable overlap among the differentially expressed gene sets from these three stress treatments. Functional classification of the induced transcripts revealed genes with roles in general plant defence, octadecanoid and ethylene signalling, transport, secondary metabolism, and transcriptional regulation. Several genes involved in primary metabolic processes such as photosynthesis were down-regulated upon insect feeding or wounding, fitting with the concept of dynamic resource allocation in plant defence. Refined expression analysis using gene-specific primers and real-time PCR for selected transcripts was in agreement with microarray results for most genes tested. This study provides the first large-scale survey of insect-induced defence transcripts in a gymnosperm and provides a platform for functional investigation of plant-insect interactions in spruce. Induction of spruce genes of octadecanoid and ethylene signalling, terpenoid biosynthesis, and phenolic secondary metabolism is discussed in more detail.
Abstract:
Establishing a nationwide Electronic Health Record system has become a primary objective for many countries around the world, including Australia, in order to improve the quality of healthcare while at the same time decreasing its cost. Doing so will require federating the large number of patient data repositories currently in use throughout the country. However, implementation of EHR systems is being hindered by several obstacles, among them concerns about data privacy and trustworthiness. Current IT solutions fail to satisfy patients’ privacy desires and do not provide a trustworthiness measure for medical data. This thesis starts with the observation that existing EHR system proposals suffer from six serious shortcomings that affect patients’ privacy and safety, and medical practitioners’ trust in EHR data: accuracy and privacy concerns over linking patients’ existing medical records; the inability of patients to have control over who accesses their private data; the inability to protect against inferences about patients’ sensitive data; the lack of a mechanism for evaluating the trustworthiness of medical data; and the failure of current healthcare workflow processes to capture and enforce patient’s privacy desires. Following an action research method, this thesis addresses the above shortcomings by firstly proposing an architecture for linking electronic medical records in an accurate and private way where patients are given control over what information can be revealed about them. This is accomplished by extending the structure and protocols introduced in federated identity management to link a patient’s EHR to his existing medical records by using pseudonym identifiers. Secondly, a privacy-aware access control model is developed to satisfy patients’ privacy requirements. The model is developed by integrating three standard access control models in a way that gives patients access control over their private data and ensures that legitimate uses of EHRs are not hindered. Thirdly, a probabilistic approach for detecting and restricting inference channels resulting from publicly-available medical data is developed to guard against indirect accesses to a patient’s private data. This approach is based upon a Bayesian network and the causal probabilistic relations that exist between medical data fields. The resulting definitions and algorithms show how an inference channel can be detected and restricted to satisfy patients’ expressed privacy goals. Fourthly, a medical data trustworthiness assessment model is developed to evaluate the quality of medical data by assessing the trustworthiness of its sources (e.g. a healthcare provider or medical practitioner). In this model, Beta and Dirichlet reputation systems are used to collect reputation scores about medical data sources and these are used to compute the trustworthiness of medical data via subjective logic. Finally, an extension is made to healthcare workflow management processes to capture and enforce patients’ privacy policies. This is accomplished by developing a conceptual model that introduces new workflow notions to make the workflow management system aware of a patient’s privacy requirements. These extensions are then implemented in the YAWL workflow management system.
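As a small illustration of the reputation-to-trust step described above, the Python fragment below sketches the standard Beta-reputation / subjective-logic mapping in the style of Jøsang; the variable names, base rate and scoring details are illustrative assumptions, not the thesis's exact trustworthiness model. It turns positive/negative ratings about a medical data source into a belief/disbelief/uncertainty opinion and a scalar trust score.

```python
# Hedged sketch of the Beta-reputation to subjective-logic mapping.
# Names and the base rate are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    @property
    def expectation(self) -> float:
        # Probability expectation, used here as a scalar trust score.
        return self.belief + self.base_rate * self.uncertainty

def beta_opinion(r: float, s: float, base_rate: float = 0.5) -> Opinion:
    """Map r positive and s negative ratings about a data source
    (e.g. a healthcare provider) to a subjective-logic opinion."""
    k = r + s + 2.0   # the +2 reflects the uniform Beta(1, 1) prior
    return Opinion(r / k, s / k, 2.0 / k, base_rate)

# A source with 18 positive and 2 negative ratings:
op = beta_opinion(18, 2)
print(round(op.expectation, 3))   # 0.864: fairly trustworthy, some uncertainty
```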
Abstract:
With the increase in the level of global warming, renewable energy based distributed generators (DGs) will increasingly play a dominant role in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain considerable momentum in the near future. A microgrid consists of clusters of loads and distributed generators that operate as a single controllable system. The interconnection of DGs to the utility grid through power electronic converters has raised concerns about safe operation and protection of the equipment. Many innovative control techniques have been used for enhancing the stability of the microgrid as well as for proper load sharing. The most common method is the use of droop characteristics for decentralized load sharing. Parallel converters have been controlled to deliver desired real power (and reactive power) to the system. Local signals are used as feedback to control the converters, since in a real system the distance between the converters may make inter-communication impractical. Real and reactive power sharing can be achieved by controlling two independent quantities: the frequency and the fundamental voltage magnitude. In this thesis, an angle droop controller is proposed to share power amongst converter-interfaced DGs in a microgrid. As the angle of the output voltage can be changed instantaneously in a voltage source converter (VSC), controlling the angle to control the real power is always beneficial for quick attainment of steady state. Thus in converter-based DGs, load sharing can be performed by drooping the converter output voltage magnitude and its angle instead of the frequency. The angle control results in much less frequency variation than frequency droop. An enhanced frequency droop controller is proposed for better dynamic response and smooth transition between grid-connected and islanded modes of operation. A modular controller structure with a modified control loop is proposed for better load sharing between the parallel connected converters in a distributed generation system. Moreover, a method for smooth transition between grid-connected and islanded modes is proposed. Power-quality-enhanced operation of a microgrid in the presence of unbalanced and non-linear loads is also addressed, in which the DGs act as compensators. The compensator can perform load balancing, harmonic compensation and reactive power control while supplying real power to the grid. A frequency and voltage isolation technique between the microgrid and the utility is proposed by using a back-to-back converter. As the utility and the microgrid are totally isolated, voltage or frequency fluctuations on the utility side do not affect the microgrid loads and vice versa. Another advantage of this scheme is that a bidirectional regulated power flow can be achieved by the back-to-back converter structure. For accurate load sharing, the droop gains have to be high, which has the potential of making the system unstable. Therefore the choice of droop gains is often a tradeoff between power sharing and stability. To improve this situation, a supplementary droop controller is proposed. A small signal model of the system is developed, based on which the parameters of the supplementary controller are designed. Two methods are proposed for load sharing in an autonomous microgrid in a rural network with high R/X ratio lines.
The first method proposes power sharing without any communication between the DGs. The feedback quantities and the gain matrices are transformed with a transformation matrix based on the line R/X ratio. The second method involves minimal communication among the DGs. The converter output voltage angle reference is modified based on the active and reactive power flow in the line connected at the point of common coupling (PCC). It is shown that a more economical and proper power sharing solution is possible with web-based communication of the power flow quantities. All the proposed methods are verified through PSCAD simulations. The converters are modeled with IGBT switches and anti-parallel diodes with associated snubber circuits. All the rotating machines are modeled in detail, including their dynamics.
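For reference, angle and voltage droop characteristics of the kind described above are commonly written in the following generic form; the symbols and sign conventions here are illustrative, not quoted from the thesis.

```latex
% Generic angle/voltage droop for converter i (illustrative notation):
\[
  \delta_i \;=\; \delta_i^{\ast} \;-\; m_i\,\bigl(P_i - P_i^{\ast}\bigr),
  \qquad
  |V_i| \;=\; |V_i^{\ast}| \;-\; n_i\,\bigl(Q_i - Q_i^{\ast}\bigr),
\]
% where delta_i and |V_i| are the angle and magnitude of the converter output
% voltage, P_i and Q_i the measured real and reactive power, starred
% quantities the rated set points, and m_i, n_i the droop gains whose size
% trades off sharing accuracy against stability, as discussed above.
```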
Abstract:
This paper reports on statements from Professional Development participants who were asked to comment on NAPLAN. The participants were involved in a project designed by the YuMi Deadly Centre (YDC) for implementation in 25 Queensland schools to enhance the teaching and learning of mathematics for Aboriginal and Torres Strait Islander students and low-SES students. Using an action research framework and a survey questionnaire, the preliminary data obtained from participating principals is mixed, with statements indicating that NAPLAN is a high priority for some schools while others indicated that it does not “tell” the whole story of student learning.
Abstract:
Increasingly, software is no longer developed as a single system, but rather as a smart combination of so-called software services. Each of these provides an independent, specific and relatively small piece of functionality, which is typically accessible through the Internet from internal or external service providers. To the best of our knowledge, there are no standards or models that describe the sourcing process for these software-based services (SBS). We identify the sourcing requirements for SBS and associate the key characteristics of SBS with the sourcing requirements introduced. Furthermore, we investigate the sourcing of SBS in the light of related work in the fields of classical procurement, business process outsourcing, and information systems sourcing. Based on this analysis, we conclude that the direct adoption of these approaches for SBS is not feasible and that new approaches are required for sourcing SBS.
Abstract:
Increasingly, software is no longer developed as a single system, but rather as a smart combination of so-called software services. Each of these provides an independent, specific and relatively small piece of functionality, which is typically accessible through the Internet from internal or external service providers. There are no standards or models that describe the sourcing process for these software-based services (SBS). The authors identify the sourcing requirements for SBS and associate the key characteristics of SBS with the sourcing requirements introduced. Furthermore, this paper investigates the sourcing of SBS in the light of related work in the fields of classical procurement, business process outsourcing, and information systems sourcing. Based on this analysis, the authors conclude that the direct adoption of these approaches for SBS is not feasible and that new approaches are required for sourcing SBS.