141 results for Exponential Splines
Abstract:
Photocatalytic water splitting is a process which could potentially lead to commercially viable solar hydrogen production. This thesis uses an engineering perspective to investigate the technology. The effect of light intensity and temperature on photocatalytic water splitting was examined to evaluate the prospect of using solar concentration to increase the feasibility of the process. P25 TiO₂ films deposited on conducting glass were used as photocatalyst electrodes and coupled with platinum electrodes, also deposited on conducting glass. These films were used to form a photocatalysis cell and illuminated with a xenon arc lamp to simulate solar light at intensities up to 50 suns. They were also tested at temperatures between 20°C and 100°C. The reaction demonstrated a sub-linear relationship with intensity: photocurrent was proportional to light intensity raised to the power of 0.627. Increasing temperature produced an exponential increase in reaction rate, which followed an Arrhenius relationship with an activation energy of 10.3 kJ mol⁻¹ and a pre-exponential factor of approximately 8.7×10³. These results then formed the basis of a mathematical model which extrapolated beyond the range of the experimental tests. This model shows that the loss of efficiency from performing the reaction under high light intensity is offset by the increased reaction rate and efficiency from the associated temperature increase. This is an important finding for photocatalytic water splitting. It will direct future work in system design and materials research and may provide an avenue for the commercialisation of this technology.
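The intensity and temperature dependences reported in this abstract can be combined into a simple illustrative rate model. The sketch below is a minimal Python example assuming a separable power-law/Arrhenius form; the exponent (0.627), activation energy (10.3 kJ mol⁻¹) and pre-exponential factor (≈8.7×10³) are taken from the abstract, while the separable form itself, the units, and the example conditions are assumptions made purely for illustration.

```python
import numpy as np

R = 8.314     # gas constant, J mol^-1 K^-1
E_A = 10.3e3  # activation energy from the abstract, J mol^-1
A = 8.7e3     # pre-exponential factor from the abstract (units as reported)
N = 0.627     # power-law exponent relating photocurrent to light intensity

def photocurrent(intensity_suns, temperature_c):
    """Illustrative rate model: sub-linear in light intensity, Arrhenius in temperature.

    The separable form j ~ A * I**n * exp(-E_A / (R*T)) is an assumption made for
    illustration; it is not stated explicitly in the abstract.
    """
    temperature_k = temperature_c + 273.15
    return A * intensity_suns**N * np.exp(-E_A / (R * temperature_k))

# Example: compare 1 sun at 20 degC with 50 suns at 100 degC
print(photocurrent(1, 20), photocurrent(50, 100))
```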
Abstract:
Introduction: Previous studies investigating mothers’ sleep in the postpartum period have commonly demonstrated elevated levels of sleepiness in this population. A Karolinska Sleepiness Scale (KSS) rating of 5 or above is associated with an exponential increase in vehicle crash risk. To date, no studies have investigated the relationship between mothers’ sleep in the postpartum period and their driving behaviour. Methods: Sleep-wake diary data were collected from 14 mother-infant dyads during two 7-day assessment periods when the infants were 6 and 12 weeks old. The mothers indicated all driving episodes during these weeks and their sleepiness level at the time using the KSS. Semi-structured interviews were conducted with the mothers when their infant was 12 weeks old. Results: The infants slept significantly more than their mothers at 6 weeks and 12 weeks of age. At both time points, mothers and infants had a similar number of night awakenings (waking between 22:00 and 06:00), with some mothers experiencing more than 19 awakenings over 7 nights. Notably, 36% of the mothers did not experience a continuous sleep period longer than 4.5 hours when their infant was 6 weeks old. A total of 141 driving episodes were reported during the 7-day assessment period when the infants were 6 weeks old. Over 50% of the driving episodes were recorded with a KSS score of 5 or above. Strategies mothers reported employing during this period included driving only when feeling alert, postponing driving until another person was present, and driving in the morning when less sleepy. Conclusion: Mothers experience disrupted sleep at night, and some do not obtain more than 4.5 hours of continuous sleep during the early postpartum weeks. In this sample, some mothers reported self-regulating their driving behaviour; however, over half of the driving episodes were undertaken at a sleepiness rating linked to elevated crash risk.
Abstract:
A new dual-scale modelling approach is presented for simulating the drying of a wet hygroscopic porous material that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of wood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradients of moisture content and temperature on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic mass and thermal fluxes to be defined as averages of the microscopic fluxes over the unit cell. This novel formulation accounts for the intricate coupling of heat and mass transfer at the microscopic scale but reduces to a classical homogenisation approach if a linear relationship is assumed between the microscopic gradient and flux. Simulation results for a sample of spruce wood highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to propose the form of the macroscopic fluxes prior to the simulations because these are determined as a direct result of the dual-scale formulation.
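The coupling strategy described in this abstract (macroscopic fluxes defined as unit-cell averages of microscopic fluxes, reducing to classical homogenisation under a linear gradient-flux relationship) can be summarised by a generic volume-averaging statement. The notation below is ours, introduced only to illustrate the idea, and is not taken from the thesis:

```latex
\mathbf{J}_{\mathrm{macro}}
  = \frac{1}{|\Omega|}\int_{\Omega} \mathbf{j}_{\mathrm{micro}}(\mathbf{y})\,\mathrm{d}\mathbf{y},
\qquad\text{and if }\;
\mathbf{j}_{\mathrm{micro}} = -\mathbf{K}(\mathbf{y})\,\nabla_{x}\bar{u}
\;\text{ then }\;
\mathbf{J}_{\mathrm{macro}} = -\mathbf{K}^{\mathrm{eff}}\,\nabla_{x}\bar{u},
```

where Ω is the periodic unit cell, ū the imposed macroscopic field (moisture content or temperature), and K^eff the resulting effective transport coefficient recovered in the linear, classical-homogenisation limit.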
Abstract:
Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in Bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcriptional regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors grouped by their regulatory role and the corresponding promoter strength. Our study of E. coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. Some of these features proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters where the corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in the false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system, the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the predicted regulatory interactions. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were absent from both E. coli and B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
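The spectrum-kernel SVM approach used above to refine TFBS predictions can be illustrated with a short sketch. The example below is a minimal, hypothetical Python implementation of a k-mer spectrum feature map fed to a linear SVM (the linear kernel on k-mer counts is equivalent to the spectrum kernel); the sequences, labels and k value are placeholders, and this is not the pipeline or data used in the thesis.

```python
from itertools import product
import numpy as np
from sklearn.svm import SVC

K = 3  # k-mer length for the spectrum feature map (assumed for illustration)
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

def spectrum_features(seq):
    """Count every k-mer in the sequence; a linear kernel on these count
    vectors is equivalent to the spectrum kernel."""
    counts = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        kmer = seq[i:i + K]
        if kmer in INDEX:
            counts[INDEX[kmer]] += 1
    return counts

# Toy candidate binding-site sequences with hypothetical labels (1 = site, 0 = non-site)
seqs = ["TGTGATCTAGATCACA", "AAGTGTGACATGGCTA", "GCGCGCGCGCGCGCGC", "TTTTTTTTTTTTTTTT"]
labels = [1, 1, 0, 0]

X = np.array([spectrum_features(s) for s in seqs])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict([spectrum_features("TGTGATCGAGATCACA")]))
```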
Abstract:
Attention Deficit Hyperactivity Disorder is a diagnostic term now indelibly scored on the public psyche. In some quarters, a diagnosis of “ADHD” is regarded with derision. In others it is welcomed with relief. Despite intense multi-disciplinary research, the jury is still out with regard to the “truth” of ADHD. Not surprisingly, the rapid increase in diagnosis over the past fifteen years, coupled with an exponential rise in the prescription of restricted-class psychopharmaceuticals, has stirred virulent debate. Provoking the most interest, it seems, are questions regarding causality. Typically, these revolve around possible antecedents for “disorderly” behaviour – bad food, bad TV and bad parents. Very seldom is the institution of schooling ever in the line of sight. To investigate this gap, I draw on Foucault to question what might be happening in schools and how this may be contributing to the definition, recognition and classification of particular children as a particular kind of “disorderly”.
Abstract:
In recent years considerable attention has been paid to the numerical solution of stochastic ordinary differential equations (SODEs), as SODEs are often more appropriate than their deterministic counterparts in many modelling situations. However, unlike the deterministic case, numerical methods for SODEs are considerably less sophisticated, due to the difficulty in representing the (possibly large number of) random variable approximations to the stochastic integrals. Although Burrage and Burrage [High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Applied Numerical Mathematics 22 (1996) 81-101] were able to construct strong local order 1.5 stochastic Runge-Kutta methods for certain cases, it is known that all extant stochastic Runge-Kutta methods suffer an order reduction down to strong order 0.5 if there is non-commutativity between the functions associated with the multiple Wiener processes. This order reduction down to that of the Euler-Maruyama method imposes severe difficulties in obtaining meaningful solutions in a reasonable time frame, and this paper attempts to circumvent these difficulties with some new techniques. An additional difficulty in solving SODEs arises even in the linear case, since it is not possible to write the solution analytically in terms of matrix exponentials unless there is a commutativity property between the functions associated with the multiple Wiener processes. Thus in the present paper, first the work of Magnus [On the exponential solution of differential equations for a linear operator, Communications on Pure and Applied Mathematics 7 (1954) 649-673] (applied to deterministic non-commutative linear problems) will be applied to non-commutative linear SODEs, and methods of strong order 1.5 for arbitrary, linear, non-commutative SODE systems will be constructed, hence giving an accurate approximation to the general linear problem. Secondly, for general nonlinear non-commutative systems with an arbitrary number (d) of Wiener processes it is shown that strong local order 1 Runge-Kutta methods with d + 1 stages can be constructed by evaluating a set of Lie brackets as well as the standard function evaluations. A method is then constructed which can be efficiently implemented in a parallel environment for this arbitrary number of Wiener processes. Finally some numerical results are presented which illustrate the efficacy of these approaches. (C) 1999 Elsevier Science B.V. All rights reserved.
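For reference, the Euler-Maruyama scheme mentioned above (the strong order 0.5 baseline to which stochastic Runge-Kutta methods reduce under non-commutative noise) can be sketched in a few lines of Python. The drift, diffusion matrices and Wiener-process count below are illustrative placeholders, not the test problems of the paper.

```python
import numpy as np

def euler_maruyama(f, g, y0, t_end, n_steps, d, rng):
    """Euler-Maruyama for dY = f(Y) dt + sum_j g_j(Y) dW_j (strong order 0.5).

    f(y) returns the drift vector; g(y) returns a matrix whose j-th column is
    the diffusion term for the j-th of the d Wiener processes.
    """
    dt = t_end / n_steps
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=d)   # Wiener increments
        y = y + f(y) * dt + g(y) @ dW
    return y

# Toy linear SODE with two non-commuting diffusion matrices (illustrative only)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B1 = np.array([[0.1, 0.0], [0.0, 0.2]])
B2 = np.array([[0.0, 0.3], [0.1, 0.0]])
f = lambda y: A @ y
g = lambda y: np.column_stack((B1 @ y, B2 @ y))

print(euler_maruyama(f, g, [1.0, 0.0], 1.0, 1000, d=2, rng=np.random.default_rng(0)))
```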
Abstract:
Although there has been exponential growth in the number of studies of destination image appearing in the tourism literature, few have addressed the role of affective perceptions. This paper analyses the market positions held by a competitive set of destinations, through a comparison of cognitive, affective and conative perceptions. Cognitive perceptions were measured by trialling a factor analytic adaptation of importance-performance analysis. Affective perceptions were measured using an affective response grid. The alignment of the results from these techniques identified leadership positions held by two quite different destinations on two quite different dimensions of short break destination attractiveness.
Abstract:
Production of a nanofibrous polyacrylonitrile/calcium carbonate (PAN/CaCO3) nanocomposite web was carried out through a solution electrospinning process. Pore-generating nanoparticles were leached from the PAN matrices in a hydrochloric acid bath with the purpose of producing a final nanoporous structure. The possible interaction between CaCO3 nanoparticles and PAN functional groups was investigated. The atomic absorption method was used to measure the amount of extracted CaCO3 nanoparticles. Morphological observation showed nanofibers of 270–720 nm in diameter containing nanopores of 50–130 nm. Monitoring the governing parameters statistically, it was found that the amount of extraction (ε) of CaCO3 increased as the web surface area (a) was broadened, according to a simple scaling law (ε = 3.18 a^0.4). The leaching process was maximized in the presence of 5% v/v of acid in the extraction bath and 5 wt % of CaCO3 in the polymer solution. The collateral effects of extraction time and temperature showed exponential growth, with a favorable extremum at 50°C for 72 h. The concentration of dimethylformamide as the solvent had no significant impact on the extraction level.
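The reported scaling law lends itself to a one-line check. The snippet below simply evaluates ε = 3.18 a^0.4 for a few surface-area values; the units of a and ε are not specified in the abstract and are left abstract here, and the example values are arbitrary.

```python
def extraction(surface_area):
    """Reported scaling law for the amount of extracted CaCO3: eps = 3.18 * a**0.4."""
    return 3.18 * surface_area**0.4

for a in (1.0, 2.0, 5.0, 10.0):   # arbitrary surface-area values (units unspecified)
    print(a, round(extraction(a), 2))
```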
Abstract:
Objective To evaluate the time course of the recovery of transverse strain in the Achilles and patellar tendon following a bout of resistance exercise. Methods Seventeen healthy adults underwent sonographic examination of the right patellar (n=9) and Achilles (n=8) tendons immediately prior to and following 90 repetitions of weight-bearing quadriceps and gastrocnemius resistance exercise performed against an effective resistance of 175% and 250% body weight, respectively. Sagittal tendon thickness was determined 20 mm from the enthesis, and transverse strain, as defined by the stretch ratio, was repeatedly monitored over a 24 h recovery period. Results Resistance exercise resulted in an immediate decrease in Achilles (t(7)=10.6, p<0.01) and patellar (t(8)=8.9, p<0.01) tendon thickness, resulting in an average transverse stretch ratio of 0.86±0.04 and 0.82±0.05, which was not significantly different between tendons. The magnitude of the immediate transverse strain response, however, was reduced with advancing age (r=0.63, p<0.01). Recovery in transverse strain was prolonged compared with the duration of loading and exponential in nature. The average primary recovery time was not significantly different between the Achilles (6.5±3.2 h) and patellar (7.1±3.2 h) tendons. Body weight accounted for 62% and 64% of the variation in recovery time, respectively. Conclusions Despite structural and biochemical differences between the Achilles and patellar tendon, the mechanisms underlying transverse creep recovery in vivo appear similar and are highly time dependent. These novel findings have important implications concerning the time required for the mechanical recovery of high-stress tendons following an acute bout of exercise.
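The exponential recovery of transverse strain described above is the kind of response commonly summarised by fitting a single-exponential return to baseline. The sketch below shows such a fit with SciPy on synthetic data; the functional form, parameter values and data are illustrative assumptions, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, tau):
    """Single-exponential recovery of the stretch ratio from 0.85 back to 1.0
    with time constant tau (hours) -- an assumed form for illustration."""
    return 1.0 - 0.15 * np.exp(-t / tau)

# Synthetic stretch-ratio measurements over a 24 h recovery period
t_hours = np.array([0, 1, 2, 4, 6, 12, 24], dtype=float)
ratio = recovery(t_hours, tau=3.0) + np.random.default_rng(1).normal(0, 0.005, t_hours.size)

(tau_fit,), _ = curve_fit(recovery, t_hours, ratio, p0=[2.0])
print(f"fitted recovery time constant: {tau_fit:.1f} h")
```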
Abstract:
Technological growth in the 21st century is exponential, while understanding of the associated risk, uncertainty and user acceptance remains scattered. This calls for appropriate study of people accepting controversial technology (PACT). The Internet and the services around it, such as the World Wide Web, e-mail, instant messaging and social networking, are increasingly becoming important in many aspects of our lives. Sharing medical and personal health information using the Internet is controversial and demands validity, usability and acceptance. Whilst the literature suggests that the Internet enhances positive interactions between patients and physicians, some studies establish the opposite, in particular because of the associated risk. In recent years the Internet has attracted considerable attention as a means to improve health and health care delivery. However, it is not clear how widespread the use of the Internet for health care really is, or what impact it has on health care utilisation. Estimates of the impact of Internet usage vary widely across locations, both locally and globally. As a result, an estimate (or prediction) of Internet use and its effects in Medical Informatics related decision-making is impractical. This opens up research issues in validating and accepting Internet usage when designing and developing appropriate policy and process activities for Medical Informatics, Health Informatics and/or e-Health related protocols. Data on Internet usage for Medical Informatics related activities are not readily accessible or available. This paper presents a trend analysis of the growth of Internet usage in medical informatics related activities. In order to perform the analysis, data were extracted from ERA (Excellence in Research for Australia) ranked “A” and “A*” journal publications and from reports in the authenticated public domain. The study is limited to the analysis of Internet usage trends in the United States, Italy, France and Japan. Projected trends and their influence on the field of medical informatics are reviewed and discussed. The study clearly indicates a trend of patients becoming active consumers of health information rather than passive recipients.
Abstract:
Denial-of-service (DoS) attacks are a growing concern for networked services like the Internet. In recent years, major Internet e-commerce and government sites have been disabled due to various DoS attacks. A common form of DoS attack is a resource depletion attack, in which an attacker tries to overload the server's resources, such as memory or computational power, rendering the server unable to service honest clients. A promising way to deal with this problem is for a defending server to identify and segregate malicious traffic as early as possible. Client puzzles, also known as proofs of work, have been shown to be a promising tool to thwart DoS attacks in network protocols, particularly in authentication protocols. In this thesis, we design efficient client puzzles and propose a stronger security model for analysing client puzzles. We revisit a few key establishment protocols to analyse their DoS-resilient properties and strengthen them using existing and novel techniques. Our contributions in the thesis are manifold. We propose an efficient client puzzle whose security holds in the standard model under new computational assumptions. Assuming the presence of powerful DoS attackers, we find a weakness in the most recent security model proposed for analysing client puzzles, and this study leads us to introduce a better security model for analysing client puzzles. We demonstrate the utility of our new security definitions by presenting two stronger hash-based client puzzles. We also show that, using stronger client puzzles, any protocol can be converted into a provably secure DoS-resilient key exchange protocol. Among our other contributions, we analyse the DoS-resilient properties of network protocols such as Just Fast Keying (JFK) and Transport Layer Security (TLS). In the JFK protocol, we identify a new DoS attack by applying Meadows' cost-based framework to analyse DoS-resilient properties. We also prove that the original security claim of JFK does not hold. We then apply an existing technique to reduce the server cost and prove that the new variant of JFK achieves perfect forward secrecy (a property not achieved by the original JFK protocol) and is secure under the original security assumptions of JFK. Finally, we introduce a novel cost-shifting technique which reduces the computation cost of the server significantly and employ the technique in one of the most important network protocols, TLS, to analyse the security of the resultant protocol. We also observe that the cost-shifting technique can be incorporated in any Diffie-Hellman based key exchange protocol to reduce the Diffie-Hellman exponential cost of a party by one multiplication and one addition.
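Hash-based client puzzles of the kind referred to above are often illustrated with a hashcash-style construction: the server issues a random challenge and a difficulty level, and the client must find a solution whose hash has a prescribed number of leading zero bits. The Python sketch below is a generic illustration of that idea only; it is not the specific puzzles or security model proposed in the thesis.

```python
import hashlib
import os

def new_puzzle(difficulty_bits):
    """Server side: issue a fresh random challenge and a difficulty level."""
    return os.urandom(16), difficulty_bits

def solve(challenge, difficulty_bits):
    """Client side: brute-force a counter until the hash has the required
    number of leading zero bits (expected work grows exponentially with difficulty)."""
    counter = 0
    while True:
        digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return counter
        counter += 1

def verify(challenge, difficulty_bits, solution):
    """Server side: verification costs a single hash, keeping the defender cheap."""
    digest = hashlib.sha256(challenge + solution.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

challenge, bits = new_puzzle(difficulty_bits=16)
solution = solve(challenge, bits)
print(verify(challenge, bits, solution))
```

The asymmetry between solve and verify is what makes such puzzles useful against resource depletion: the client pays an adjustable amount of work before the server commits any expensive computation.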
Abstract:
Achilles tendinopathy is a common disorder involving physically active and sedentary individuals alike. Although the processes underlying its development are poorly understood, tendinopathy is widely regarded as an ‘overuse’ injury in which the tendon fails to adapt to prevalent loading conditions. Paradoxically, there is emerging evidence that heavy eccentric loading of the Achilles tendon may be an effective conservative approach for the treatment of tendinopathy, with success rates of 60–80% reported. Interestingly, loading exercises involving other forms of muscle action, such as concentric activation, have been shown to be less effective treatment options. However, little is known about the acute response of tendon to exercise at present, and there are few plausible explanatory mechanisms for the observed beneficial effects of eccentric exercise, as opposed to other forms of strain stimuli. This paper presents the findings from a series of experiments undertaken to evaluate the effect of various strain stimuli on the time-dependent response of human Achilles tendon in vivo. It was shown, for the first time, that heavy resistive ankle plantarflexion/dorsiflexion exercises induced an immediate and significant decrease in Achilles tendon thickness (~15%). While thickness returned to pre-exercise levels within 24 hours, the recovery was exponential, with primary recovery occurring in less than 6 hours post-exercise. We proposed that such a diametral strain response with tensile loading reflects collagen realignment, Poisson effects and radial extrusion of water from the tendon core. With unloading, the recovery of tendon dimensions likely reflects the re-diffusion of water via osmotic and/or inflammatory-driven processes. Interestingly, prolonged walking was found to induce a similar diametral strain response. In subsequent studies, we demonstrated that eccentric exercise resulted in a greater reduction (-21%) in Achilles tendon thickness than isolated concentric exercise alone (-5%), despite a similar loading impulse. These novel findings, coupled with observations of a reduced diametral strain response with tendon pathology, highlight the importance of fluid movement to tendon function, nutrition and health. They also provide new insights into potential mechanisms underlying Achilles tendinopathy that impact rehabilitation strategies.
Abstract:
With an increased emphasis on genotyping of single nucleotide polymorphisms (SNPs) in disease association studies, the genotyping platform of choice is constantly evolving. In addition, the development of more specific SNP assays and appropriate genotype validation applications is becoming increasingly critical to elucidate ambiguous genotypes. In this study, we have used SNP-specific Locked Nucleic Acid (LNA) hybridization probes on a real-time PCR platform to genotype an association cohort and propose three criteria to address ambiguous genotypes. Based on the kinetic properties of PCR amplification, the three criteria address PCR amplification efficiency, the net fluorescence difference between the maximal and minimal fluorescent signals, and the beginning of the exponential growth phase of the reaction. Initially observed SNP allelic discrimination curves were confirmed by DNA sequencing (n = 50), and application of our three genotype criteria corroborated both the sequencing and the observed real-time PCR results. In addition, the tested Caucasian association cohort was in Hardy-Weinberg equilibrium, and the observed allele frequencies were very similar to those of two independently tested Caucasian association cohorts for the same SNP. We present here a novel approach to effectively determine ambiguous genotypes generated from a real-time PCR platform. Application of our three novel criteria provides an easy-to-use, semi-automated genotype confirmation protocol.
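The three kinetic criteria listed above can be approximated from a raw fluorescence-versus-cycle curve. The sketch below is a purely illustrative Python implementation using generic definitions (net fluorescence as maximum minus minimum, efficiency from the slope of the log-linear phase, and the exponential-phase onset as the first cycle above a noise-based baseline); these formulas and thresholds are our assumptions, not the published criteria.

```python
import numpy as np

def evaluate_amplification(fluorescence):
    """Apply three generic kinetic checks to a real-time PCR fluorescence curve.

    Returns the net fluorescence difference, an estimated amplification efficiency,
    and the cycle at which the exponential phase begins. All definitions here are
    illustrative assumptions.
    """
    f = np.asarray(fluorescence, dtype=float)
    net_difference = f.max() - f.min()                      # criterion: net fluorescence
    baseline = f[:5].mean() + 10 * f[:5].std()              # noise-based threshold
    onset_cycle = int(np.argmax(f > baseline))              # criterion: exponential-phase onset
    # Criterion: efficiency from the slope of log(F) over the early exponential phase,
    # where F_(n+1) = F_n * (1 + E) implies E = exp(slope) - 1 (E = 1 is perfect doubling).
    window = slice(onset_cycle, min(onset_cycle + 5, len(f)))
    slope = np.polyfit(np.arange(len(f))[window], np.log(f[window]), 1)[0]
    efficiency = np.exp(slope) - 1.0
    return net_difference, efficiency, onset_cycle

# Synthetic sigmoid-shaped curve used purely for demonstration
cycles = np.arange(40)
curve = 0.05 + 1.0 / (1.0 + np.exp(-(cycles - 22) / 2.0))
print(evaluate_amplification(curve))
```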
Abstract:
The growth of graphene on SiC/Si substrates is an appealing alternative to growth on bulk SiC, both for cost reduction and to better integrate the material with Si-based electronic devices. In this paper, we present a complete in-situ study of the growth of epitaxial graphene on 3C-SiC(111)/Si(111) substrates via high-temperature annealing (ranging from 1125°C to 1375°C) in ultra-high vacuum (UHV). The quality and number of graphene layers have been thoroughly investigated using X-ray photoelectron spectroscopy (XPS), while the surface has been characterised by scanning tunnelling microscopy (STM). Ex-situ Raman spectroscopy measurements confirm our findings, which demonstrate the exponential dependence of the number of graphene layers on the annealing temperature.
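The abstract states only that the number of layers depends exponentially on annealing temperature; one common way to express such a thermally activated dependence is an Arrhenius-like form, written below purely as an illustration (the exact functional form, and the symbols N₀ and E_a, are our assumptions, not the paper's):

```latex
N(T) \;\approx\; N_0 \,\exp\!\left(-\frac{E_a}{k_B T}\right),
```

where N is the number of graphene layers, T the annealing temperature, k_B Boltzmann's constant, and E_a an effective activation energy for the graphitisation process.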