251 results for Saturated throughput

Relevance: 10.00%

Publisher:

Abstract:

Human hair fibres are ubiquitous in nature and are found frequently at crime scenes, often as a result of exchange between the perpetrator, the victim and/or the surroundings, in accordance with Locard's Principle. Hair fibre evidence can therefore provide important information for crime investigation. For human hair evidence, current forensic methods of analysis rely on comparisons of either hair morphology by microscopic examination or nuclear and mitochondrial DNA analyses. Unfortunately, in some instances microscopy and DNA analyses are difficult and often not feasible. This dissertation is arguably the first comprehensive investigation aimed at comparing, classifying and identifying single human scalp hair fibres with the aid of FTIR-ATR spectroscopy in a forensic context. Spectra were collected from the hair of 66 subjects of Asian, Caucasian and African (i.e. African-type) origin. The fibres ranged from untreated to mildly and heavily cosmetically treated hairs. The collected spectra reflected the physical and chemical nature of the hair near the surface, particularly the cuticle layer. In total, 550 spectra were acquired and processed to construct a relatively large database. To assist with the interpretation of the complex spectra from various types of human hair, derivative spectroscopy and chemometric methods were utilised: Principal Component Analysis (PCA), Fuzzy Clustering (FC), and the Multi-Criteria Decision Making (MCDM) methods Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE) and Geometrical Analysis for Interactive Aid (GAIA). FTIR-ATR spectroscopy offered two important advantages over previous methods: (i) sample throughput and spectral collection were significantly improved (no physical flattening or microscope manipulations were required), and (ii) given recent advances in FTIR-ATR instrument portability, there is real potential to transfer this work's findings seamlessly to in-field applications.
The "raw" spectra, spectral subtractions and second-derivative spectra were compared to demonstrate the subtle differences between human hairs. SEM images were used as corroborative evidence of the surface topography of hair, indicating that the condition of the cuticle surface falls into three types: untreated, mildly treated and chemically treated hair. Extensive studies of the spectral band regions potentially responsible for matching and discriminating various types of hair samples suggested that the 1690-1500 cm-1 region was to be preferred over the commonly used 1750-800 cm-1 region. The principal reason was the presence of highly variable spectral profiles of cystine oxidation products (1200-1000 cm-1), which contributed significantly to spectral scatter and hence to poor hair sample matching. In the preferred 1690-1500 cm-1 region, conformational changes in the keratin protein, attributed to α-helical to β-sheet transitions in the Amide I and Amide II vibrations, played a significant role in matching and discriminating the spectra and hence the hair fibre samples. For gender comparison, the Amide II band is significant for differentiation: male hair spectra exhibited a more intense β-sheet vibration in the Amide II band at approximately 1511 cm-1, whilst female hair spectra displayed a more intense α-helical vibration at 1520-1515 cm-1. In terms of chemical composition, female hair spectra exhibited greater intensities for the amino acids tryptophan (1554 cm-1) and aspartic and glutamic acid (1577 cm-1). It was also observed that, for the separation of samples by racial origin, untreated Caucasian hair was discriminated from Asian hair as a result of higher levels of cystine and cysteic acid. However, when mildly or chemically treated, Asian and Caucasian hair fibres are similar, whereas African-type hair fibres remain distinct.
In terms of the investigation's novel contribution to the field of forensic science, it has allowed for the development of a multifaceted, methodical protocol where previously none had existed. The protocol is a systematic method to rapidly investigate unknown or questioned single human hair FTIR-ATR spectra from different genders and racial origins, including fibres with different cosmetic treatments. Unknown or questioned spectra are first separated on the basis of chemical treatment (untreated, mildly treated or chemically treated), then gender, and then racial origin (Asian, Caucasian or African-type). The methodology has the potential to complement the current forensic methods for analysing fibre evidence (i.e. microscopy and DNA), providing information at the morphological, genetic and structural levels.
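The chemometric pipeline described (second-derivative spectra followed by PCA) can be sketched in a few lines. The "spectra" below are synthetic toy data, and a real workflow would use Savitzky-Golay derivatives and the thesis's preferred 1690-1500 cm-1 window rather than arbitrary channel indices; this is an illustration of the idea only.

```python
import numpy as np

def second_derivative(spectra, wavenumber_step=4.0):
    # Central finite-difference second derivative along the wavenumber axis;
    # production work would use Savitzky-Golay smoothing to control noise.
    return np.diff(spectra, n=2, axis=1) / wavenumber_step**2

def pca_scores(X, n_components=2):
    # PCA via eigendecomposition of the covariance of mean-centred data.
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order]

# Toy "spectra": two groups differing in one band, loosely mimicking a
# group-specific Amide-region feature (all values synthetic).
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 3, 50))
group_a = base + rng.normal(0, 0.01, (10, 50))
group_b = base + rng.normal(0, 0.01, (10, 50))
group_b[:, 20:25] += 0.3                      # extra band in group B
scores = pca_scores(second_derivative(np.vstack([group_a, group_b])))
```

On these data the two groups separate along the first principal component, which is the kind of clustering the thesis exploits for matching and discrimination.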

Abstract:

Background
Both sorghum (Sorghum bicolor) and sugarcane (Saccharum officinarum) are members of the Andropogoneae tribe in the Poaceae, and are each other's closest relatives amongst cultivated plants. Both are relatively recent domesticates, and comparatively little of the genetic potential of these taxa and their wild relatives has been captured by breeding programmes to date. This review assesses the genetic gains made by plant breeders since domestication and the progress in the characterization of genetic resources and their utilization in crop improvement for these two related species.

Genetic Resources
The genome of sorghum has recently been sequenced, providing a great boost to our knowledge of the evolution of grass genomes and the wealth of diversity within S. bicolor taxa. Molecular analysis of the Sorghum genus has identified close relatives of S. bicolor with novel traits, endosperm structure and composition that may be used to expand the cultivated gene pool. Mutant populations (including TILLING populations) provide a useful addition to the genetic resources for this species. Sugarcane is a complex polyploid with a large and variable number of copies of each gene. The wild relatives of sugarcane represent a reservoir of genetic diversity for use in sugarcane improvement. Techniques for quantitative molecular analysis of gene or allele copy number in this genetically complex crop have been developed. SNP discovery and mapping in sugarcane have been advanced by the development of high-throughput techniques for ecoTILLING. Genetic linkage maps of the sugarcane genome are being improved for use in breeding selection. The improvement of both sorghum and sugarcane will be accelerated by the incorporation of more diverse germplasm into the domesticated gene pools, using molecular tools and the improved knowledge of these genomes.

Abstract:

Although germline mutations in CDKN2A are present in approximately 25% of large multicase melanoma families, germline mutations are much rarer in the smaller melanoma families that make up most individuals reporting a family history of this disease. In addition, only three families worldwide have been reported with germline mutations in a gene other than CDKN2A (i.e., CDK4). Accordingly, it is hoped that current genomewide scans underway at the National Human Genome Research Institute will reveal linkage to one or more chromosomal regions and ultimately lead to the identification of novel genes involved in melanoma predisposition. Both CDKN2A and PTEN have been identified as genes involved in sporadic melanoma development; however, mutations are more common in cell lines than in uncultured tumors. A combination of cytogenetic, molecular and functional studies suggests that additional genes involved in melanoma development are located in chromosomal regions 1p, 6q, 7p, 11q, and possibly also 9p and 10q. With the near completion of the human genome sequencing effort, combined with the advent of high-throughput mutation analyses and new techniques including cDNA and tissue microarrays, the identification and characterization of additional genes involved in melanoma pathogenesis seem likely in the near future.

Abstract:

Popular wireless networks, such as IEEE 802.11/15/16, are not designed for real-time applications, so supporting real-time quality of service (QoS) in wireless real-time control is challenging. This paper adopts the widely used IEEE 802.11 standard, focusing on its distributed coordination function (DCF), for soft real-time control systems. The concept of the critical real-time traffic condition is introduced to characterise the marginal satisfaction of real-time requirements. Mathematical models are then developed to describe the dynamics of DCF-based real-time control networks with periodic traffic, a unique feature of control systems. Performance indices such as throughput and packet delay are evaluated using the developed models, particularly under the critical real-time traffic condition. Finally, the proposed modelling is applied to traffic rate control for cross-layer networked control system design.
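The paper's own periodic-traffic model is not reproduced in the abstract. As a point of reference, the classic saturation-throughput analysis of DCF (Bianchi's fixed-point model, which the critical-traffic analysis here generalises away from) can be computed as follows; the contention-window, slot and overhead durations below are illustrative placeholders, not the 802.11 standard's exact values.

```python
def bianchi_tau(n, W=32, m=5, iters=60):
    """Solve Bianchi's fixed point for the per-slot transmission
    probability tau of n saturated DCF stations, by bisection."""
    def g(tau):
        p = 1.0 - (1.0 - tau) ** (n - 1)          # conditional collision prob.
        num = 2.0 * (1.0 - 2.0 * p)
        den = (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        return num / den - tau                     # root => self-consistent tau
    lo, hi = 1e-9, 2.0 / (W + 1)                   # tau cannot exceed 2/(W+1)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def saturation_throughput(n, payload=8184.0, sigma=50.0, Ts=8500.0, Tc=8000.0,
                          W=32, m=5):
    # Normalised throughput: fraction of channel time carrying payload bits.
    # sigma/Ts/Tc are empty-slot, success and collision durations (placeholders).
    tau = bianchi_tau(n, W, m)
    Ptr = 1.0 - (1.0 - tau) ** n                   # some station transmits
    Ps = n * tau * (1.0 - tau) ** (n - 1) / Ptr    # ...and it succeeds
    return Ptr * Ps * payload / (
        (1.0 - Ptr) * sigma + Ptr * Ps * Ts + Ptr * (1.0 - Ps) * Tc)
```

Running `saturation_throughput` for increasing `n` shows throughput degrading with contention, the baseline behaviour against which real-time deadline satisfaction has to be judged.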

Abstract:

Web service technology is increasingly being used to build various e-Applications, in domains such as e-Business and e-Science. Characteristic benefits of web service technology are its interoperability, decoupling and just-in-time integration. Using web service technology, an e-Application can be implemented by web service composition: composing existing individual web services in accordance with the business process of the application, so that the application is provided to customers in the form of a value-added composite web service. An important and challenging issue in web service composition is how to meet Quality-of-Service (QoS) requirements. These include customer-focused attributes such as response time, price, throughput and reliability; meeting them determines how well the composite fulfils customers' expectations and achieves their satisfaction. Fulfilling these QoS requirements, i.e. addressing the QoS-aware web service composition problem, is the focus of this project. From a computational point of view, QoS-aware web service composition can be transformed into diverse optimisation problems, characterised as complex, large-scale, highly constrained and multi-objective. We therefore use genetic algorithms (GAs) to address QoS-based service composition problems. More precisely, this study addresses three important subproblems of QoS-aware web service composition: QoS-based web service selection for a composite web service, accommodating constraints on inter-service dependence and conflict; QoS-based resource allocation and scheduling for multiple composite services on hybrid clouds; and performance-driven composite service partitioning for decentralised execution. Based on operations research theory, we model the three problems as a constrained optimisation problem, a resource allocation and scheduling problem, and a graph partitioning problem, respectively.
We then present novel GAs to address these problems, conduct experiments to evaluate their performance, and perform verification experiments to show their correctness. The major outcomes from the first problem are three novel GAs: a penalty-based GA, a min-conflict hill-climbing repairing GA, and a hybrid GA. These GAs adopt different strategies to handle constraints on inter-service dependence and conflict, an important factor that has been largely ignored by existing algorithms and that can lead to the generation of infeasible composite services. Experimental results demonstrate the effectiveness of our GAs in handling the QoS-based web service selection problem with constraints on inter-service dependence and conflict, as well as their better scalability than the existing integer programming-based method for large-scale web service selection problems. The major outcomes from the second problem are two GAs: a random-key GA and a cooperative coevolutionary GA (CCGA). Experiments demonstrate the good scalability of both algorithms. In particular, the CCGA scales well as the number of composite services involved in a problem increases, an ability no other algorithm demonstrates. The findings from the third problem include a novel GA for composite service partitioning for decentralised execution. Compared with existing heuristic algorithms, the new GA is more suitable for large-scale composite web service program partitioning problems. In addition, the GA outperforms existing heuristic algorithms, generating a better deployment topology for a composite web service for decentralised execution. These effective and scalable GAs can be integrated into QoS-based management tools to facilitate the delivery of feasible, reliable and high-quality composite web services.
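As an illustration of the penalty-based constraint handling described for the first subproblem, here is a minimal GA for QoS-based service selection under an end-to-end response-time budget. The candidate QoS values, penalty weight and GA parameters are all invented for the sketch; the thesis's actual algorithms also handle inter-service dependence and conflict constraints, which are omitted here.

```python
import random

# Toy QoS-based selection: pick one candidate service per abstract task,
# minimising total price subject to a response-time budget (synthetic data).
random.seed(1)
N_TASKS, N_CANDS, BUDGET = 8, 5, 40.0
price = [[random.uniform(1, 10) for _ in range(N_CANDS)] for _ in range(N_TASKS)]
rtime = [[random.uniform(2, 8) for _ in range(N_CANDS)] for _ in range(N_TASKS)]

def fitness(ind, penalty_weight=100.0):
    cost = sum(price[t][c] for t, c in enumerate(ind))
    rt = sum(rtime[t][c] for t, c in enumerate(ind))
    # Penalty-based constraint handling: infeasible plans stay in the
    # population but pay proportionally to the size of the violation.
    return cost + penalty_weight * max(0.0, rt - BUDGET)

def evolve(pop_size=40, gens=60, mut=0.1):
    pop = [[random.randrange(N_CANDS) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]               # elitist survivor selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_TASKS)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_TASKS):               # per-gene mutation
                if random.random() < mut:
                    child[i] = random.randrange(N_CANDS)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
```

The design choice being illustrated is that the penalty keeps infeasible individuals searchable (their genetic material is not discarded) while still steering the population toward feasibility; repair-based and hybrid strategies, as in the thesis, are alternatives with different trade-offs.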

Abstract:

Microbial pollution in water periodically affects human health in Australia, particularly in times of drought and flood, and there is an increasing need for the control of waterborne microbial pathogens. Methods that allow the origin of faecal contamination in water to be determined are generally referred to as Microbial Source Tracking (MST). Various approaches have been evaluated as indicators of microbial pathogens in water samples, including the detection of different microorganisms and of various host-specific markers. However, to date there is no universal MST method that can reliably determine the source (human or animal) of faecal contamination, and the use of multiple approaches is therefore frequently advised. MST is currently recognised as a research tool rather than something to be included in routine practice. The main focus of this research was to develop novel and universally applicable methods to meet the demand for MST methods in routine testing of water samples. Escherichia coli was chosen initially as the object organism for our studies as, historically and globally, it is the standard indicator of microbial contamination in water. In this thesis, three approaches are described: single nucleotide polymorphism (SNP) genotyping, clustered regularly interspaced short palindromic repeats (CRISPR) screening using high resolution melt analysis (HRMA), and phage detection based on CRISPR types. The advantage of combining SNP genotyping and CRISPR screening is discussed in this study. For the first time, highly discriminatory single nucleotide polymorphism interrogation of an E. coli population was applied to identify host-specific clusters. Six human-specific and one animal-specific SNP profiles were revealed. SNP genotyping was successfully applied in field investigations of the Coomera watershed, South-East Queensland, Australia.
Four human-specific SNP profiles [11], [29], [32] and [45] and one animal-specific profile [7] were detected in water. Two human-specific profiles, [29] and [11], were found to be prevalent in the samples over a period of years. Rainfall (24 and 72 hours), tide height and time, general land use (rural, suburban), season, distance from the river mouth and salinity showed no relationship with the diversity of SNP profiles present in the Coomera watershed (p values > 0.05). Nevertheless, the SNP genotyping method is able to identify and distinguish between human- and non-human-specific E. coli isolates in water sources within one day. In some samples, only mixed profiles were detected. To further investigate host-specificity in these mixed profiles, a CRISPR screening protocol was developed for use on the set of E. coli isolates previously analysed for SNP profiles. CRISPR loci, which record previous attacks by DNA coliphages, were considered a promising tool for detecting host-specific markers in E. coli. Spacers in CRISPR loci could also reveal the dynamics of virulence in E. coli, as well as in other waterborne pathogens. Although host-specificity was not observed in the set of E. coli analysed, CRISPR alleles were shown to be useful in detecting the geographical origin of sources. HRMA allows 'different' and 'same' CRISPR alleles to be determined and can be introduced into water monitoring as a cost-effective and rapid method. Overall, we show that the identified human-specific SNP profiles [11], [29], [32] and [45] can be useful globally as marker genotypes for the identification of human faecal contamination in water. The SNP typing approach developed in the current study can be used in water monitoring laboratories as an inexpensive, high-throughput and easily adapted protocol. A unique approach based on E. coli spacers was developed to search for unknown phages and examine host-specificity in phage sequences.
Preliminary experiments on recombinant plasmids showed the possibility of using this method to recover phage sequences. Future studies will determine the host-specificity of DNA phage genotyping as soon as the first reliable sequences are acquired. Undoubtedly, only the application of multiple approaches in MST will allow the character of microbial contamination to be identified with higher confidence and reliability.

Abstract:

Natural convection flow in a two-dimensional fluid-saturated porous enclosure, with localized heating from below, symmetrical cooling from the sides, and the top and the rest of the bottom wall insulated, has been investigated numerically. Darcy's law for porous media, along with the energy equation based on the first law of thermodynamics, has been considered. An implicit finite volume method with a TDMA solver is used to solve the governing equations. Localized heating is simulated by a centrally located isothermal heat source on the bottom wall, and four values of the dimensionless heat source length, 1/5, 2/5, 3/5 and 4/5, are considered. The effects of heat source length and Rayleigh number on the streamlines and isotherms are presented, as well as the variation of the local rate of heat transfer, in terms of the local Nusselt number, along the heated wall. Finally, the average Nusselt number at the heated part of the bottom wall is shown against Rayleigh number for each non-dimensional heat source length.
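The TDMA (Thomas algorithm) solver mentioned is the standard direct method for the tridiagonal systems produced by line-by-line implicit finite volume discretisations; a minimal sketch:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by the Thomas algorithm (TDMA).
    a: sub-diagonal (n-1 values), b: main diagonal (n values),
    c: super-diagonal (n-1 values), d: right-hand side (n values)."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                        # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):               # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For example, the system with diagonals (1, 2, 1) and right-hand side (4, 8, 8) has solution (1, 2, 3). The O(n) cost per line is what makes TDMA attractive inside an iterative implicit solver.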

Abstract:

In public places, crowd size may be an indicator of congestion, delay, instability, or of abnormal events such as a fight, riot or emergency. Crowd-related information can also provide important business intelligence, such as the distribution of people throughout spaces, throughput rates, and local densities. A major drawback of many crowd counting approaches is their reliance on large numbers of holistic features, training data requirements of hundreds or thousands of frames per camera, and the need to train each camera separately. This makes deployment in large multi-camera environments such as shopping centres very costly and difficult. In this chapter, we present a novel scene-invariant crowd counting algorithm that uses local features to monitor crowd size. The use of local features allows the proposed algorithm to calculate local occupancy statistics, scale to conditions unseen in the training data, and be trained on significantly less data. Scene invariance is achieved through camera calibration, allowing the system to be trained on one or more viewpoints and then deployed on any number of new cameras for testing without further training. A pre-trained system could then be used as a 'turn-key' solution for crowd counting across a wide range of environments, eliminating many of the costly barriers to deployment that currently exist.
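A minimal sketch of the scene-invariance idea, with synthetic numbers: a local feature (here, foreground area in pixels) is divided by a per-camera calibration factor (the expected pixel area of one person at that location) before regression, so a model trained on one viewpoint transfers to another without retraining. The chapter's actual algorithm uses richer local features; this toy uses a single feature and one-dimensional least squares.

```python
# Toy scene-invariant count regression (all numbers synthetic).
def fit_weight(areas, counts, pixels_per_person):
    # Calibrate the raw feature, then fit count = w * feature by least squares.
    x = [a / pixels_per_person for a in areas]
    return sum(xi * c for xi, c in zip(x, counts)) / sum(xi * xi for xi in x)

def predict(area, pixels_per_person, w):
    return w * area / pixels_per_person

# "Camera A" training data: roughly 600 px of foreground per person.
w = fit_weight([600, 1200, 3000], [1, 2, 5], 600.0)
# Deploy unchanged on "camera B" (different viewpoint: ~150 px per person).
estimate = predict(750, 150.0, w)
```

Because the calibration factor absorbs the viewpoint-dependent scale, the learned weight `w` is reusable across cameras, which is the 'train once, deploy anywhere' property described above.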

Abstract:

Disposal of mud and ash, particularly in wet weather, is a significant expense for mills. This paper reports on one part of a process to pelletise mud and ash, aimed at making them more attractive to growers across entire mill districts; the full process is described in a separate paper. The part described here involves re-constituting mud cake from the filter station at Tully Mill and processing it in a decanter centrifuge. The material produced by re-constituting and centrifuging is drier and made up of separate particles. The material needs to mix easily with boiler ash, and the mixture needs to be fed easily into a flue gas drier to be dried to low moisture. The results achieved with the particular characteristics of Tully Mill rotary vacuum filter cake are presented. It was found that an internal rotor with a 20º beach was not adequate to process re-constituted rotary vacuum filter mud; a rotor with a 10º beach worked much more successfully. A total of four tonnes of centrifuged mud with a moisture content ranging from 60% to 65% was produced. The torque, flocculant rate and dose rate were found to have statistically significant effects on the moisture content. Feed rate did not have a noticeable impact on the moisture content by itself, but torque had a much larger impact on the moisture content at the low feed rate than at the high feed rate. These results indicate that the moisture content of the mud can most likely be reduced with a low feed rate, low flocculant rate, high dose rate and high torque. One issue believed to affect the operation of a decanter centrifuge was the large quantity of long bagasse fibres in the rotary vacuum filter mud; it is likely that the long fibres limited the throughput of the centrifuge and the moisture content achieved.

Abstract:

Diversity techniques have long been used to combat channel fading in wireless communications systems. Recently, cooperative communications have attracted a lot of attention due to the many benefits they offer. Cooperative routing protocols with diversity transmission can thus be developed to exploit the random nature of wireless channels, improving network efficiency by selecting multiple cooperative nodes to forward data. In this paper we analyse and evaluate the performance of a novel routing protocol in which multiple cooperative nodes share multiple channels. The multiple shared channels cooperative (MSCC) routing protocol achieves a diversity advantage by using cooperative transmission. It unites a clustering hierarchy with a bandwidth reuse scheme to mitigate co-channel interference. A theoretical analysis of the average packet reception rate and network throughput of the MSCC protocol is presented and compared with simulation results.
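The diversity advantage of forwarding through multiple cooperative nodes can be illustrated with the simplest independent-link model. This is a generic sketch, not the MSCC paper's analysis, which additionally models clustering, bandwidth reuse and co-channel interference.

```python
def cooperative_success(p_link, n_relays):
    """Probability that at least one of n_relays independently forwarded
    copies of a packet gets through, when each link succeeds with
    probability p_link (the basic diversity gain of cooperative forwarding)."""
    return 1.0 - (1.0 - p_link) ** n_relays

# Reception rate rises quickly with the number of cooperating relays.
rates = [cooperative_success(0.7, n) for n in (1, 2, 3, 4)]
```

With a 70% per-link success rate, three relays already push the packet reception probability above 97%, which is why selecting multiple forwarders improves throughput despite the extra channel usage.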

Abstract:

Objective: Hospital EDs are a significant and high-profile component of Australia's health-care system, and in recent years have experienced considerable crowding. This crowding is caused by a combination of increasing demand, throughput and output factors. The aim of the present article is to clarify trends in the use of public ED services across Australia, with a view to providing an evidence base for future policy analysis and discussion. Methods: The data for the present article have been extracted, compiled and analysed from publicly available sources for the 10 year period between 2000–2001 and 2009–2010. Results: Demand for public ED care increased by 37% over the decade, an average annual increase of 1.8% in the utilization rate per 1000 persons. There were significant differences in utilization rates, and in trends in growth, among states and territories that do not easily relate to general population trends alone. Conclusions: This growth in demand exceeds general population growth, and the variability between states in both utilization rates and overall trends defies immediate explanation. The growth in demand for ED services is a partial contributor to the crowding being experienced in EDs across Australia. More detailed study is needed, including qualitative analysis of patient motivations, to identify the factors driving this growth in demand.
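The growth figures can be cross-checked with a compound-growth calculation: a 37% rise over ten years corresponds to roughly 3.2% per year in total presentations, which sits above the stated 1.8% annual rise in the per-1000-persons utilization rate exactly because population growth contributes the remainder.

```python
def cagr(total_growth, years):
    # Compound annual growth rate implied by total growth over a period.
    return (1.0 + total_growth) ** (1.0 / years) - 1.0

annual = cagr(0.37, 10)   # ~3.2% per year in total ED presentations
```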

Abstract:

Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. 
This type of cooperativity is commonly referred to as protease or substrate binding-subsite cooperativity, and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular, where neighbouring residues in a substrate interact and affect the scissile bond's susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular, where a particular residue's contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations that probe subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial libraries; PS-SCLs). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preferences of the various protease subsites independently, the method is inherently unable to detect subsite cooperativity. Nevertheless, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single-peptide cleavages. Before this study, no large systematic evaluation had been made of the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. To establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison was carried out between data from PS-SCLs and from libraries containing individually synthesised peptides (sparse matrix library; SML).
These SML libraries were designed to include all possible sequence combinations of the residues suggested to be preferred by a protease in the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from the two types of peptide library showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences relying on intermolecular cooperativity allowed the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition-state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to that of aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14-amino-acid circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease-activated receptor signalling by KLK4 in vitro.
Moreover, SFTI-FCQR and paclitaxel synergistically reduced the growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant; as a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor from reaching its full potential. An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen-bond network of the inhibitor and caused a concomitant loss of binding-loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between more frequently formed and more numerous internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed the production of second-generation inhibitors with enhanced binding affinity toward both targets, and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases.
The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal the optimal subsite occupancy that arises from cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining the structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.
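The way positional averaging can mask intermolecular cooperativity is easy to demonstrate with a toy two-subsite model (all residue names and rates below are invented): a PS-SCL-style screen, which averages over the neighbouring position, picks a different "best" sequence than testing each defined sequence individually, which is exactly the discrepancy the SML screens revealed.

```python
# Toy two-subsite "protease": hydrolysis rate is additive except for one
# cooperative pair, where P1="A" with P2="C" is strongly enhanced.
residues = ["A", "B", "C"]
base = {"A": 1.0, "B": 5.0, "C": 1.0}          # invented per-residue rates

def rate(p1, p2):
    coop = 9.0 if (p1, p2) == ("A", "C") else 0.0   # intermolecular cooperativity
    return base[p1] + base[p2] + coop

# PS-SCL-style screen: fix one position, average over the other.
ps_scl_p1 = {r: sum(rate(r, x) for x in residues) / 3 for r in residues}
ps_scl_p2 = {r: sum(rate(x, r) for x in residues) / 3 for r in residues}
ps_scl_pick = (max(ps_scl_p1, key=ps_scl_p1.get),
               max(ps_scl_p2, key=ps_scl_p2.get))   # ("B", "B")

# SML-style screen: test every defined sequence individually.
sml_pick = max(((p1, p2) for p1 in residues for p2 in residues),
               key=lambda s: rate(*s))              # ("A", "C")
```

Here the individually best substrate ("A", "C") has a higher rate than the positionally predicted ("B", "B"), yet the averaging dilutes the cooperative bonus across the ensemble so the PS-SCL screen never surfaces it.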

Resumo:

Existing algebraic analyses of the ZUC cipher indicate that the cipher should be secure against algebraic attacks. In this paper, we present an alternative algebraic analysis method for the ZUC stream cipher, where a combiner is used to represent the nonlinear function and to derive equations representing the cipher. Using this approach, the initial state of ZUC can be recovered from 2^97 observed words of keystream, with a complexity of 2^282 operations. This method is more successful when applied to a modified version of ZUC in which the number of output words per clock is increased. If the cipher outputs 120 bits of keystream per clock, the attack can succeed with 2^19 observed keystream bits and 2^47 operations. Therefore, the security of ZUC against algebraic attack could be significantly reduced if its throughput were to be increased for efficiency.
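The trade-off stated above can be laid out numerically. The complexity figures below are the ones quoted in the abstract; the 496-bit state size follows from ZUC's published design (a sixteen-cell LFSR of 31-bit words):

```python
# ZUC's LFSR holds sixteen 31-bit words, so an algebraic attack must
# recover 496 unknown initial-state bits.
state_bits = 16 * 31

# Figures quoted in the abstract (log2 of data and time requirements).
attacks = {
    "standard ZUC (32-bit word/clock)": {"data": 97, "time": 282},
    "modified ZUC (120 bits/clock)":    {"data": 19, "time": 47},
}

for name, cost in attacks.items():
    print(f"{name}: 2^{cost['data']} keystream units, 2^{cost['time']} ops")

# Each extra output bit per clock contributes another equation in the
# state variables, so fewer observed clocks over-define the system.
eq_ratio = 120 / 32
print(f"equations per clock vs. standard ZUC: {eq_ratio}x")  # 3.75x
```

This is only arithmetic on the reported figures, but it makes the abstract's point concrete: nearly quadrupling the equations harvested per clock collapses both the data and time requirements of the attack.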

Resumo:

The surface amorphous layer of articular cartilage is of primary importance to its load-bearing and lubrication function. This lipid-filled layer is degraded, disrupted or eliminated when cartilage degenerates due to disease. This article further examines the characteristics of this surface overlay using a combination of microscopy and imaging methods to evaluate the hypothesis that the surface of articular cartilage can be repaired by exposing degraded cartilage to aqueous synthetic lipid mixtures. The preliminary results demonstrate that it is possible to create a new surface layer of phospholipids on the surface of cartilage following artificial lipid removal, but such a layer does not possess enough mechanical strength for physiological function when created with either the unsaturated palmitoyloleoyl-phosphatidylcholine or the saturated dipalmitoyl-phosphatidylcholine component of joint lipid composition alone. We conclude that this may be due to low structural cohesivity, inadequate time of exposure, and the mix/content of lipid in the incubation environment.

Resumo:

This study investigated potential palaeoclimate proxies provided by rare earth element (REE) geochemistry in speleothems and in clay mineralogy of cave sediments. Speleothem and sediment samples were collected from a series of cave fill deposits that occurred with rich vertebrate fossil assemblages in and around Mount Etna National Park, Rockhampton (central coastal Queensland). The fossil deposits range from Plio-Pleistocene to Holocene in age (based on uranium/thorium dating) and appear to represent depositional environments ranging from enclosed rainforest to semi-arid grasslands. Therefore, the Mount Etna cave deposits offer the perfect opportunity to test new palaeoclimate tools as they include deposits that span a known significant climate shift on the basis of independent faunal data. The first section of this study investigates the REE distribution of the host limestone to provide baseline geochemistry for subsequent speleothem investigations. The Devonian Mount Etna Beds were found to be more complex than previous literature had documented. The studied limestone massif is overturned, highly recrystallised in parts and consists of numerous allochthonous blocks with different spatial orientations. Despite the complex geologic history of the Mount Etna Beds, Devonian seawater-like REE patterns were recovered in some parts of the limestone and baseline geochemistry was determined for the bulk limestone for comparison with speleothem REE patterns. The second part of the study focused on REE distribution in the karst system and the palaeoclimatic implications of such records. It was found that REEs have a high affinity for calcite surfaces and that REE distributions in speleothems vary between growth bands much more than along growth bands, thus providing a temporal record that may relate to environmental changes. 
The morphology of different speleothems (i.e., stalactites, stalagmites, and flowstones) has little bearing on REE distributions provided they are not contaminated with particulate fines. Thus, the baseline knowledge developed in the study suggested that speleothem types are broadly comparable for assessing palaeoclimatically controlled variations in REE distributions. Speleothems from rainforest and semi-arid phases were compared, and definable differences in REE distribution were found that can be attributed to climate. In particular, during semi-arid phases, total REE concentration decreased, LREE became more depleted, Y/Ho increased, La anomalies were more positive and Ce anomalies were more negative. This may reflect greater soil development during rainforest phases, and hence more organic particles and colloids, which are known to transport REEs, in karst waters. However, on a finer temporal scale (i.e., growth bands) within speleothems from the same climate regime, no difference was seen. It is suggested that this may be due to inadequate time for soil development changes on the time frames represented by differences in growth band density. The third part of the study was a reconnaissance investigation focused on the mineralogy of clay cave sediments, illite/kaolinite ratios in particular, and the potential palaeoclimatic implications of such records. Although the sample distribution was not optimal, the preliminary results suggest that the illite/kaolinite ratio increased during cold and dry intervals, consistent with decreased chemical weathering during those times. The study provides a basic framework for future studies at differing latitudes to further constrain the parameters of the proxy. The identification of such a proxy recorded in cave sediment has broad implications, as clay ratios could potentially provide a basic local climate proxy in the absence of fossil faunas and speleothem material. 
This study suggests that REEs distributed in speleothems may provide information about water throughput and soil formation, thus providing a potential palaeoclimate proxy. It highlights the importance of understanding the host limestone geochemistry and broadens the distribution and potential number of cave field sites, as palaeoclimate information no longer relies solely on the presence of fossil faunas and/or speleothems. However, additional research is required to better understand the temporal scales required for the proxies to be recognised.
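The semiarid-phase indicators mentioned above (Y/Ho ratios and La and Ce anomalies) are conventionally computed from shale-normalised concentrations. The sketch below uses the classic geometric form of the Ce anomaly with approximate PAAS normalising values; the sample concentrations are invented for illustration, and other extrapolation conventions (e.g. from Pr and Nd) exist.

```python
import math

# Invented sample concentrations (ppm), for illustration only.
sample = {"La": 12.0, "Ce": 20.0, "Pr": 2.8, "Y": 9.0, "Ho": 0.30}

# Approximate PAAS (Post-Archaean Australian Shale) values used for
# normalisation; consult a geochemical reference for authoritative numbers.
paas = {"La": 38.2, "Ce": 79.6, "Pr": 8.83}

norm = {el: sample[el] / paas[el] for el in paas}

# Classic geometric Ce anomaly: Ce/Ce* = Ce_N / sqrt(La_N * Pr_N).
# Values < 1 indicate a negative anomaly, as reported for semiarid phases.
ce_anomaly = norm["Ce"] / math.sqrt(norm["La"] * norm["Pr"])

# Y/Ho is taken on raw concentrations; higher ratios track seawater-like,
# colloid-depleted waters.
y_ho = sample["Y"] / sample["Ho"]

print(f"Ce/Ce* = {ce_anomaly:.2f}")
print(f"Y/Ho   = {y_ho:.1f}")
```

Tracking these two quantities between growth bands, rather than for a single bulk sample, is what would expose the between-band variation the study relies on.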