263 results for Melt quenching techniques


Abstract:

It has been known since Rhodes Fairbridge’s first attempt to establish a global pattern of Holocene sea-level change by combining evidence from Western Australia and from sites in the northern hemisphere that the details of sea-level history since the Last Glacial Maximum vary considerably across the globe. The Australian region is relatively stable tectonically and is situated in the ‘far-field’ of former ice sheets. It therefore preserves important records of post-glacial sea levels that are less complicated by neotectonics or glacio-isostatic adjustments. Accordingly, the relative sea-level record of this region is dominantly one of glacio-eustatic (ice equivalent) sea-level changes. The broader Australasian region has provided critical information on the nature of post-glacial sea level, including the termination of the Last Glacial Maximum, when sea level was approximately 125 m lower than present around 21,000–19,000 years BP, and insights into meltwater pulse 1A between 14,600 and 14,300 cal. yr BP. Although most parts of the Australian continent reveal a high degree of tectonic stability, research conducted since the 1970s has shown that the timing and elevation of a Holocene highstand vary systematically around its margin. This is attributed primarily to variations in the timing of the response of the ocean basins and shallow continental shelves to the increased ocean volumes following ice-melt, including a process known as ocean siphoning (i.e. glacio-hydro-isostatic adjustment processes). Several seminal studies in the early 1980s produced important data sets from the Australasian region that have provided a solid foundation for more recent palaeo-sea-level research. This review revisits these key studies, emphasising their continuing influence on Quaternary research, and incorporates relatively recent investigations to interpret the nature of post-glacial sea-level change around Australia. These include a synthesis of research from the Northern Territory, Queensland, New South Wales, South Australia and Western Australia. A focus of these more recent studies has been the re-examination of: (1) the accuracy and reliability of different proxy sea-level indicators; (2) the rate and nature of post-glacial sea-level rise; (3) the evidence for the timing, elevation and duration of mid-Holocene highstands; and (4) the notion of mid- to late Holocene sea-level oscillations and their basis. Based on this synthesis of previous research, it is clear that estimates of past sea-surface elevation are a function of eustatic factors as well as the morphodynamics of individual sites, the wide variety of proxy sea-level indicators used, their wide geographical range and their indicative meaning. Some progress has been made in understanding the variability of the accuracy of proxy indicators in relation to their contemporary sea level, the inter-comparison of the variety of dating techniques used and the nuances of calibrating radiocarbon ages to sidereal years. These issues need to be thoroughly understood before proxy sea-level indicators can be incorporated into credible reconstructions of relative sea-level change at individual locations. Many of the issues that challenged sea-level researchers in the latter part of the twentieth century remain contentious today.
Divergent opinions remain about: (1) exactly when sea level attained present levels following the most recent post-glacial marine transgression (PMT); (2) the elevation that sea level reached during the Holocene sea-level highstand; (3) whether sea level fell smoothly from a metre or more above its present level following the PMT; (4) whether sea level remained at these highstand levels for a considerable period before falling to its present position; or (5) whether it underwent a series of moderate oscillations during the Holocene highstand.

Abstract:

Availability has become a primary goal of information security and is as significant as other goals, in particular confidentiality and integrity. Maintaining availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one and improves the DoS resilience of the Secure Sockets Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions relate to capturing a specific property that enables secure outsourcing of cryptographic tasks, namely partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs) and hybrid encryption schemes, encompassing public-key, identity-based and tag-based encryption flavours. We show that some generic transformations and concrete constructions enjoy this property, and then present a new public-key encryption (PKE) scheme having this property and a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
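The thesis's own puzzle construction is not reproduced in this abstract. As general background, the sketch below illustrates a classic Rivest-Shamir-Wagner style modular-exponentiation (time-lock) puzzle, in which the issuer's trapdoor (the factorisation of N) makes puzzle creation and checking cheap while the solver must perform t sequential squarings. It is only an illustration of the idea, not the scheme proposed in the thesis, and it assumes sympy for prime generation.

```python
# Illustrative modular-exponentiation (time-lock) puzzle in the style of
# Rivest, Shamir and Wagner. NOT the thesis's scheme; toy parameters only.
import random
from sympy import randprime  # assumed dependency for prime generation

def setup(bits=128):
    p = randprime(2**(bits - 1), 2**bits)
    q = randprime(2**(bits - 1), 2**bits)
    return p * q, (p - 1) * (q - 1)            # modulus N and the trapdoor phi(N)

def create_puzzle(N, phi, t):
    x = random.randrange(2, N - 1)             # assumes gcd(x, N) = 1 (overwhelmingly likely)
    # With the trapdoor, the expected answer costs only a few modular
    # multiplications: reduce the exponent 2^t modulo phi(N) first.
    answer = pow(x, pow(2, t, phi), N)
    return (x, t), answer

def solve_puzzle(puzzle, N):
    x, t = puzzle
    y = x
    for _ in range(t):                         # without the trapdoor: t sequential squarings
        y = (y * y) % N
    return y

N, phi = setup()
puzzle, expected = create_puzzle(N, phi, t=10_000)
assert solve_puzzle(puzzle, N) == expected     # verification is a single comparison
```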

Abstract:

The feral pig, Sus scrofa, is a widespread and abundant invasive species in Australia. Feral pigs pose a significant threat to the environment, the agricultural industry and human health, and in far north Queensland they endanger the World Heritage values of the Wet Tropics. Historical records document the first introduction of domestic pigs into Australia via European settlers in 1788 and subsequent introductions from Asia from 1827 onwards. Since this time, domestic pigs have been accidentally and deliberately released into the wild, and significant feral pig populations have become established, resulting in the declaration of this species as a class 2 pest in Queensland. The overall objective of this study was to assess the population genetic structure of feral pigs in far north Queensland, in particular to enable delineation of demographically independent management units. The identification of ecologically meaningful management units using molecular techniques can assist in targeting feral pig control to bring about effective long-term management. Molecular genetic analysis was undertaken on 434 feral pigs from 35 localities between Tully and Innisfail. Seven polymorphic and unlinked microsatellite loci were screened, and fixation indices (FST and analogues) and Bayesian clustering methods were used to identify population structure and management units in the study area. The hyper-variable mitochondrial control region (D-loop) of 35 feral pigs was also sequenced to identify pig ancestry. Three management units were identified in the study at a scale of 25 to 35 km. Even with the strong pattern of genetic structure identified in the study area, some evidence of long-distance dispersal and/or translocation was found, as a small number of individuals exhibited ancestry from a management unit other than the one in which they were sampled. Overall, gene flow in the study area was found to be influenced by environmental features such as topography and land use, but no distinct or obvious natural or anthropogenic geographic barriers were identified. Furthermore, strong evidence was found for non-random mating between pigs of European and Asian breeds, indicating that feral pig ancestry influences their population genetic structure. Phylogenetic analysis revealed two distinct mitochondrial DNA clades, representing Asian domestic pig breeds and European breeds. A significant finding was that pigs of Asian origin living in Innisfail and south Tully were not mating randomly with European breed pigs populating the nearby Mission Beach area. Feral pig control should be implemented in each of the management units identified in this study. The control should be coordinated across properties within each management unit to prevent re-colonisation from adjacent localities. The adjacent rainforest and National Park Estates, as well as the rainforest-crop boundary, should be included in a simultaneous control operation for greater success.
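As a companion to the fixation indices mentioned above, the toy calculation below shows Wright's FST for a single biallelic locus. The study itself uses multi-locus microsatellite estimators and Bayesian clustering, so this is only a sketch of the underlying idea, with hypothetical allele frequencies.

```python
# Minimal sketch of Wright's F_ST from allele frequencies at one biallelic
# locus, to illustrate the fixation-index idea; not the study's estimator.
import numpy as np

def fst_biallelic(p_sub):
    """p_sub: allele-A frequency in each subpopulation (equal sizes assumed)."""
    p_sub = np.asarray(p_sub, dtype=float)
    h_s = np.mean(2 * p_sub * (1 - p_sub))   # mean within-subpopulation heterozygosity
    p_bar = p_sub.mean()                     # pooled allele frequency
    h_t = 2 * p_bar * (1 - p_bar)            # total expected heterozygosity
    return (h_t - h_s) / h_t

# Three hypothetical management units with diverged allele frequencies
print(fst_biallelic([0.9, 0.5, 0.2]))        # larger value -> stronger structure
```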

Abstract:

Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to an MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained either via a DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are spatial-resolution independent and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing was made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
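The gamma evaluation used in the dose-comparison step can be illustrated with a brute-force one-dimensional version of the standard gamma-index formulation. The sketch below is a simplified stand-in for the resolution-independent, interpolating implementation described above; the dose profiles and criteria are hypothetical.

```python
# Minimal brute-force 1D gamma evaluation for comparing a TPS dose profile
# against a Monte Carlo profile. Illustrative only: no interpolation is done,
# unlike the tools described in the abstract.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_percent=3.0):
    dd_abs = dd_percent / 100.0 * d_ref.max()    # dose criterion in absolute dose
    gammas = np.empty_like(d_eval)
    for i, (xe, de) in enumerate(zip(x_eval, d_eval)):
        dist2 = ((x_ref - xe) / dta_mm) ** 2     # spatial term for every reference point
        dose2 = ((d_ref - de) / dd_abs) ** 2     # dose-difference term
        gammas[i] = np.sqrt(dist2 + dose2).min() # gamma = minimum generalised distance
    return gammas                                # points pass where gamma <= 1

# Hypothetical profiles (positions in mm, relative dose)
x = np.linspace(0, 100, 201)
tps = np.exp(-((x - 50) / 20) ** 2)
mc = 1.02 * np.exp(-((x - 50.5) / 20) ** 2)
print((gamma_1d(x, tps, x, mc) <= 1).mean())     # fraction of points passing
```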

Abstract:

The increased adoption of business process management approaches, tools and practices has led organizations to accumulate large collections of business process models. These collections can easily include hundreds to thousands of models, especially in the context of multinational corporations or as a result of organizational mergers and acquisitions. A concrete problem is thus how to maintain these large repositories in such a way that their complexity does not hamper their practical usefulness as a means to describe and communicate business operations. This paper proposes a technique to automatically infer suitable names for business process models and fragments thereof. This technique is useful for model abstraction scenarios, for instance when user-specific views of a repository are required, or as part of a refactoring initiative aimed at simplifying the repository’s complexity. The technique is grounded in an adaptation of the theory of meaning to the realm of business process models. We implemented the technique in a prototype tool and conducted an extensive evaluation using three process model collections from practice and a case study involving process modelers with different experience.

Abstract:

Genomic DNA obtained from patient whole blood samples is a key element for genomic research. The advantages and disadvantages, in terms of time-efficiency, cost-effectiveness and laboratory requirements, of the procedures available to isolate nucleic acids need to be considered before choosing any particular method. These characteristics have not been fully evaluated for some laboratory techniques, such as the salting out method for DNA extraction, which has been excluded from comparison in the studies published to date. We compared three different protocols (a traditional salting out method, a modified salting out method and a commercially available kit method) to determine the most cost-effective and time-efficient method to extract DNA. We extracted genomic DNA from whole blood samples obtained from breast cancer patient volunteers and compared the products obtained in terms of quantity (concentration of DNA extracted and DNA obtained per ml of blood used) and quality (260/280 ratio and polymerase chain reaction product amplification). The three methods showed no statistically significant differences in the final yield, but when the time and cost of each method were taken into account, the differences were substantial. The modified salting out method resulted in a seven- and twofold reduction in cost compared to the commercial kit and the traditional salting out method, respectively, and reduced the time from 3 days to 1 hour compared to the traditional salting out method. This highlights the modified salting out method as a suitable choice for laboratories and research centres, particularly when dealing with a large number of samples.
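For reference, the quantity and quality measures compared above (yield per ml of blood and the 260/280 ratio) amount to simple spectrophotometric arithmetic. The sketch below uses the standard 50 ng/uL-per-absorbance-unit conversion for double-stranded DNA; all readings and volumes are hypothetical, not values from the study.

```python
# Sketch of the quantity/quality arithmetic behind the comparison above:
# DNA concentration from A260 and the A260/A280 purity ratio. Hypothetical data.

def dna_concentration_ng_per_ul(a260, dilution_factor=1.0):
    return a260 * 50.0 * dilution_factor        # standard dsDNA conversion factor

def purity_ratio(a260, a280):
    return a260 / a280                          # ~1.8 indicates protein-free DNA

a260, a280 = 0.75, 0.41                         # hypothetical spectrophotometer readings
conc = dna_concentration_ng_per_ul(a260, dilution_factor=10)
elution_volume_ul, blood_volume_ml = 100, 3.0   # hypothetical extraction volumes
total_ng = conc * elution_volume_ul
print(conc, purity_ratio(a260, a280), total_ng / blood_volume_ml)  # ng DNA per ml blood
```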

Abstract:

Multiple sclerosis (MS) is a common cause of neurological disability in young adults. The disease generally manifests in early to middle adulthood and causes various neurological deficits. Autoreactive T lymphocytes and their associated antigens have long been presumed important features of MS pathogenesis. The protein tyrosine phosphatase receptor type C gene (PTPRC) encodes CD45, a receptor-type tyrosine phosphatase expressed on T cells. Variations within PTPRC have been previously associated with diseases of autoimmune origin such as type 1 diabetes mellitus and Graves' disease. We set out to investigate two variants within the PTPRC gene, C77G and C772T, in subjects with MS and matched healthy controls to determine whether significant differences exist in these markers in an Australian population. We employed high resolution melt analysis (HRM) and restriction fragment length polymorphism (RFLP) techniques to determine genotypic and allelic frequencies. Our study found no significant difference between frequencies for PTPRC C77G by either genotype (χ2 = 0.65, P = 0.72) or allele (χ2 = 0.48, P = 0.49). Similarly, we did not find evidence to suggest an association for PTPRC C772T by genotype (χ2 = 1.06, P = 0.59) or allele (χ2 = 0.20, P = 0.66). Linkage disequilibrium (LD) analysis showed strong linkage disequilibrium between the two tested markers (D' = 0.9970, SD = 0.0385). This study reveals no evidence to suggest that these markers are associated with MS in the tested Australian Caucasian population. Although the PTPRC gene has a significant role in regulating CD4+ and CD8+ autoreactive T-cells, interferon-beta responsiveness and potentially other important processes, our study does not support a role for the two tested variants of this gene in MS susceptibility in the Australian population.
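The case-control comparison reported above is a standard chi-square test on genotype and allele counts. The sketch below shows that calculation with scipy on hypothetical counts; the actual counts are not given in the abstract.

```python
# Sketch of a genotype/allele case-control chi-square comparison of the kind
# reported above, using entirely hypothetical counts.
from scipy.stats import chi2_contingency

# Hypothetical C77G genotype counts: rows = MS cases / controls,
# columns = CC / CG / GG genotypes.
genotype_table = [[180, 18, 2],
                  [185, 14, 1]]
chi2, p, dof, _ = chi2_contingency(genotype_table)
print(f"genotype: chi2={chi2:.2f}, p={p:.2f}, dof={dof}")

# Allele counts derived from the same hypothetical individuals: C vs G alleles.
allele_table = [[378, 22],
                [384, 16]]
chi2, p, dof, _ = chi2_contingency(allele_table)
print(f"allele: chi2={chi2:.2f}, p={p:.2f}, dof={dof}")
```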

Abstract:

Results of an interlaboratory comparison on the size characterization of SiO2 airborne nanoparticles using on-line and off-line measurement techniques are discussed. This study was performed in the framework of Technical Working Area (TWA) 34, “Properties of Nanoparticle Populations”, of the Versailles Project on Advanced Materials and Standards (VAMAS), in project no. 3, “Techniques for characterizing size distribution of airborne nanoparticles”. Two types of nano-aerosols, consisting of (1) one population of nanoparticles with a mean diameter between 30.3 and 39.0 nm and (2) two populations of non-agglomerated nanoparticles with mean diameters between, respectively, 36.2–46.6 nm and 80.2–89.8 nm, were generated for the characterization measurements. Scanning mobility particle size spectrometers (SMPS) were used for on-line measurements of the size distributions of the produced nano-aerosols. Transmission electron microscopy, scanning electron microscopy and atomic force microscopy were used as off-line measurement techniques for nanoparticle characterization. Samples were deposited on appropriate supports such as grids, filters and mica plates by electrostatic precipitation and by a filtration technique with SMPS-controlled generation upstream. The results for the main size distribution parameters (mean and mode diameters), obtained from several laboratories, were compared based on metrological approaches including metrological traceability, calibration and evaluation of the measurement uncertainty. Internationally harmonized measurement procedures for the characterization of airborne SiO2 nanoparticles are proposed.
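For illustration, the two headline parameters compared across laboratories (mean and mode diameters) can be extracted from binned SMPS counts as shown below; the bin midpoints and counts are hypothetical stand-ins, not data from the study.

```python
# Sketch of extracting mean and mode diameters from binned SMPS data.
import numpy as np

diameters_nm = np.array([20, 25, 32, 40, 50, 63, 80, 100], dtype=float)  # bin midpoints
counts = np.array([120, 540, 1650, 2100, 1400, 600, 180, 40], dtype=float)

mean_d = np.average(diameters_nm, weights=counts)   # count-weighted mean diameter
mode_d = diameters_nm[np.argmax(counts)]            # diameter of the most populated bin
print(f"mean = {mean_d:.1f} nm, mode = {mode_d:.1f} nm")
```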

Abstract:

A significant amount of speech is typically required for speaker verification system development and evaluation, especially in the presence of large intersession variability. This paper introduces source and utterance duration normalized linear discriminant analysis (SUN-LDA) approaches to compensate for session variability in short-utterance i-vector speaker verification systems. Two variations of SUN-LDA are proposed in which normalization techniques are used to capture source variation from both short and full-length development i-vectors, one based upon pooling (SUN-LDA-pooled) and the other on concatenation (SUN-LDA-concat) across the duration- and source-dependent session variation. Both the SUN-LDA-pooled and SUN-LDA-concat techniques are shown to provide improvement over traditional LDA on the NIST 08 truncated 10sec-10sec evaluation conditions, with the highest improvement obtained with the SUN-LDA-concat technique, achieving a relative improvement of 8% in EER for mismatched conditions and over 3% for matched conditions over traditional LDA approaches.
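As background for the pooled variant, the sketch below shows ordinary LDA computed on a development set in which short-utterance and full-length i-vectors are pooled before forming the scatter matrices. This is a simplified stand-in for SUN-LDA-pooled, not the paper's exact formulation, and the i-vectors are random placeholders.

```python
# Minimal LDA session compensation on i-vectors where short and full-length
# development i-vectors are pooled before computing the scatter matrices.
import numpy as np

def lda_projection(ivectors, speaker_ids, n_dims):
    mu = ivectors.mean(axis=0)
    d = ivectors.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(speaker_ids):
        Xc = ivectors[speaker_ids == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # within-speaker scatter
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)    # between-speaker scatter
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-eigvals.real)
    return eigvecs.real[:, order[:n_dims]]            # columns = LDA directions

rng = np.random.default_rng(0)
full = rng.normal(size=(200, 100))        # stand-in full-length i-vectors
short = rng.normal(size=(200, 100))       # stand-in truncated-utterance i-vectors
ids = np.tile(np.arange(20), 10)          # 20 speakers, 10 sessions each, per source
pooled = np.vstack([full, short])         # pool across duration/source
pooled_ids = np.concatenate([ids, ids])
W = lda_projection(pooled, pooled_ids, n_dims=19)
projected = pooled @ W                    # session-compensated i-vectors
```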

Abstract:

A novel method was developed for studying the genetic relatedness of Pseudomonas aeruginosa isolates from clinical and environmental sources. This bacterium is ubiquitous in the natural environment and is an important pathogen known to infect cystic fibrosis (CF) patients. The transmission route of strains has not yet been defined; current theories include acquisition from an environmental source or patient-to-patient spread. A highly discriminatory, bioinformatics-based DNA typing method was developed to investigate the relatedness of clinical and environmental isolates. This study found a similarity between environmental strains and several CF clonal strains, and also highlighted the occurrence of environmental P. aeruginosa strains in CF infections.

Abstract:

Polycaprolactone (PCL) is a resorbable polymer used extensively in bone tissue engineering owing to its good structural properties and processability. Strontium-substituted bioactive glass (SrBG) has the ability to promote osteogenesis and may be incorporated into scaffolds intended for bone repair. Here we describe, for the first time, the development of a PCL-SrBG composite scaffold incorporating 10 wt% SrBG particles into the PCL bulk, produced by the technique of melt-electrospinning. We show that we are able to reproducibly manufacture composite scaffolds with an interconnected porous structure and, furthermore, these scaffolds were demonstrated to be non-cytotoxic in vitro. Ions present in the SrBG component were shown to dissolve into the cell culture media and promoted precipitation of a calcium phosphate layer on the scaffold surface, which in turn led to noticeably enhanced alkaline phosphatase activity in MC3T3-E1 cells compared to PCL-only scaffolds. These results suggest that melt-electrospun PCL-SrBG composite scaffolds show potential to become effective bone graft substitutes.

Abstract:

A people-to-people matching system (or match-making system) refers to a system in which users join with the objective of meeting other users with a common need. Some real-world examples of these systems are employer-employee (in job search networks), mentor-student (in university social networks), consumer-to-consumer (in marketplaces) and male-female (in an online dating network). The network underlying these systems consists of two groups of users, and the relationships between users need to be captured for developing an efficient match-making system. Most of the existing studies utilize information either about each of the users in isolation or about their interactions separately, and develop recommender systems using one form of information only. It is imperative to understand the linkages among the users in the network and use them in developing a match-making system. This study utilizes several social network analysis methods, such as graph theory, the small-world phenomenon, centrality analysis and density analysis, to gain insight into the entities and their relationships present in this network. This paper also proposes a new type of graph called an “attributed bipartite graph”. By using these analyses and the proposed type of graph, an efficient hybrid recommender system is developed which generates recommendations for new users and shows improved accuracy over the baseline methods.
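To make the attributed-bipartite-graph idea concrete, the sketch below builds a tiny two-mode graph with node attributes and computes the kinds of structural measures the paper mentions (density and degree centrality) using networkx. Node names, roles and attributes are hypothetical.

```python
# Sketch of an attributed bipartite graph: two user groups as the node sets,
# attributes attached to each user, and simple structural measures.
import networkx as nx
from networkx.algorithms import bipartite

G = nx.Graph()
# One side: e.g. mentors/employers; other side: students/job-seekers.
G.add_nodes_from(["m1", "m2"], bipartite=0, role="mentor", field="data science")
G.add_nodes_from(["s1", "s2", "s3"], bipartite=1, role="student", interest="data science")
# Edges capture observed interactions (messages, matches, applications).
G.add_edges_from([("m1", "s1"), ("m1", "s2"), ("m2", "s3")])

mentors = {n for n, d in G.nodes(data=True) if d["bipartite"] == 0}
print(bipartite.density(G, mentors))             # bipartite density
print(bipartite.degree_centrality(G, mentors))   # degree centrality per node
```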

Abstract:

Y123 samples with varying amounts of added Y211, PtO2 and CeO2 have been melt processed and quenched from temperatures between 960°C and 1100°C. The microstructures of the quenched samples have been characterized using a combination of x-ray diffractometry, optical microscopy, scanning electron microscopy, microprobe analysis, energy-dispersive x-ray spectroscopy and wavelength-dispersive x-ray spectroscopy. The Ba-Cu-O-rich melt undergoes complex changes as a function of temperature and time. A region of stability of BaCuO2 (BC1) and BaCu2O2 (BC2) exists below 1040°C in samples of Y123 + 20 mol% Y211. Ba2Cu3O5 is stabilized by rapid quenching but appears to separate into BC1 and BC2 at lower quenching rates. PtO2 and CeO2 additions affect the distribution and volume fractions of the two Ba-Cu-oxide phases.

Abstract:

Samples of YBa2Cu3O7-y + 20 mol% Y2BaCuO5 have been melt processed and quenched from temperatures ranging from 975 to 1100°C. The microstructures of the samples have been characterized via a combination of x-ray diffractometry, optical microscopy, scanning electron microscopy, energy-dispersive x-ray spectrometry and wavelength-dispersive x-ray spectrometry. BaCuO2 (BC1) and BaCu2O2 (BC2) crystallize from the melt of samples quenched from temperatures between 985 and 1100°C in air. The average yttrium content differs between BC1 and BC2, at 4.3 and 5.1 at.% respectively. Holding times of 20 hours at temperatures at or above 1040°C give rise to a dendritic pattern of BC1 surrounded by BC2. The complex changes in the nature of the melt as a function of temperature and time are likely to play a significant role in the mechanism of melt texturing.

Abstract:

Using cooperative learning in classrooms promotes academic achievement, communication skills, problem-solving, social skills and student motivation. Yet it has been reported that cooperative learning, as a Western educational concept, may be ineffective in Asian cultural contexts. This study investigates the use of scaffolding techniques for cooperative learning in Thai primary mathematics classes. A teacher training program was designed to foster Thai primary school teachers’ implementation of cooperative learning. Two teachers participated in this experimental program for one and a half weeks and then implemented cooperative learning strategies in their mathematics classes for six weeks. The data collected from teacher interviews and classroom observations indicate that the difficulty or failure of implementing cooperative learning in Thai education may not derive directly from cultural differences. Instead, they indicate that Thai culture can be constructively merged with cooperative learning through a teacher training program and the practice of scaffolding techniques.