332 results for decomposition techniques
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any differences between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are independent of spatial resolution and can interpolate between dose grids for comparison. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
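The gamma evaluation mentioned in the dose-comparison step of the abstract above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. The following is a minimal, illustrative 1D sketch of that idea, not the authors' implementation; the 3%/3 mm criteria, grid spacing and dose profiles are placeholder assumptions.

```python
# Illustrative 1D gamma-index sketch (not the abstract's implementation).
# Assumes both dose profiles are sampled on the same uniform grid; the
# 3% / 3 mm criteria are hypothetical defaults.
import numpy as np

def gamma_1d(dose_ref, dose_eval, spacing_mm, dose_crit=0.03, dta_mm=3.0):
    """Return the gamma index at each reference point (global normalisation)."""
    dose_ref = np.asarray(dose_ref, dtype=float)
    dose_eval = np.asarray(dose_eval, dtype=float)
    positions = np.arange(dose_ref.size) * spacing_mm
    norm = dose_crit * dose_ref.max()              # global dose criterion
    gamma = np.empty_like(dose_ref)
    for i, (x_r, d_r) in enumerate(zip(positions, dose_ref)):
        dist2 = ((positions - x_r) / dta_mm) ** 2  # spatial (DTA) term
        diff2 = ((dose_eval - d_r) / norm) ** 2    # dose-difference term
        gamma[i] = np.sqrt(np.min(dist2 + diff2))
    return gamma

# Example: a 2 mm shift between otherwise identical Gaussian profiles.
x = np.arange(0, 100, 2.0)
ref = np.exp(-((x - 50) / 15) ** 2)
ev = np.exp(-((x - 52) / 15) ** 2)
print((gamma_1d(ref, ev, spacing_mm=2.0) <= 1.0).mean())  # gamma pass rate
```

A point passes when its gamma value is at most 1; the pass rate over all points is the usual summary statistic for comparing two dose distributions.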
Abstract:
The increased adoption of business process management approaches, tools and practices has led organizations to accumulate large collections of business process models. These collections can easily include hundreds to thousands of models, especially in the context of multinational corporations or as a result of organizational mergers and acquisitions. A concrete problem is thus how to maintain these large repositories in such a way that their complexity does not hamper their practical usefulness as a means to describe and communicate business operations. This paper proposes a technique to automatically infer suitable names for business process models and fragments thereof. This technique is useful in model abstraction scenarios, for instance when user-specific views of a repository are required, or as part of a refactoring initiative aimed at simplifying the repository’s complexity. The technique is grounded in an adaptation of the theory of meaning to the realm of business process models. We implemented the technique in a prototype tool and conducted an extensive evaluation using three process model collections from practice and a case study involving process modelers with different levels of experience.
Abstract:
Plug-in electric vehicles (PEVs) are increasingly popular in the global trend of energy saving and environmental protection. However, the uncoordinated charging of numerous PEVs can have significant negative impacts on the secure and economic operation of the power system concerned. In this context, a hierarchical decomposition approach is presented to coordinate the charging/discharging behaviors of PEVs. The major objective of the upper-level model is to minimize the total cost of system operation by jointly dispatching generators and electric vehicle aggregators (EVAs). The lower-level model, on the other hand, aims at strictly following the dispatching instructions from the upper-level decision-maker by designing appropriate charging/discharging strategies for each individual PEV in a specified dispatching period. Two highly efficient commercial solvers, AMPL/IPOPT and AMPL/CPLEX, are used to solve the developed hierarchical decomposition model. Finally, a modified IEEE 118-bus test system including 6 EVAs is employed to demonstrate the performance of the developed model and method.
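To make the hierarchical structure concrete, the toy sketch below separates an upper-level decision (how much aggregate EVA power to schedule in each period, here by a simple merit-order rule rather than the paper's AMPL/IPOPT and AMPL/CPLEX models) from a lower-level allocation that follows that instruction for individual PEVs. All numbers, limits and allocation rules are hypothetical placeholders.

```python
import numpy as np

def upper_level(gen_cost, eva_energy_need, p_eva_max):
    """Upper level: place the aggregator's energy need into the cheapest periods."""
    schedule = np.zeros_like(gen_cost, dtype=float)
    remaining = eva_energy_need
    for t in np.argsort(gen_cost):                 # cheapest marginal cost first
        schedule[t] = min(p_eva_max, remaining)
        remaining -= schedule[t]
        if remaining <= 0:
            break
    return schedule                                # instructed EVA power per period

def lower_level(p_instructed, pev_needs, pev_p_max):
    """Lower level: split the instructed aggregate power among individual PEVs."""
    alloc = np.zeros(len(pev_needs))
    for i in np.argsort(pev_needs)[::-1]:          # serve the largest needs first
        alloc[i] = min(pev_p_max[i], pev_needs[i],
                       max(0.0, p_instructed - alloc.sum()))
    return alloc

cost = np.array([40.0, 45.0, 25.0, 20.0])          # $/MWh per dispatching period
agg = upper_level(cost, eva_energy_need=120.0, p_eva_max=80.0)
pev = lower_level(agg[3], pev_needs=np.array([30.0, 50.0, 20.0]),
                  pev_p_max=np.array([40.0, 40.0, 40.0]))
print(agg, pev)
```

In the paper both levels are optimization models; the sketch only illustrates the division of responsibility between the two levels.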
Abstract:
The composition of a series of hydroxycarbonate precursors to copper/zinc oxide methanol synthesis catalysts, prepared under conditions reported as optimum for catalytic activity, has been studied. Techniques employed included thermogravimetry (TG), temperature-programmed decomposition (TPD), X-ray diffraction (XRD), high-resolution transmission electron microscopy (HRTEM), and Raman and FTIR spectroscopies. Evidence was obtained for various structural phases including hydrozincite, copper hydrozincite, aurichalcite, zincian malachite and malachite, the concentrations of which depended upon the exact Cu/Zn ratio used. Significantly, previously reported phases such as gerhardtite and rosasite were not identified when catalysts were synthesized at optimum solution pH and temperature values, and after appropriate aging periods. Calcination of the hydroxycarbonate precursors resulted in the formation of catalysts containing an intimate mixture of copper and zinc oxides. Temperature-programmed reduction (TPR) revealed that a number of discrete copper oxide species were present in the catalyst, the precise concentrations of which were related to the structure of the catalyst precursor. Copper hydrozincite decomposed to give zinc oxide particles decorated by highly dispersed, small copper oxide species. Aurichalcite appeared to result ultimately in the most intimately mixed catalyst structure, whereas zincian malachite decomposed to produce larger copper oxide and zinc oxide grains. The reason for the stabilization of small copper oxide and zinc oxide clusters by aurichalcite was investigated by using carefully selected calcination temperatures. It was concluded that the unique formation of an 'anion-modified' oxide resulting from the initial decomposition stage of aurichalcite was responsible for the 'binding' of copper species to zinc moieties.
Abstract:
The techniques of environmental scanning electron microscopy (ESEM) and Raman microscopy have been used to elucidate, respectively, the morphological changes and the nature of the adsorbed species on silver(I) oxide powder under methanol oxidation conditions. Heating Ag2O in either water vapour or oxygen resulted first in the decomposition of silver(I) oxide to polycrystalline silver at 578 K, followed by sintering of the particles at higher temperature. Raman spectroscopy revealed the presence of subsurface oxygen and hydroxyl species in addition to surface hydroxyl groups after interaction with water vapour. Similar species were identified following exposure to oxygen in an ambient atmosphere. This behaviour indicated that the polycrystalline silver formed from Ag2O decomposition was substantially more reactive than silver produced by electrochemical methods. The interaction of water at elevated temperatures subsequent to heating silver(I) oxide in oxygen resulted in a significantly enhanced concentration of subsurface hydroxyl species. The reaction of methanol with Ag2O at high temperatures was notable in that an inhibition of silver grain growth was observed. Substantial structural modification of the silver(I) oxide material was induced by catalytic etching in a methanol/air mixture. In particular, "pin-hole" formation was observed to occur at temperatures in excess of 773 K, and these "pin-holes" coalesced to form large-scale defects under typical industrial reaction conditions. Raman spectroscopy revealed that the working surface consisted mainly of subsurface oxygen and surface Ag=O species. The relative lack of subsurface hydroxyl species suggested that the desorption of such moieties was the cause of the "pin-hole" formation.
Abstract:
The paper utilises the Juhn, Murphy and Pierce (1991) decomposition to shed light on the pattern of slow male-female wage convergence in Australia over the 1980s. The analysis allows one to distinguish between the role of the wage structure and gender-specific effects. The central question addressed is whether rising wage inequality counteracted the forces of increased female investment in labour market skills, i.e. education and experience. The conclusion is that, in contrast to the US and the UK, Australian women do not appear to have been swimming against a tide of adverse wage structure changes.
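For readers unfamiliar with the technique, one common statement of the Juhn-Murphy-Pierce framework is sketched below; reference-year conventions vary across applications and the paper's exact specification is not reproduced here. Wages are modelled as observed characteristics times their prices plus a standardized residual scaled by residual dispersion, and the change in the gender gap is split into quantity, price, gap and unobserved-price terms.

```latex
% Wage equation and gender gap in year t (Delta denotes the male-female
% difference in means within a year):
%   Y_{it} = X_{it}\beta_t + \sigma_t\theta_{it}, \qquad
%   D_t = \bar{Y}^m_t - \bar{Y}^f_t = \Delta X_t\beta_t + \sigma_t\Delta\theta_t .
% One standard arrangement of the change in the gap between years s and t:
\begin{align*}
D_t - D_s ={}& (\Delta X_t - \Delta X_s)\,\beta_s        && \text{observed quantities (skills)} \\
            &+ \Delta X_t\,(\beta_t - \beta_s)           && \text{observed prices (wage structure)} \\
            &+ (\Delta\theta_t - \Delta\theta_s)\,\sigma_s && \text{gap effect (residual ranks)} \\
            &+ \Delta\theta_t\,(\sigma_t - \sigma_s)      && \text{unobserved prices (residual inequality)}
\end{align*}
```

The second and fourth terms capture the "wage structure" forces referred to in the abstract; the first and third capture gender-specific changes in skills and residual position.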
Abstract:
Genomic DNA obtained from patient whole blood samples is a key element for genomic research. The advantages and disadvantages of the procedures available to isolate nucleic acids, in terms of time efficiency, cost-effectiveness and laboratory requirements, need to be considered before choosing any particular method. These characteristics have not been fully evaluated for some laboratory techniques, such as the salting out method for DNA extraction, which has been excluded from comparison in the studies published to date. We compared three different protocols (a traditional salting out method, a modified salting out method and a commercially available kit method) to determine the most cost-effective and time-efficient method to extract DNA. We extracted genomic DNA from whole blood samples obtained from breast cancer patient volunteers and compared the products obtained in terms of quantity (concentration of DNA extracted and DNA obtained per ml of blood used) and quality (260/280 ratio and polymerase chain reaction product amplification). On average, the three methods showed no statistically significant differences in the final yield, but when the time and cost of each method were taken into account, the differences were very significant. The modified salting out method resulted in a seven- and twofold reduction in cost compared to the commercial kit and the traditional salting out method, respectively, and reduced the time required from 3 days to 1 hour compared to the traditional salting out method. This highlights the modified salting out method as a suitable choice for laboratories and research centres, particularly when dealing with a large number of samples.
Abstract:
Results of an interlaboratory comparison on the size characterization of SiO2 airborne nanoparticles using on-line and off-line measurement techniques are discussed. This study was performed in the framework of Technical Working Area (TWA) 34, "Properties of Nanoparticle Populations", of the Versailles Project on Advanced Materials and Standards (VAMAS), in project no. 3, "Techniques for characterizing size distribution of airborne nanoparticles". Two types of nano-aerosols, consisting of (1) one population of nanoparticles with a mean diameter between 30.3 and 39.0 nm and (2) two populations of non-agglomerated nanoparticles with mean diameters between, respectively, 36.2–46.6 nm and 80.2–89.8 nm, were generated for characterization measurements. Scanning mobility particle size spectrometers (SMPS) were used for on-line measurements of the size distributions of the produced nano-aerosols. Transmission electron microscopy, scanning electron microscopy, and atomic force microscopy were used as off-line measurement techniques for nanoparticle characterization. Samples were deposited on appropriate supports such as grids, filters, and mica plates by electrostatic precipitation and by a filtration technique, using SMPS-controlled generation upstream. The results for the main size distribution parameters (mean and mode diameters), obtained from several laboratories, were compared on the basis of metrological approaches including metrological traceability, calibration, and evaluation of the measurement uncertainty. Internationally harmonized measurement procedures for the characterization of airborne SiO2 nanoparticles are proposed.
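As an illustration of the kind of metrological comparison described above (a sketch only, with placeholder numbers; the study's actual reference values and uncertainty evaluation are not reproduced), laboratory means can be combined into an uncertainty-weighted consensus value and compared through E_n scores:

```python
# Generic interlaboratory-comparison sketch: uncertainty-weighted consensus
# mean and per-laboratory E_n scores (k = 2 expanded uncertainties).
# All diameters and uncertainties below are hypothetical placeholders.
import numpy as np

d_mean = np.array([36.8, 38.1, 35.9, 37.5])   # reported mean diameters (nm)
u = np.array([1.2, 1.5, 1.0, 2.0])            # standard uncertainties (nm)

w = 1.0 / u**2
consensus = np.sum(w * d_mean) / np.sum(w)    # weighted consensus value
u_consensus = np.sqrt(1.0 / np.sum(w))        # its standard uncertainty
en = (d_mean - consensus) / np.sqrt((2 * u)**2 + (2 * u_consensus)**2)
print(consensus, u_consensus, np.abs(en) <= 1.0)  # |E_n| <= 1 indicates agreement
```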
Abstract:
A significant amount of speech is typically required for speaker verification system development and evaluation, especially in the presence of large intersession variability. This paper introduces source- and utterance-duration-normalized linear discriminant analysis (SUN-LDA) approaches to compensate for session variability in short-utterance i-vector speaker verification systems. Two variations of SUN-LDA are proposed in which normalization techniques are used to capture source variation from both short and full-length development i-vectors, one based upon pooling (SUN-LDA-pooled) and the other upon concatenation (SUN-LDA-concat) across the duration- and source-dependent session variation. Both the SUN-LDA-pooled and SUN-LDA-concat techniques are shown to provide improvement over traditional LDA on the NIST 08 truncated 10sec-10sec evaluation conditions, with the highest improvement obtained with the SUN-LDA-concat technique: a relative improvement in EER of 8% for mismatched conditions and over 3% for matched conditions over traditional LDA approaches.
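As background, the sketch below shows plain LDA estimated from development i-vectors, with a "pooled" usage that simply merges full-length and truncated sets before computing the scatter matrices. This is an assumption-laden illustration of the general mechanism only; the SUN-LDA-pooled and SUN-LDA-concat estimators combine the source- and duration-dependent variation in the more specific ways described in the paper.

```python
# Minimal LDA on i-vectors; the "pooled" usage merges two development sets.
import numpy as np

def lda_transform(ivectors, speaker_ids, n_dims):
    """Return the top n_dims LDA directions (columns) for the given i-vectors."""
    X = np.asarray(ivectors, dtype=float)
    ids = np.asarray(speaker_ids)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                      # within-speaker scatter
    Sb = np.zeros((d, d))                      # between-speaker scatter
    for spk in np.unique(ids):
        Xs = X[ids == spk]
        mu_s = Xs.mean(axis=0)
        Sw += (Xs - mu_s).T @ (Xs - mu_s)
        Sb += len(Xs) * np.outer(mu_s - mu, mu_s - mu)
    # Generalised eigenproblem Sb v = lambda Sw v (regularised for stability).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:n_dims]].real

# "Pooled" usage: merge full-length and truncated development i-vectors.
rng = np.random.default_rng(0)
full = rng.normal(size=(200, 50))              # placeholder development data
short = rng.normal(size=(200, 50))
spk = np.repeat(np.arange(20), 10)             # 20 speakers, 10 sessions each
V = lda_transform(np.vstack([full, short]), np.concatenate([spk, spk]), n_dims=10)
projected = full @ V                           # session-compensated i-vectors
```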
Abstract:
A people-to-people matching system (or match-making system) refers to a system in which users join with the objective of meeting other users with a common need. Some real-world examples of these systems are employer-employee (in job search networks), mentor-student (in university social networks), consumer-to-consumer (in marketplaces) and male-female (in an online dating network). The network underlying these systems consists of two groups of users, and the relationships between users need to be captured to develop an efficient match-making system. Most existing studies utilize information about either each of the users in isolation or their interactions separately, and develop recommender systems using only one form of information. It is imperative to understand the linkages among the users in the network and to use them in developing a match-making system. This study utilizes several social network analysis methods, such as graph theory, the small-world phenomenon, centrality analysis and density analysis, to gain insight into the entities and their relationships present in this network. This paper also proposes a new type of graph called the “attributed bipartite graph”. By using these analyses and the proposed type of graph, an efficient hybrid recommender system is developed which generates recommendations for new users and shows improvement in accuracy over the baseline methods.
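A minimal illustration of the attributed-bipartite-graph idea is sketched below using networkx; the job-search framing, node attributes and edge weights are hypothetical, and the paper's formal definition and hybrid recommender are not reproduced.

```python
# Toy "attributed bipartite graph": two node sets with attribute dictionaries
# and weighted edges for observed interactions.
import networkx as nx
from networkx.algorithms import bipartite

G = nx.Graph()
# One side: job seekers with profile attributes (hypothetical fields).
G.add_node("seeker_1", bipartite=0, skills={"python", "sql"}, location="BNE")
G.add_node("seeker_2", bipartite=0, skills={"java"}, location="SYD")
# Other side: employers with requirement attributes.
G.add_node("employer_A", bipartite=1, required={"python"}, location="BNE")
G.add_node("employer_B", bipartite=1, required={"java", "sql"}, location="SYD")
# Edges: recorded interactions (e.g. applications, replies), with counts.
G.add_edge("seeker_1", "employer_A", interactions=3)
G.add_edge("seeker_2", "employer_B", interactions=1)

# Simple centrality view over one of the two node sets.
seekers = {n for n, d in G.nodes(data=True) if d["bipartite"] == 0}
print(bipartite.degree_centrality(G, seekers))
```

Attributes on the nodes support content-based matching for new (cold-start) users, while the edge structure supports the network-analysis measures mentioned in the abstract.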
Abstract:
Objective: To describe the trend of overall mortality and the major causes of death in the Shandong population from 1970 to 2005, and to quantitatively estimate the influential factors. Methods: Trends in overall mortality and major causes of death were described using indicators such as mortality rates and age-adjusted death rates, by comparing three large-scale mortality surveys in Shandong province. A difference decomposing method was applied to estimate the contributions of demographic and non-demographic factors to the change in mortality. Results: The total mortality changed only slightly from the 1970s but increased from the 1990s onwards. However, both the age-adjusted and the age-specific mortality rates decreased significantly. The mortality of Group I diseases, including infectious diseases as well as maternal and perinatal diseases, decreased drastically. By contrast, the mortality of non-communicable chronic diseases (NCDs), including cardiovascular diseases (CVDs) and cancer, and of injuries increased. The maintenance of the recent overall mortality level was caused by the interaction of demographic and non-demographic factors, which worked in opposite directions. Non-demographic factors were responsible for the decrease in Group I diseases and the increase in injuries. With respect to the increase of NCDs as a whole, demographic factors might take the full responsibility, while non-demographic factors acted as an opposing force reducing mortality. Nevertheless, for some leading NCDs such as CVDs and cancer, the increase was mainly due to non-demographic rather than demographic factors. Conclusion: Through the interaction of population ageing and the strengthening of non-demographic effects, overall mortality in Shandong is expected to remain stable or rise slightly in the coming years. Group I diseases in Shandong have been brought effectively under control. Strategies for disease control and prevention should shift their focus to chronic diseases, especially leading NCDs such as CVDs and cancer.
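The "difference decomposing method" referred to above belongs to the same general family as the classical Kitagawa decomposition, which splits a change in a crude rate into an age-structure (demographic) component and an age-specific-rate (non-demographic) component. The sketch below uses hypothetical numbers and is not the study's exact formulation.

```python
# Kitagawa-style decomposition of a crude death rate difference into a
# demographic (age-structure) effect and a non-demographic (rate) effect.
import numpy as np

def kitagawa(c1, m1, c2, m2):
    """c*: age-structure shares (sum to 1); m*: age-specific death rates."""
    structure_effect = np.sum((c2 - c1) * (m1 + m2) / 2)   # demographic factor
    rate_effect = np.sum((m2 - m1) * (c1 + c2) / 2)        # non-demographic factor
    return structure_effect, rate_effect

c1970 = np.array([0.40, 0.45, 0.15])        # young / middle / old population shares
m1970 = np.array([0.002, 0.004, 0.030])     # deaths per person-year by age group
c2005 = np.array([0.25, 0.50, 0.25])        # older population structure
m2005 = np.array([0.001, 0.003, 0.025])     # lower age-specific rates
s, r = kitagawa(c1970, m1970, c2005, m2005)
# The two components sum exactly to the crude-rate difference:
print(s + r, (c2005 * m2005).sum() - (c1970 * m1970).sum())
```

In this toy example the ageing of the population pushes the crude rate up while the falling age-specific rates pull it down, mirroring the offsetting factors described in the abstract.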
Abstract:
Using cooperative learning in classrooms promotes academic achievement, communication skills, problem-solving, social skills and student motivation. Yet it has been reported that cooperative learning, as a Western educational concept, may be ineffective in Asian cultural contexts. This study aims to investigate the utilisation of scaffolding techniques for cooperative learning in Thai primary mathematics classes. A teacher training program was designed to foster Thai primary school teachers’ implementation of cooperative learning. Two teachers participated in this experimental program for one and a half weeks and then implemented cooperative learning strategies in their mathematics classes for six weeks. The data collected from teacher interviews and classroom observations indicate that the difficulty or failure of implementing cooperative learning in Thai education may not derive directly from cultural differences. Instead, the data indicate that Thai culture can be constructively merged with cooperative learning through a teacher training program and the practice of scaffolding techniques.
Abstract:
The Taguchi method is applied for the first time to optimize the synthesis of graphene films by copper-catalyzed decomposition of ethanol. In order to find the most appropriate experimental conditions for the realization of thin, high-grade films, six experiments were suitably designed and performed. The influence of temperature (1000–1070 °C), synthesis duration (1–30 min) and hydrogen flow (0–100 sccm) on the number of graphene layers and the defect density in the graphitic lattice was ranked by monitoring the intensity of the 2D- and D-bands relative to the G-band in the Raman spectra. After critical examination and adjustment of the conditions predicted to give optimal results, a continuous film consisting of 2–4 nearly defect-free graphene layers was obtained.
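For readers unfamiliar with the ranking step, the sketch below shows a generic Taguchi-style analysis: a signal-to-noise ratio is computed for each run, and each factor is ranked by the range ("delta") of its mean S/N across levels. The design matrix, response values and larger-is-better choice are placeholders, not the paper's data.

```python
# Generic Taguchi factor-ranking sketch with placeholder values.
import numpy as np

runs = np.array([              # columns: temperature level, duration level, H2 level
    [0, 0, 0], [0, 1, 1], [1, 0, 1],
    [1, 1, 0], [0, 0, 1], [1, 1, 1],
])
response = np.array([1.8, 2.4, 2.9, 2.1, 2.6, 3.3])   # e.g. a Raman band ratio

sn = 20 * np.log10(response)   # larger-is-better S/N for a single reading
effects = {}
for f, name in enumerate(["temperature", "duration", "hydrogen flow"]):
    level_means = [sn[runs[:, f] == lvl].mean() for lvl in np.unique(runs[:, f])]
    effects[name] = max(level_means) - min(level_means)  # "delta" used for ranking
print(sorted(effects.items(), key=lambda kv: -kv[1]))
```

The factor with the largest delta is judged the most influential, and the level with the best mean S/N for each factor is taken as the predicted optimum to be confirmed experimentally.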
Abstract:
Airport efficiency is important because it has a direct impact on customer safety and satisfaction and therefore on the financial performance and sustainability of airports, airlines, and affiliated service providers. This is especially so in a world characterized by an increasing volume of both domestic and international air travel, price and other forms of competition between rival airports, airport hubs and airlines, and rapid and sometimes unexpected changes in airline routes and carriers. It also reflects expansion in the number of airports handling regional, national, and international traffic and the growth of complementary airport facilities including industrial, commercial, and retail premises. This has fostered a steadily increasing volume of research aimed at modeling and providing best-practice measures and estimates of airport efficiency using mathematical and econometric frontiers. The purpose of this chapter is to review these various methods as they apply to airports throughout the world. Apart from discussing the strengths and weaknesses of the different approaches and their key findings, the chapter also examines the steps researchers face as they move through the modeling process in defining airport inputs and outputs and the purported efficiency drivers. Accordingly, the chapter provides guidance to those conducting empirical research on airport efficiency and serves as an aid for aviation regulators, airport operators and others interpreting airport efficiency research outcomes.
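Data envelopment analysis is one of the mathematical frontier methods commonly applied in this literature; the sketch below solves a minimal input-oriented CCR model by linear programming for a set of hypothetical airports (the inputs, outputs and values are placeholders, not drawn from the chapter).

```python
# Input-oriented CCR DEA (envelopment form) via scipy's linprog.
import numpy as np
from scipy.optimize import linprog

X = np.array([[120, 3.0], [150, 4.0], [100, 2.5], [180, 5.5]])  # inputs: staff, runways
Y = np.array([[8.0], [9.0], [7.5], [9.5]])                      # output: passengers (millions)

def ccr_efficiency(o, X, Y):
    """Efficiency of unit o: min theta s.t. the peer combination uses at most
    theta * inputs of o and produces at least the outputs of o."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta; lambdas free cost
    A_in = np.c_[-X[o], X.T]                       # sum_j lam_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]      # sum_j lam_j y_rj >= y_ro
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

print([round(ccr_efficiency(o, X, Y), 3) for o in range(4)])
```

A score of 1 marks an airport on the estimated frontier; econometric alternatives such as stochastic frontier analysis instead fit a parametric frontier with a random noise term.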
Abstract:
A series of NR composites filled with modified kaolinite (MK), carbon black (CB) and a hybrid filler containing MK and CB were prepared by melt blending. The microstructure, combustion and thermal decomposition behaviors of the NR composites were characterized by TEM, XRD, infrared spectroscopy, cone calorimeter test (CCT) and thermogravimetric analysis (TG). The results show that filler hybridization can improve the dispersibility and shape of the kaolinite sheets in the rubber matrix and change the interfacial bonding between kaolinite particles and rubber molecules. NR-3, filled with 10 phr MK and 40 phr CB, has the lowest heat release rate (HRR), mass loss rate (MLR), total heat release (THR) and smoke production rate (SPR) and the highest char residue among all the NR composites. Therefore, the hybridization of carbon black particles with kaolinite particles can effectively improve the thermal stability and combustion properties of NR composites.