995 results for Split-operator Methods
Abstract:
An investigation was undertaken to test the effectiveness of two procedures for recording boundaries and plot positions for scientific studies on farms on Leyte Island, the Philippines. The accuracy of a Garmin 76 Global Positioning System (GPS) unit and of a compass and chain was checked under the same conditions. Tree canopies interfered with the ability of the satellite signal to reach the GPS unit, and therefore the GPS survey was less accurate than the compass-and-chain survey. Where a high degree of accuracy is required, a compass-and-chain survey remains the most effective method of surveying land underneath tree canopies, provided operator error is minimised. For a large number of surveys, and thus large amounts of data, a GPS is more appropriate than a compass-and-chain survey because the data are easily uploaded into a Geographic Information System (GIS). However, under dense canopies where satellite signals cannot reach the GPS unit, it may be necessary to revert to a compass survey or a combination of both methods.
Abstract:
This is the first in a series of three articles whose aim is to derive the matrix elements of the U(2n) generators in a multishell spin-orbit basis. This basis is appropriate to many-electron systems which have a natural partitioning of the orbital space and in which spin-dependent terms are included in the Hamiltonian. The method is based on a new spin-dependent unitary group approach to the many-electron correlation problem due to Gould and Paldus [M. D. Gould and J. Paldus, J. Chem. Phys. 92, 7394 (1990)]. In this approach, the matrix elements of the U(2n) generators in the U(n) x U(2)-adapted electronic Gelfand basis are determined by the matrix elements of a single U(n) adjoint tensor operator called the del-operator, denoted by Δ^i_j (1 ≤ i, j ≤ n). The del-operator is a polynomial of degree two in the U(n) matrix E = [E^i_j]. The approach of Gould and Paldus is based on the transformation properties of the U(2n) generators as an adjoint tensor operator of U(n) x U(2) and on application of the Wigner-Eckart theorem. Hence, to generalize this approach, we need to obtain formulas for the complete set of adjoint coupling coefficients for the two-shell composite Gelfand-Paldus basis. The nonzero shift coefficients are uniquely determined and may be evaluated by the methods of Gould et al. [see the above reference]. In this article, we define zero-shift adjoint coupling coefficients for the two-shell composite Gelfand-Paldus basis which are appropriate to the many-electron problem. By definition, these are proportional to the corresponding two-shell del-operator matrix elements, and it is shown that the Racah factorization lemma applies. Formulas for these coefficients are then obtained by application of the Racah factorization lemma. The zero-shift adjoint reduced Wigner coefficients required for this procedure are evaluated first. All these coefficients are needed later for the multishell case, which leads directly to the two-shell del-operator matrix elements. Finally, we discuss an application to charge and spin densities in a two-shell molecular system. © 1998 John Wiley & Sons.
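Schematically, the Wigner-Eckart factorization on which this approach rests takes the generic form (standard group-theoretic notation, simplified relative to the paper's two-shell basis):

$$
\langle m' \,|\, \Delta^i_j \,|\, m \rangle
= \langle m';\, \mathrm{adj}\,(i,j) \,|\, m \rangle \,
\langle m' \,\|\, \Delta \,\|\, m \rangle ,
$$

where the first factor is an adjoint coupling (Wigner) coefficient carrying all the dependence on the component indices i and j, and the second is a reduced matrix element common to all components of the tensor operator.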
Abstract:
Objectives: Severe glottic/subglottic stenosis (complex laryngotracheal stenosis) is a rare but challenging complication of endotracheal intubation. Laryngotracheal reconstruction with cartilage graft and an intralaryngeal stent is a procedure described for complex laryngotracheal stenosis management in children; for adults, however, few options remain. Our aim was to analyze the results of laryngotracheal reconstruction as a treatment for complex laryngotracheal stenosis in adults, considering postoperative and long-term outcomes. Methods: Laryngotracheal reconstruction (laryngeal split with anterior and posterior interposition of a rib cartilage graft) has been used in our institution to manage glottic/subglottic stenosis restricted to the larynx; laryngotracheal reconstruction associated with cricotracheal resection has been used to treat glottic/subglottic/upper tracheal stenosis (extending beyond the second tracheal ring). A retrospective study was conducted, including all patients with complex laryngotracheal stenosis treated surgically in our institution from January 2002 until December 2005. Results: Twenty patients (10 male and 10 female; average age, 36.13 years; age range, 18-54 years) were included. There were no deaths, and the postoperative complications were as follows: dysphonia, 25%; subcutaneous emphysema, 10%; tracheocutaneous fistula, 20%; wound infection, 15%; and bleeding, 5.0%. Eighty percent of the patients were completely decannulated after a mean of 23.4 months of follow-up (range, 4-55 months). Conclusions: Laryngeal split with anterior and posterior cartilage graft interposition, as an isolated procedure or associated with cricotracheal resection, is a feasible and low-morbidity alternative for the treatment of complex laryngotracheal stenosis.
Abstract:
The purpose of this in vitro study was to evaluate alterations in the surface roughness and micromorphology of human enamel submitted to three prophylaxis methods. Sixty-nine caries-free molars with exposed labial surfaces were divided into three groups. Group I was treated with a rotary instrument set at low speed, a rubber cup and a mixture of water and pumice; group II with a rotary instrument set at low speed, a rubber cup and the prophylaxis paste Herjos-F (Vigodent S/A Industria e Comercio, Rio de Janeiro, Brazil); and group III with the sodium bicarbonate spray Profi II Ceramic (Dabi Atlante Indústrias Médico Odontológicas Ltda, Ribeirão Preto, Brazil). All procedures were performed by the same operator for 10 s, and samples were rinsed and stored in distilled water. Pre- and post-treatment surface evaluation was completed using a surface profilometer (Perthometer S8P, Mahr Perthen, Germany) on 54 samples. The remaining samples were coated with gold and examined in a scanning electron microscope (SEM). The results of this study were statistically analyzed with the paired Student's t-test, the Kruskal-Wallis test and Dunn's test (5%). The sodium bicarbonate spray led to significantly rougher surfaces than the pumice paste. The prophylaxis paste showed no statistically significant difference when compared with the other methods. On SEM analysis, the sodium bicarbonate spray produced an irregular surface with granular material and erosions. Based on this study, it can be concluded that enamel surface roughness increased when teeth were treated with the sodium bicarbonate spray compared with teeth treated with the pumice paste.
Abstract:
Purpose: This prospective clinical trial compared the retention rate and caries-preventive efficacy of two types of sealant modalities over a 3-year period. Materials and Methods: Using a split-mouth randomised design, 1280 sealants were randomly applied to sound permanent second molars of 320 young patients aged between 12 and 16 years. Half of the teeth (n = 640) were sealed with a resin-modified glass ionomer cement (RMGIC) (Vitremer™, 3M ESPE) and the other half (n = 640) with a conventional light-cured, resin-based fissure sealant (LCRB) (Fluoroshield®, Dentsply Caulk). Teeth were evaluated at baseline and at 6-, 12-, 18-, 24-, 30- and 36-month intervals with regard to retention and new caries development. Results: After 3 years, 5.10% of RMGIC and 91.08% of LCRB sealants were totally intact on the sealed occlusal surfaces, and 6.37% of RMGIC and 7.65% of LCRB sealants were partially intact. New caries lesions were found in 20.06% of RMGIC-sealed occlusal surfaces, compared to 8.91% for LCRB sealants. Conclusions: The findings of the present clinical study suggest that RMGIC should be used only as a transitional sealant applied to newly erupting teeth throughout the eruptive process, whereas LCRB sealants successfully prevent occlusal caries lesions once effective rubber dam isolation can be achieved. It can be concluded that there are differences between RMGIC and LCRB sealants over a 3-year period in terms of retention rate and caries-preventive efficacy. RMGIC can serve as a simple and economical, albeit provisional, sealing solution. Due to its poor retention rate, periodic recalls are necessary, even after 6 months, to replace lost sealant where needed.
Abstract:
In plant breeding programs that aim to obtain cultivars with nitrogen (N) use efficiency, the focus is on methods of selection and experimental procedures that present low cost, fast response and high repeatability, and that can be applied to a large number of cultivars. Thus, the objectives of this study were to classify maize cultivars regarding their N use efficiency and response to N in a breeding program, and to validate the methodology with contrasting doses of the nutrient. The experimental design was a randomized block with the treatments arranged in a split-plot scheme with three replicates, with five N doses (0, 30, 60, 120 and 200 kg ha⁻¹) in the plots and six cultivars in the subplots. We compared a method examining efficiency and response (ER) based on two contrasting doses of N; analysis of variance, mean comparison and regression analysis were then performed. In conclusion, the efficiency-and-response method based on two N levels classifies the cultivars in the same way as the regression analysis, and it is appropriate for the plant breeding routine. It is therefore necessary to identify the levels of N required to discriminate maize cultivars under conditions of low and high N availability in plant breeding programs that aim to obtain efficient and responsive cultivars. Moreover, analysis of the genotype × environment interaction in experiments with contrasting doses is always required, even when the interaction is not significant.
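As a hedged illustration of how such a two-dose efficiency/response classification might be computed, the sketch below labels each cultivar by comparing it to the trial means at the two contrasting doses; the classification rule, dose values, yields and function name are hypothetical, not taken from the study:

```python
import numpy as np

def classify_er(yield_low_n, yield_high_n):
    """Hypothetical two-dose ER classification: a cultivar is 'efficient'
    if its yield under low N exceeds the trial mean at low N, and
    'responsive' if its gain from low to high N exceeds the mean gain."""
    yield_low_n = np.asarray(yield_low_n, dtype=float)
    yield_high_n = np.asarray(yield_high_n, dtype=float)
    gain = yield_high_n - yield_low_n
    efficient = yield_low_n > yield_low_n.mean()
    responsive = gain > gain.mean()
    return [("efficient" if e else "inefficient") + " / " +
            ("responsive" if r else "non-responsive")
            for e, r in zip(efficient, responsive)]

# Hypothetical grain yields (kg/ha) of six cultivars at the two contrasting doses
print(classify_er([4500, 5200, 3900, 6100, 4800, 5500],
                  [7000, 7400, 5200, 8900, 6300, 8100]))
```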
Abstract:
Soils of tropical regions are highly weathered and in need of conservation management to maintain and improve the quality of their components. The objective of this study was to evaluate the availability of K, the organic matter content and the total carbon stock of an Argisol after vinasse application and manual and mechanized harvesting of burnt and raw sugarcane in western São Paulo. Data collection was done in the 2012/2013 harvest, in a bioenergy company in Presidente Prudente/SP. The research was arranged in a split-plot scheme in a 5 × 5 factorial design, characterized by four management systems (without vinasse application and harvest without burning; with vinasse application and harvest without burning; with vinasse application and harvest after burning; without vinasse application and harvest after burning) plus native forest, and five soil sampling depths (0-10, 10-20, 20-30, 30-40 and 40-50 cm), with four replications. In each treatment, the K content in the soil and accumulated in the remaining dry biomass in the area, and the levels of organic matter, organic carbon and soil carbon stock were determined. The mean values were compared by the Tukey test. Vinasse application associated with harvest without burning increased the K content in soil layers down to 40 cm deep. The managements without vinasse application with manual harvest after burning, and without vinasse application with mechanical harvesting without burning, did not increase the levels of organic matter, organic carbon and total soil organic carbon stock, while vinasse application with harvest after burning and without burning increased these attributes at the 0-10 cm depth.
Abstract:
Electricity market players operating in a liberalized environment require access to an adequate decision support tool, allowing them to consider all the business opportunities and make strategic decisions. Ancillary services represent a good negotiation opportunity that must be considered by market players, so the decision support tool must include ancillary services market simulation. This paper proposes two different methods (Linear Programming and Genetic Algorithm approaches) for ancillary services dispatch. The methodologies are implemented in MASCEM, a multi-agent based electricity market simulator. A test case based on California Independent System Operator (CAISO) data concerning the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services is included in this paper.
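As a minimal sketch of how a Linear Programming formulation of such a dispatch could look (the bid prices, capacities, reserve requirement and single-service setting below are invented for illustration and are not MASCEM's or CAISO's actual data):

```python
from scipy.optimize import linprog

# Hypothetical bids of three agents for one service (e.g. Spinning Reserve):
prices = [12.0, 15.5, 10.0]        # bid price ($/MW) of each agent
capacities = [50.0, 80.0, 30.0]    # offered capacity (MW) of each agent
requirement = 100.0                # MW the system operator must procure

# Minimize total procurement cost:  min sum(price_i * x_i)
# subject to  sum(x_i) >= requirement  and  0 <= x_i <= capacity_i.
result = linprog(
    c=prices,
    A_ub=[[-1.0, -1.0, -1.0]],     # -sum(x) <= -requirement
    b_ub=[-requirement],
    bounds=list(zip([0.0] * 3, capacities)),
    method="highs",
)
print("dispatch (MW):", result.x, "| cost ($):", result.fun)
```

A Genetic Algorithm variant would search the same feasible region with a population of candidate dispatches instead of solving the problem exactly.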
Abstract:
Introduction: Paper and thin-layer chromatography methods are frequently used in classic Nuclear Medicine for the determination of radiochemical purity (RCP) of radiopharmaceutical preparations. An aliquot of the radiopharmaceutical to be tested is spotted at the origin of a chromatographic strip (stationary phase), which in turn is placed in a chromatographic chamber in order to separate and quantify the radiochemical species present in the preparation. There are several methods for the RCP measurement, based on the use of equipment such as dose calibrators, well scintillation counters, radiochromatographic scanners and gamma cameras. The purpose of this study was to compare these quantification methods for the determination of RCP. Material and Methods: 99mTc-Tetrofosmin and 99mTc-HDP were the radiopharmaceuticals chosen to serve as the basis for this study. For the determination of the RCP of 99mTc-Tetrofosmin we used ITLC-SG (2.5 x 10 cm) and 2-butanone (99mTc-Tetrofosmin Rf = 0.55, 99mTcO4- Rf = 1.0, other labeled impurities 99mTc-RH Rf = 0.0). For the determination of the RCP of 99mTc-HDP, Whatman 31ET and acetone were used (99mTc-HDP Rf = 0.0, 99mTcO4- Rf = 1.0, other labeled impurities Rf = 0.0). After the development of the solvent front, the strips were allowed to dry and were then imaged on the gamma camera (256x256 matrix; zoom 2; LEHR parallel-hole collimator; 5-minute image) and on the radiochromatogram scanner. The strips were then cut at Rf 0.8 in the case of 99mTc-Tetrofosmin and at Rf 0.5 in the case of 99mTc-HDP. The resulting pieces were crushed into an assay tube (to minimize the effect of counting geometry) and counted in the dose calibrator and in the well scintillation counter (for 1 minute). The RCP was calculated using the formula: % 99mTc-Complex = [(99mTc-Complex) / (total amount of 99mTc-labeled species)] x 100. Statistical analysis was done using hypothesis tests for the difference between means in independent samples. Results: The gamma camera based method demonstrated higher operator-dependency (especially concerning the drawing of the ROIs), and the measurements obtained using the dose calibrator are very sensitive to the amount of activity spotted on the chromatographic strip, so the use of a minimum activity of 3.7 MBq is essential to minimize quantification errors. The radiochromatographic scanner and the well scintillation counter showed concordant results and demonstrated the highest level of precision. Conclusions: Methods based on radiochromatographic scanners and well scintillation counters proved to be the most accurate and least operator-dependent.
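The RCP formula above translates directly into code; a minimal sketch (the count values are illustrative, not measured data):

```python
def radiochemical_purity(complex_counts, impurity_counts):
    """% RCP = 100 * (99mTc-complex activity) / (total 99mTc-labeled species),
    computed from the counts of the two strip segments after cutting."""
    return 100.0 * complex_counts / (complex_counts + impurity_counts)

# Illustrative counts for the complex and impurity segments of one strip
print(radiochemical_purity(complex_counts=95400, impurity_counts=3100))  # ~96.9%
```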
Abstract:
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images, which means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient GPU implementation, using CUDA, of an unsupervised linear unmixing method, SISAL. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced accesses to memory. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
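For reference, the linear mixing model underlying this formulation is the standard one (generic notation, not taken from the paper):

$$
\mathbf{y}_p = \mathbf{M}\,\boldsymbol{\alpha}_p + \mathbf{n}_p,
\qquad \boldsymbol{\alpha}_p \ge \mathbf{0},
\qquad \mathbf{1}^{\mathsf T}\boldsymbol{\alpha}_p = 1,
$$

where y_p is the observed spectrum at pixel p, the columns of M are the endmember signatures, α_p collects the material fractions, and n_p is noise; SISAL seeks a minimum-volume simplex, spanned by the columns of M, that encloses the data.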
Abstract:
One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture. SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation is performed in a pixel-by-pixel fashion, using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the method's accuracy.
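As a hedged sketch of the variable-splitting-plus-augmented-Lagrangian idea on which SISAL is built, the toy ADMM iteration below solves a much simpler problem (a quadratic plus an l1 term under the splitting x = z); it illustrates the technique only and is not the SISAL solver:

```python
import numpy as np

def admm_toy(b, lam=0.5, rho=1.0, iters=200):
    """Variable splitting + augmented Lagrangian (ADMM):
    minimize 0.5*||x - b||^2 + lam*||z||_1  subject to  x = z."""
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)                       # scaled dual variable
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # smooth (quadratic) subproblem
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # nonsmooth: soft threshold
        u += x - z                             # dual update enforcing x = z
    return z

print(admm_toy(np.array([3.0, -0.2, 1.5])))    # small entries are shrunk to zero
```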
Abstract:
Three different treatments were applied to several specimens of dolomitic and calcitic marble, deliberately stained with rust to mimic real situations (the stone specimens were exposed to the natural environment for about six months in contact with rusted iron). Thirty-six marble specimens, eighteen calcitic and eighteen dolomitic, were characterized before and after treatment and monitored throughout the cleaning tests. The specimens were characterized by SEM-EDS (Scanning Electron Microscopy coupled with Energy Dispersion System), XRD (X-Ray Diffraction), XRF (X-Ray Fluorescence), FTIR (Fourier Transform Infrared Spectroscopy) and color measurements. Microscopic and macroscopic analyses of the stone surface were also carried out, along with short- and long-term capillary absorption tests. A series of test trials was conducted in order to understand which concentrations and contact times best suit this purpose and to confirm what had been reported to date in the literature. We sought to develop new methods of treatment application, moving beyond the usual practice of applying chemical treatments to stone substrates with a cellulose poultice and resorting instead to agar, a gel already used in many other areas but new to this one, which shows great applicability in the field of conservation of stone materials. After application of the best cleaning methodology, the specimens were characterized again in order to understand which treatment was most effective and least harmful, both for the operator and for the stone material. Very briefly, the conclusions were that for very intense staining with deep penetration into the stone, a 3.5% SDT solution buffered with ammonium carbonate to a pH of around 7 and applied with an agar support would be indicated. For rust stains in their initial state, a 5% ammonium citrate solution buffered with ammonium to pH 7 could be applied more than once until satisfactory results appear.
Stabilized Petrov-Galerkin methods for the convection-diffusion-reaction and the Helmholtz equations
Abstract:
We present two new stabilized high-resolution numerical methods for the convection–diffusion–reaction (CDR) and the Helmholtz equations, respectively. The work embarks upon an a priori analysis of some consistency recovery procedures for stabilization methods belonging to the Petrov–Galerkin framework. It was found that the use of some standard practices (e.g. M-matrices theory) for the design of essentially non-oscillatory numerical methods is not feasible when consistency recovery methods are employed. Hence, with respect to convective stabilization, such recovery methods are not preferred. Next, we present the design of a high-resolution Petrov–Galerkin (HRPG) method for the 1D CDR problem. The problem is studied from a fresh point of view, including practical implications for the formulation of the maximum principle, M-matrices theory, monotonicity and total variation diminishing (TVD) finite volume schemes. The current method follows earlier methods that may be viewed as an upwinding plus a discontinuity-capturing operator. Some remarks are made on the extension of the HRPG method to multiple dimensions. Next, we present a new numerical scheme for the Helmholtz equation resulting in quasi-exact solutions. The focus is on approximating the solution to the Helmholtz equation in the interior of the domain using compact stencils. Piecewise linear/bilinear polynomial interpolation is considered on a structured mesh/grid. The only a priori requirement is to provide a mesh/grid resolution of at least eight elements per wavelength. No stabilization parameters are involved in the definition of the scheme. The scheme consists of taking the average of the equation stencils obtained by the standard Galerkin finite element method and the classical finite difference method. Dispersion analysis in 1D and 2D illustrates the quasi-exact properties of this scheme. Finally, some remarks are made on the extension of the scheme to unstructured meshes by designing a method within the Petrov–Galerkin framework.
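In 1D, the averaging described above can be made explicit. For the Helmholtz equation u'' + k²u = 0 on a uniform grid of spacing h, linear Galerkin finite elements and classical central finite differences share the same stiffness stencil and differ only in the mass term (a standard computation, shown here for illustration):

$$
\frac{u_{j-1} - 2u_j + u_{j+1}}{h^2} + k^2\,\frac{u_{j-1} + 4u_j + u_{j+1}}{6} = 0
\quad \text{(Galerkin FEM)},
$$

$$
\frac{u_{j-1} - 2u_j + u_{j+1}}{h^2} + k^2\, u_j = 0
\quad \text{(finite differences)},
$$

so their average produces the compact stencil

$$
\frac{u_{j-1} - 2u_j + u_{j+1}}{h^2} + k^2\,\frac{u_{j-1} + 10\,u_j + u_{j+1}}{12} = 0 .
$$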
Abstract:
Reliable estimates of heavy-truck volumes are important in a number of transportation applications. Estimates of truck volumes are necessary for pavement design and pavement management. Truck volumes are important in traffic safety. The number of trucks on the road also influences roadway capacity and traffic operations. Additionally, heavy vehicles pollute at higher rates than passenger vehicles, so reliable estimates of heavy-truck vehicle miles traveled (VMT) are important in creating accurate inventories of on-road emissions. This research evaluated three different methods of calculating heavy-truck annual average daily traffic (AADT), which can subsequently be used to estimate VMT. Traffic data from continuous count stations provided by the Iowa DOT were used to estimate AADT for two different truck groups (single-unit and multi-unit) using the three methods. The first method developed monthly and daily expansion factors for each truck group. The second and third methods created general expansion factors for all vehicles. The accuracy of the three methods was compared using n-fold cross-validation: the data are split into n partitions, and each partition is held out in turn and predicted from the remaining data. The prediction error was determined by averaging the squared error between the estimated AADT and the actual AADT. Overall, the prediction error was lowest, for both single- and multi-unit trucks, for the method that developed expansion factors separately for the different truck groups. This indicates that the use of expansion factors specific to heavy trucks results in better estimates of AADT, and subsequently VMT, than using aggregate expansion factors and applying a percentage of trucks. Monthly, daily, and weekly traffic patterns were also evaluated. Significant variation exists in the temporal and seasonal patterns of heavy trucks as compared to passenger vehicles, which suggests that aggregate expansion factors fail to adequately describe truck travel patterns.
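A minimal sketch of the n-fold cross-validation error estimate described above (the station volumes and the stand-in "estimator" are invented; the paper's expansion-factor models would take their place):

```python
import numpy as np

def cross_val_error(data, estimate, n_folds=5, seed=0):
    """n-fold cross-validation: each partition is held out once and
    predicted from the remaining data; the prediction error is the
    average squared difference between estimated and actual values."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(data)), n_folds)
    errors = []
    for k in range(n_folds):
        held_out = data[folds[k]]
        rest = data[np.concatenate([f for j, f in enumerate(folds) if j != k])]
        predicted = estimate(rest, held_out)
        errors.append(np.mean((predicted - held_out) ** 2))
    return np.mean(errors)

# Stand-in AADT values; "estimate" each held-out value by the training mean.
aadt = np.array([820.0, 910.0, 760.0, 1010.0, 880.0, 940.0, 790.0, 1000.0])
print(cross_val_error(aadt, lambda rest, held: np.full(len(held), rest.mean())))
```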
Abstract:
The pseudo-spectral time-domain (PSTD) method is an alternative time-marching method to classical leapfrog finite difference schemes in the simulation of wave-like propagating phenomena. It is based on the fundamentals of the Fourier transform to compute the spatial derivatives of hyperbolic differential equations, resulting in an isotropic operator that can be implemented efficiently for room acoustics simulations. However, one of the first issues to be solved is the modeling of wall absorption; unfortunately, there are no references in the technical literature concerning that problem. In this paper, assuming real and constant locally reacting impedances, several proposals to overcome this problem are presented, validated and compared to analytical solutions in different scenarios.
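The core PSTD ingredient, a spatial derivative computed through the Fourier transform, can be sketched in a few lines (a generic 1D periodic example, not the paper's room-acoustics code):

```python
import numpy as np

def spectral_derivative(u, dx):
    """PSTD-style derivative: d/dx becomes multiplication by i*k in
    Fourier space, giving an isotropic, spectrally accurate operator."""
    k = 2.0 * np.pi * np.fft.fftfreq(len(u), d=dx)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# Check against the exact derivative of sin(x) on a periodic grid
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
err = spectral_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)
print(np.max(np.abs(err)))  # ~1e-14, i.e. accurate to machine precision
```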