877 results for Splitting techniques
Abstract:
Splitting techniques are commonly used when large-scale models, which appear in different fields of science and engineering, are treated numerically. Four types of splitting procedures are defined and discussed. The problem of choosing a splitting procedure is investigated. Several numerical tests, by which the influence of the splitting errors on the accuracy of the results is studied, are given. It is shown that the splitting errors decrease linearly with the splitting stepsize when (1) the splitting procedure is of first order and (2) the splitting errors are dominant. Three examples of splitting procedures used in all large-scale air pollution models are presented. Numerical results obtained with a particular air pollution model, the Unified Danish Eulerian Model (UNI-DEM), are given and analysed.
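As a concrete illustration of point (1), the short Python sketch below (not taken from the paper or from UNI-DEM; the operators A and B, the stepsizes and the final time are arbitrary choices) applies first-order Lie splitting to a small linear system du/dt = (A + B)u and prints a splitting error that shrinks roughly in proportion to the stepsize.

    # Lie (first-order) operator splitting for du/dt = (A + B) u, a toy stand-in
    # for the sub-models of a large-scale model.  The exact solution is
    # expm((A + B) * T) @ u0; the split scheme advances with A, then with B,
    # each step, which introduces an O(dt) splitting error.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # an "advection-like" part (illustrative)
    B = np.array([[-0.5, 0.0], [0.0, -0.2]])  # a "decay/chemistry-like" part (illustrative)
    u0 = np.array([1.0, 0.0])
    T = 1.0

    def lie_split(dt):
        u, n_steps = u0.copy(), int(round(T / dt))
        EA, EB = expm(A * dt), expm(B * dt)
        for _ in range(n_steps):
            u = EB @ (EA @ u)                 # sub-step with A, then sub-step with B
        return u

    exact = expm((A + B) * T) @ u0
    for dt in (0.1, 0.05, 0.025):
        err = np.linalg.norm(lie_split(dt) - exact)
        print(f"dt = {dt:5.3f}   splitting error = {err:.3e}")  # error roughly halves with dt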
Abstract:
PURPOSE Extended grafting procedures in atrophic ridges are invasive and time-consuming and increase cost and patient morbidity. Therefore, ridge-splitting techniques have been suggested to enlarge alveolar crests. The aim of this cohort study was to report techniques and radiographic outcomes of implants placed simultaneously with a piezoelectric alveolar ridge-splitting technique (RST). Peri-implant bone-level changes (ΔIBL) of implants placed with (study group, SG) or without RST (control group, CG) were compared. MATERIALS AND METHODS Two cohorts (seven patients in each) were matched regarding implant type, position, and number; superstructure type; age; and gender and received 17 implants each. Crestal implant bone level (IBL) was measured at surgery (T0), loading (T1), and 1 year (T2) and 2 years after loading (T3). For all implants, ΔIBL values were determined from radiographs. Differences in ΔIBL between SG and CG were analyzed statistically (Mann-Whitney U test). Bone width was assessed intraoperatively, and vertical bone mapping was performed at T0, T1, and T3. RESULTS After a mean observation period of 27.4 months after surgery, the implant survival rate was 100%. Mean ΔIBL was -1.68 ± 0.90 mm for SG and -1.04 ± 0.78 mm for CG (P = .022). Increased ΔIBL in SG versus CG occurred mainly until T2. Between T2 and T3, ΔIBL was limited (-0.11 ± 1.20 mm for SG and -0.05 ± 0.16 mm for CG; P = .546). Median bone width increased intraoperatively by 4.7 mm. CONCLUSIONS Within the limitations of this study, it can be suggested that RST is a well-functioning one-stage alternative to extended grafting procedures if the ridge shows adequate height. ΔIBL values indicated that implants with RST may fulfill accepted implant success criteria. However, during healing and the first year of loading, increased IBL alterations must be anticipated.
Abstract:
In this paper we discuss implicit methods based on stiffly accurate Runge-Kutta methods and splitting techniques for solving Stratonovich stochastic differential equations (SDEs). Two splitting techniques, the balanced splitting technique and the deterministic splitting technique, are used in this paper. We construct a two-stage implicit Runge-Kutta method with strong order 1.0 that is corrected twice and requires no update. The stability properties and numerical results show that this approach is suitable for solving stiff SDEs.
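The Python sketch below only illustrates the deterministic-splitting idea on a scalar Stratonovich test equation; it is not the paper's two-stage implicit Runge-Kutta scheme, and the drift rate, noise level and sample sizes are invented for the example. The drift sub-step is taken implicitly (backward Euler), while the linear-noise sub-step uses its exact Stratonovich flow.

    # Deterministic splitting for dX = -lam*X dt + sigma*X o dW (Stratonovich):
    # alternate an implicit drift sub-step with the exact flow of the noise part.
    # Values of lam, sigma, N and the path count are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    lam, sigma = 2.0, 0.5
    X0, T, N = 1.0, 1.0, 200
    dt = T / N

    def split_path():
        X = X0
        for _ in range(N):
            dW = rng.normal(0.0, np.sqrt(dt))
            X = X / (1.0 + lam * dt)      # implicit (backward Euler) drift sub-step,
                                          # stable even when lam is large (stiff case)
            X = X * np.exp(sigma * dW)    # exact Stratonovich flow of the linear noise
        return X

    paths = np.array([split_path() for _ in range(5000)])
    # Stratonovich solution: X(t) = X0*exp(-lam*t + sigma*W_t), hence
    # E[X(T)] = X0*exp((-lam + 0.5*sigma**2) * T).
    print("sample mean:", paths.mean(),
          "  reference:", X0 * np.exp((-lam + 0.5 * sigma**2) * T))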
Abstract:
This paper proposes a method for the automatic extraction of building roof contours from a LiDAR-derived digital surface model (DSM). The method is based on two steps. First, to detect aboveground objects (buildings, trees, etc.), the DSM is segmented through a recursive splitting technique followed by a region merging process. Vectorization and polygonization are used to obtain polyline representations of the detected aboveground objects. Second, building roof contours are identified from among the aboveground objects by optimizing a Markov-random-field-based energy function that embodies roof contour attributes and spatial constraints. Preliminary results have shown that the proposed methodology works properly.
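To make the first step more tangible, the Python sketch below performs a quadtree-style recursive splitting of a toy height raster; the thresholds, block sizes and synthetic "building" are assumptions for illustration and do not reproduce the authors' segmentation, region merging or polygonization.

    # Recursive splitting of a height raster into (approximately) homogeneous
    # quadrants: a block is split while the height spread inside it exceeds a
    # tolerance.  Leaves are returned as (row, col, size) tuples; a region-merging
    # pass would then fuse similar neighbouring leaves into objects.
    import numpy as np

    def recursive_split(dsm, r, c, size, height_tol=0.5, min_size=2):
        block = dsm[r:r + size, c:c + size]
        if size <= min_size or (block.max() - block.min()) <= height_tol:
            return [(r, c, size)]                 # homogeneous leaf block
        half = size // 2
        leaves = []
        for dr in (0, half):
            for dc in (0, half):
                leaves += recursive_split(dsm, r + dr, c + dc, half, height_tol, min_size)
        return leaves

    # Toy DSM: flat terrain with a 6 m high "building" footprint in the middle.
    dsm = np.zeros((64, 64))
    dsm[20:40, 24:44] = 6.0
    leaves = recursive_split(dsm, 0, 0, 64)
    print(len(leaves), "leaf blocks found")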
Abstract:
The conventional implementation of the 3D finite-difference migration method uses inline and crossline splitting to improve the computational efficiency of the algorithm. This approach makes the algorithm computationally efficient but introduces numerical anisotropy, which in turn can mislocate dipping reflectors, especially reflectors with steep dip angles. In this work, in order to avoid numerical anisotropy, we implement the downward wavefield extrapolation operator without inline and crossline splitting in the frequency-space domain, via an implicit finite-difference method using the complex Padé approximation. We compare the performance of the iterative biconjugate gradient stabilized algorithm (Bi-CGSTAB) with the multifrontal massively parallel solver (MUMPS) for solving the linear system arising from the finite-difference migration method. We find that, when the complex Padé expansion is used instead of the real Padé expansion, the iterative Bi-CGSTAB algorithm becomes more computationally efficient; that is, the complex Padé expansion acts as a preconditioner for this iterative algorithm. As a result, the iterative Bi-CGSTAB algorithm is considerably more efficient than MUMPS for solving the linear system when only one term of the complex Padé expansion is used. For wide-angle approximations, direct methods are required. To validate and evaluate the properties of these migration algorithms, we compute their impulse response on the SEG/EAGE salt model.
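The Python sketch below is a loose stand-in for the solver comparison above: it solves a small complex-valued banded system with SciPy's BiCGSTAB and with SciPy's sparse LU factorization standing in for MUMPS (a separate package), and the matrix is a toy complex-shifted Laplacian rather than the actual one-way extrapolation operator.

    # Iterative (BiCGSTAB) versus direct (sparse LU) solution of a complex banded
    # system, as a toy analogue of the Bi-CGSTAB-versus-MUMPS comparison above.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 60                                            # toy 60 x 60 grid
    I = sp.identity(n, format="csr")
    T1 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")
    Lap = sp.kron(I, T1) + sp.kron(T1, I)             # 2-D Laplacian stencil
    A = (-Lap).astype(complex) + (0.5 + 0.2j) * sp.identity(n * n)  # complex shift
    b = np.ones(n * n, dtype=complex)

    x_direct = spla.splu(A.tocsc()).solve(b)          # direct factorization ("MUMPS-like")

    iters = 0
    def count_iteration(xk):                          # count BiCGSTAB iterations
        global iters
        iters += 1

    x_iter, info = spla.bicgstab(A, b, callback=count_iteration)
    print("bicgstab info:", info, " iterations:", iters,
          " max |x_iter - x_direct|:", np.abs(x_iter - x_direct).max())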
Abstract:
Implementations of Fourier finite-difference (FFD) migration methods use directional splitting to accelerate performance and reduce computational cost. However, this technique introduces numerical anisotropy, which can mislocate dipping reflectors along the directions in which the migration operator was not factored. We implement 3D FFD migration without directional splitting, in the frequency domain, using the complex Padé approximation. This approximation eliminates numerical anisotropy at the cost of higher computational effort, since the wavefield solution requires solving a wide-banded linear system. Numerical experiments on both homogeneous and heterogeneous models show that directional splitting produces noticeable reflector-positioning errors in media with strong lateral velocity variations. We compare the solver performance of the FFD algorithm using the iterative biconjugate gradient stabilized method (BICGSTAB) and the multifrontal massively parallel direct solver (MUMPS), showing that the complex Padé approximation is an efficient preconditioner for BICGSTAB, reducing the number of iterations relative to the real Padé approximation. The iterative BICGSTAB method is more efficient than the direct solver MUMPS when only one term of the complex Padé expansion is used. For wider operator aperture angles, more terms of the series are required in the migration operator, and in that case the direct method performs better. The algorithm was validated and its computational behaviour assessed on the impulse response of the SEG/EAGE salt model.
Abstract:
Accurate quantitative estimation of exposure using retrospective data has been one of the most challenging tasks in the exposure assessment field. To improve these estimates, some models have been developed using published exposure databases with their corresponding exposure determinants. These models are designed to be applied to reported exposure determinants obtained from study subjects or to exposure levels assigned by an industrial hygienist, so that quantitative exposure estimates can be obtained. In an effort to improve the prediction accuracy and generalizability of these models, and taking into account that the limitations encountered in previous studies might be due to limitations in the applicability of traditional statistical methods and concepts, the use of computer science-derived data analysis methods, predominantly machine learning approaches, was proposed and explored in this study. The goal of this study was to develop a set of models using decision tree/ensemble and neural network methods to predict occupational exposure outcomes based on literature-derived databases and to compare, using cross-validation and data splitting techniques, the resulting prediction capacity to that of traditional regression models. Two cases were addressed: the categorical case, where the exposure level was measured as an exposure rating following the American Industrial Hygiene Association guidelines, and the continuous case, where the result of the exposure is expressed as a concentration value. Previously developed literature-based exposure databases for 1,1,1-trichloroethane, methylene dichloride, and trichloroethylene were used. When compared to regression estimations, results showed better accuracy of decision tree/ensemble techniques for the categorical case, while neural networks were better for estimation of continuous exposure values. Overrepresentation of classes and overfitting were the main causes of poor neural network performance and accuracy. Estimations based on literature-based databases using machine learning techniques might provide an advantage when they are applied within methodologies that combine 'expert inputs' with current exposure measurements, such as the Bayesian Decision Analysis tool. The use of machine learning techniques to more accurately estimate exposures from literature-based exposure databases might represent a starting point for independence from expert judgment.
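A minimal, self-contained Python sketch of this kind of comparison follows; it uses a synthetic regression dataset instead of the literature-derived exposure databases, and the chosen models and hyperparameters are illustrative assumptions.

    # Compare a linear regression, a tree ensemble and a small neural network on a
    # synthetic "exposure" dataset, using a hold-out split plus 5-fold cross-validation.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor

    X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    models = {
        "linear regression": LinearRegression(),
        "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
        "neural network": MLPRegressor(hidden_layer_sizes=(32,), solver="lbfgs",
                                       max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        cv_r2 = cross_val_score(model, X_train, y_train, cv=5, scoring="r2")
        test_r2 = model.fit(X_train, y_train).score(X_test, y_test)
        print(f"{name:18s} CV R2 = {cv_r2.mean():.2f} +/- {cv_r2.std():.2f}   hold-out R2 = {test_r2:.2f}")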
Abstract:
A new integration scheme is developed for nonequilibrium molecular dynamics simulations where the temperature is constrained by a Gaussian thermostat. The utility of the scheme is demonstrated by its application to the SLLOD algorithm, which is the standard nonequilibrium molecular dynamics algorithm for studying shear flow. Unlike conventional integrators, the new integrators are constructed using operator-splitting techniques to ensure stability and that little or no drift in the kinetic energy occurs. Moreover, they require minimal computer memory and are straightforward to program. Numerical experiments show that the efficiency and stability of the new integrators compare favorably with conventional integrators such as the Runge-Kutta and Gear predictor-corrector methods.
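As a generic illustration of why operator-splitting integrators keep the energy drift small, the Python sketch below compares a Strang (kick-drift-kick) splitting with explicit Euler on a harmonic oscillator; it is not the SLLOD/Gaussian-thermostat integrator of the paper, and the frequency, stepsize and run length are arbitrary choices.

    # Operator-splitting (Strang, kick-drift-kick) versus explicit Euler for a
    # harmonic oscillator: the split integrator uses exactly solvable sub-flows,
    # so its energy error stays bounded while Euler's grows without limit.
    import numpy as np

    def strang_split(q, p, dt, n_steps, omega=1.0):
        for _ in range(n_steps):
            p -= 0.5 * dt * omega**2 * q   # half "kick" (momentum update)
            q += dt * p                    # full "drift" (position update)
            p -= 0.5 * dt * omega**2 * q   # half "kick"
        return q, p

    def explicit_euler(q, p, dt, n_steps, omega=1.0):
        for _ in range(n_steps):
            q, p = q + dt * p, p - dt * omega**2 * q
        return q, p

    energy = lambda q, p: 0.5 * p**2 + 0.5 * q**2   # total energy for omega = 1
    q0, p0, dt, n = 1.0, 0.0, 0.05, 20000
    print("splitting energy drift:", abs(energy(*strang_split(q0, p0, dt, n)) - energy(q0, p0)))
    print("euler     energy drift:", abs(energy(*explicit_euler(q0, p0, dt, n)) - energy(q0, p0)))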
Abstract:
The McMillan map is a one-parameter family of integrable symplectic maps of the plane for which the origin is a hyperbolic fixed point with a homoclinic loop and a small Lyapunov exponent when the parameter is small. We consider a perturbation of the McMillan map for which we show that the loop breaks into two invariant curves that are exponentially close to each other and that intersect transversely along two primary homoclinic orbits. We compute the asymptotic expansion of several quantities related to the splitting, namely the Lazutkin invariant and the area of the lobe between two consecutive primary homoclinic points. Complex matching techniques are at the core of this work. The coefficients involved in the expansion have a resurgent origin, as shown in [MSS08].
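For orientation, the Python sketch below iterates one commonly quoted form of the McMillan family (the paper's exact normalization is an assumption here) and checks numerically that its biquadratic invariant is conserved; it does not compute the splitting quantities discussed above.

    # One standard form of the McMillan map, F(x, y) = (y, -x + 2*mu*y/(1 + y**2)),
    # which preserves I(x, y) = x**2*y**2 + x**2 + y**2 - 2*mu*x*y.  For mu > 1 the
    # origin is a hyperbolic fixed point; mu = 1.2 is an illustrative value.
    mu = 1.2

    def mcmillan(x, y):
        return y, -x + 2.0 * mu * y / (1.0 + y * y)

    def invariant(x, y):
        return x * x * y * y + x * x + y * y - 2.0 * mu * x * y

    x, y = 0.3, 0.1
    I0 = invariant(x, y)
    drift = 0.0
    for _ in range(100_000):
        x, y = mcmillan(x, y)
        drift = max(drift, abs(invariant(x, y) - I0))
    print("max drift of the invariant over 1e5 iterates:", drift)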
Abstract:
Metabolic labeling techniques have recently become popular tools for the quantitative profiling of proteomes. Classical stable isotope labeling with amino acids in cell culture (SILAC) uses pairs of heavy/light isotopic forms of amino acids to introduce predictable mass differences in the protein samples to be compared. After proteolysis, pairs of cognate precursor peptides can be correlated, and their intensities can be used for mass spectrometry-based relative protein quantification. We present an alternative SILAC approach in which two cell cultures are grown in media containing isobaric forms of amino acids, labeled either with 13C on the carbonyl (C-1) carbon or with 15N on the backbone nitrogen. Labeled peptides from both samples have the same nominal mass and nearly identical MS/MS spectra but, upon fragmentation, generate distinct immonium ions separated by 1 amu. When labeled protein samples are mixed, the intensities of these immonium ions can be used for the relative quantification of the parent proteins. We validated the labeling of cellular proteins with valine, isoleucine, and leucine, with coverage of 97% of all tryptic peptides. We improved the sensitivity of detection of the quantification ions on a pulsing instrument by using a specific fast scan event. The analysis of a protein mixture with a known heavy/light ratio showed reliable quantification. Finally, the application of the technique to the analysis of two melanoma cell lines yielded quantitative data consistent with those obtained by a classical two-dimensional DIGE analysis of the same samples. Our method combines the features of the SILAC technique with the advantages of isobaric labeling schemes such as iTRAQ. We discuss the advantages and disadvantages of isobaric SILAC with immonium ion splitting, as well as possible ways to improve it.
Abstract:
The use of intensity-modulated radiotherapy (IMRT) has increased extensively in modern radiotherapy (RT) treatments over the past two decades. Radiation dose distributions can be delivered with higher conformality with IMRT than with conventional 3D conformal radiotherapy (3D-CRT). Higher conformality and target coverage increase the probability of tumour control and decrease normal tissue complications. The primary goal of this work was to improve and evaluate the accuracy, efficiency and delivery techniques of RT treatments using IMRT. This study evaluated the dosimetric limitations and possibilities of IMRT in small volumes (head-and-neck, prostate and lung cancer treatments) and large volumes (primitive neuroectodermal tumours). The dose coverage of target volumes and the sparing of critical organs were increased with IMRT compared to 3D-CRT. The developed split-field IMRT technique was found to be a safe and accurate method for craniospinal irradiation. By using IMRT for simultaneous integrated boosting of biologically defined target volumes in localized prostate cancer, high doses were achievable with only a small increase in treatment complexity. Biological plan optimization increased the probability of uncomplicated control on average by 28% compared to standard IMRT delivery. Unfortunately, IMRT also carries some drawbacks. In IMRT the beam modulation is realized by splitting a large radiation field into small apertures. The smaller the beam apertures, the larger the rebuild-up and rebuild-down effects at tissue interfaces. The limitations of using IMRT with small apertures in the treatment of small lung tumours were investigated with dosimetric film measurements. The results confirmed that the peripheral doses of small lung tumours decreased as the effective field size decreased. The studied calculation algorithms were not able to model the dose deficiency of the tumours accurately. The use of small sliding-window apertures of 2 mm and 4 mm decreased the tumour peripheral dose by 6% compared to the 3D-CRT treatment plan. A direct aperture-based optimization (DABO) technique was examined as a solution to decrease treatment complexity. The DABO IMRT technique was able to achieve treatment plans equivalent to conventional fluence-based IMRT optimization techniques for concave head-and-neck target volumes. With DABO the effective field sizes were increased and the number of MUs was reduced by a factor of two. The optimality of a treatment plan and the therapeutic ratio can be further enhanced by using dose painting based on regional radiosensitivities imaged with functional imaging methods.
Abstract:
Considering the ecological importance of stingless bees as caretakers and pollinators of a variety of native plants, it is necessary to improve techniques that increase the number of colonies in order to preserve these species and the biodiversity associated with them. Thus, our aim was to develop a methodology for the in vitro production of stingless bee queens by offering a large quantity of food to the larvae. Our methodology consisted of determining the amount of larval food needed for the development of the queens, collecting and storing the larval food, and feeding it to the larvae in acrylic plates. We found that the total average amount of larval food in a worker bee cell of F. varia is approximately 26.70 ± 3.55 µL. We observed that after the consumption of extra amounts of food (25, 30, 35 and 40 µL) the larvae differentiate into queens (n = 98). Therefore, the average total volume of food needed for the differentiation of a young larva into an F. varia queen is approximately 61.70 ± 5.00 µL. In other words, larvae destined to become queens eat 2.31 times more food than those destined to become workers. We used the species Frieseomelitta varia as a model; however, the methodology can be reproduced for all species of stingless bees whose mechanism of caste differentiation depends on the amount of food ingested by the larvae. Our results demonstrate the effectiveness of the in vitro technique developed herein, pointing to the possibility of its use as a tool to assist the production of queens on a large scale. This would allow the artificial splitting of colonies and contribute to conservation efforts in native bees.
Abstract:
Development of a novel HCPV nonimaging concentrator with high concentration (>500x) and a built-in spectrum-splitting concept is presented. It uses the combination of a commercial concentrator GaInP/GaInAs/Ge 3J cell and a concentrator Back-Point-Contact (BPC) silicon cell for efficient spectral utilization, and external confinement techniques for recovering the 3J cell's reflection. The primary optical element (POE) is a flat Fresnel lens and the secondary optical element (SOE) is a free-form RXI-type concentrator with a band-pass filter embedded in it, both the POE and SOE performing Köhler integration to produce light homogenization on the receiver. The band-pass filter transmits the IR photons in the 900-1200 nm band to the silicon cell. A design target of an "equivalent" cell efficiency of ~46% is predicted using commercial 39% 3J and 26% Si cells. A projected CPV module efficiency of greater than 38% is achievable at a concentration level larger than 500x with a wide acceptance angle of ±1°. A first proof-of-concept receiver prototype has been manufactured using a simpler optical architecture (with a lower concentration, ~100x, and lower simulated added efficiency), and experimental measurements have shown up to 39.8% 4J receiver efficiency using a 3J cell with a peak efficiency of 36.9%.
Abstract:
This project investigates the utility of differential algebra (DA) techniques applied to the problem of orbital dynamics with initial uncertainties in the orbit determination of the involved bodies. The use of DA theory allows the splitting of a common Monte Carlo simulation into two parts: the generation of a Taylor map of the final states with respect to the perturbation of the initial coordinates, and the evaluation of that map at many points. A propagator exploiting DA techniques is implemented and tested in the field of asteroid impact risk monitoring, with the potentially hazardous asteroids 2011 AG5 and 2007 VK184 as test cases. Results show that the new method is able to simulate 2.5 million trajectories with a precision good enough for the impact probability to be accurately reproduced, while running much faster than a traditional Monte Carlo approach (1 day versus 2 days, respectively).
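The Python sketch below conveys the "build the map once, evaluate it cheaply for millions of samples" idea; it substitutes a polynomial fit of a one-dimensional toy flow for genuine differential algebra arithmetic, so the dynamics, expansion order and tolerances are all illustrative assumptions.

    # Toy analogue of the Taylor-map idea: fit a polynomial of the final state with
    # respect to a perturbation of the initial condition (here by sampling a simple
    # ODE flow), then evaluate that polynomial for a million perturbed samples
    # instead of re-integrating each one.
    import numpy as np
    from scipy.integrate import solve_ivp

    def flow(x0, t_end=2.0):
        """Final state of dx/dt = -x + x**3 / 10 starting from x0 (toy dynamics)."""
        sol = solve_ivp(lambda t, x: -x + x**3 / 10.0, (0.0, t_end), [x0],
                        rtol=1e-10, atol=1e-12)
        return sol.y[0, -1]

    x_ref, order = 1.0, 6
    nodes = x_ref + np.linspace(-0.05, 0.05, 2 * order + 1)   # sample the flow near x_ref
    coeffs = np.polyfit(nodes - x_ref, [flow(x) for x in nodes], order)

    rng = np.random.default_rng(1)
    dx = rng.normal(0.0, 0.01, size=1_000_000)                # perturbed initial states
    finals = np.polyval(coeffs, dx)                           # cheap bulk map evaluation
    print("mean final state over 1e6 samples:", finals.mean())

    # Spot-check the polynomial map against direct propagation for a few samples.
    err = max(abs(np.polyval(coeffs, d) - flow(x_ref + d)) for d in dx[:5])
    print("max spot-check error:", err)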