937 results for Spectral method with domain decomposition
Abstract:
Autism Spectrum Disorder (ASD) is diagnosed on the basis of behavioral symptoms, but cognitive abilities may also be useful in characterizing individuals with ASD. One hundred seventy-eight high-functioning male adults, half with ASD and half without, completed tasks assessing IQ, a broad range of cognitive skills, and autistic and comorbid symptomatology. The aims of the study were, first, to determine whether significant differences existed between cases and controls on cognitive tasks, and whether cognitive profiles, derived using a multivariate classification method with data from multiple cognitive tasks, could distinguish between the two groups. Second, to establish whether cognitive skill level was correlated with degree of autistic symptom severity, and third, whether cognitive skill level was correlated with degree of comorbid psychopathology. Fourth, cognitive characteristics of individuals with Asperger Syndrome (AS) and high-functioning autism (HFA) were compared. After controlling for IQ, ASD and control groups scored significantly differently on tasks of social cognition, motor performance, and executive function (Ps < 0.05). To investigate cognitive profiles, 12 variables were entered into a support vector machine (SVM), which achieved good classification accuracy (81%) at a level significantly better than chance (P < 0.0001). After correcting for multiple correlations, there were no significant associations between cognitive performance and severity of either autistic or comorbid symptomatology. There were no significant differences between AS and HFA groups on the cognitive tasks. Cognitive classification models could be a useful aid to the diagnostic process when used in conjunction with other data sources, including clinical history.
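As a rough illustration of the classification step described above, the sketch below trains a linear-kernel SVM on a 178 × 12 feature matrix and estimates accuracy by cross-validation. The data are random placeholders, and the pipeline details (standardization, linear kernel, 10-fold CV) are assumptions, not the authors' exact protocol.

```python
# Minimal sketch of an SVM classifier over 12 cognitive variables.
# X and y are random placeholders standing in for the study's data.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(178, 12))      # placeholder: 12 task scores per subject
y = rng.integers(0, 2, size=178)    # placeholder: 1 = ASD, 0 = control

# Standardize each variable, fit a linear-kernel SVM, and estimate
# out-of-sample classification accuracy by 10-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=10)
print(f"mean cross-validated accuracy: {acc.mean():.2f}")
```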
Abstract:
With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
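A minimal sketch of the eigenvalue-tracking idea described above: at each frequency sample, the three eigenvalues of the amplification matrix are relabelled by the permutation that keeps each branch closest to its value at the previous sample. This continuity heuristic is an assumption; the paper's algorithms may differ.

```python
# Track three eigenvalue branches continuously across frequency samples
# so each column can be attributed to one physical mode.
import numpy as np
from itertools import permutations

def track_modes(amplification_matrices):
    """amplification_matrices: iterable of 3x3 complex arrays, one per
    frequency sample; returns an (n_freq, 3) array whose columns follow
    individual modes continuously."""
    tracked, prev = [], None
    for A in amplification_matrices:
        lam = np.linalg.eigvals(A)
        if prev is None:
            lam = lam[np.argsort(lam.imag)]   # initial labelling
        else:
            # permutation keeping each branch closest to its previous value
            best = min(permutations(range(3)),
                       key=lambda p: np.abs(lam[list(p)] - prev).sum())
            lam = lam[list(best)]
        tracked.append(lam)
        prev = lam
    return np.array(tracked)

# toy family of amplification matrices parameterized by frequency k
mats = [np.diag([1.0, np.exp(1j * k), np.exp(-1j * k)])
        for k in np.linspace(0.0, 3.0, 50)]
print(track_modes(mats).shape)    # (50, 3)
```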
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and in some cases floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work, loop-based array updates and nearest-neighbour halo-exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
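The sketch below illustrates the two-part model structure described above: separate benchmark tables for the compute and halo-exchange work, interpolated at the requested sizes and summed. All numbers, names, and the linear interpolation are illustrative assumptions, not measurements or methodology from the paper.

```python
# Benchmark-driven prediction of time per timestep for one deployment
# scenario: interpolate measured results, then combine the two work types.
import numpy as np

# illustrative (size, seconds/step) pairs measured under one scenario
compute_bench = np.array([[1.0e4, 0.002], [1.0e5, 0.021], [1.0e6, 0.240]])
halo_bench    = np.array([[4.0e2, 0.0004], [4.0e3, 0.0011], [4.0e4, 0.0080]])

def predict_step_time(local_points, halo_points):
    """Interpolate each benchmark table at the requested size and sum
    the loop-update and halo-exchange contributions."""
    t_comp = np.interp(local_points, compute_bench[:, 0], compute_bench[:, 1])
    t_halo = np.interp(halo_points, halo_bench[:, 0], halo_bench[:, 1])
    return t_comp + t_halo

# e.g. a 512 x 512 local subdomain exchanging a one-cell halo on four edges
print(predict_step_time(512 * 512, 4 * 512))
```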
Abstract:
The study of decaying organisms and death assemblages is referred to as forensic taphonomy, or more simply the study of graves. The field is dominated by entomology, anthropology and archaeology. Forensic taphonomy also includes the study of the ecology and chemistry of the burial environment. Studies in forensic taphonomy often require the use of analogues for human cadavers or their component parts. These might include animal cadavers or skeletal muscle tissue. However, maintaining sufficient supplies of cadavers or analogues may require periodic freezing of test material prior to experimental inhumation in the soil. This study was carried out to ascertain the effect of freezing on skeletal muscle tissue prior to inhumation and decomposition in a soil environment under controlled laboratory conditions. Changes in soil chemistry were also measured. In order to test the impact of freezing, skeletal muscle tissue (Sus scrofa) was frozen (−20 °C) or refrigerated (4 °C). Portions of skeletal muscle tissue (∼1.5 g) were interred in microcosms (72 mm diameter × 120 mm height) containing sieved (2 mm) soil (sand) adjusted to 50% water holding capacity. The experiment had three treatments: a control with no skeletal muscle tissue, microcosms containing frozen skeletal muscle tissue, and those containing refrigerated tissue. The microcosms were destructively harvested at sequential periods of 2, 4, 6, 8, 12, 16, 23, 30 and 37 days after interment of skeletal muscle tissue. Each harvest was replicated 6 times per treatment. Microbial activity (carbon dioxide respiration) was monitored throughout the experiment. At harvest the skeletal muscle tissue was removed and the detritosphere soil was sampled for chemical analysis. Freezing was found to have no significant impact on decomposition or soil chemistry compared to unfrozen samples in the current study using skeletal muscle tissue. However, the interment of skeletal muscle tissue had a significant impact on the microbial activity (carbon dioxide respiration) and chemistry of the surrounding soil, including pH, electrical conductivity, ammonium, nitrate, phosphate and potassium. This is the first laboratory-controlled study to measure changes in inorganic chemistry in soil associated with the decomposition of skeletal muscle tissue in combination with microbial activity.
Abstract:
In this article we address decomposition strategies especially tailored to perform strong coupling of dimensionally heterogeneous models, under the hypothesis that one wants to solve each submodel separately and implement the interaction between subdomains by boundary conditions alone. The novel methodology takes full advantage of the small number of interface unknowns in this kind of problem. Existing algorithms can be viewed as variants of the 'natural' staggered algorithm in which each domain transfers function values to the other and receives fluxes (or forces), and vice versa. This natural algorithm is known as Dirichlet-to-Neumann in the domain decomposition literature. Essentially, we propose a framework in which this algorithm is equivalent to applying Gauss-Seidel iterations to a suitably defined (linear or nonlinear) system of equations. It is then immediate to switch to other iterative solvers such as GMRES or other Krylov-based methods, which we assess through numerical experiments showing the significant gain that can be achieved. Indeed, the benefit is that an extremely flexible, automatic coupling strategy can be developed, which in addition leads to iterative procedures that are parameter-free and rapidly converging. Further, in linear problems they have the finite termination property. Copyright (C) 2009 John Wiley & Sons, Ltd.
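For a linear problem, the equivalence described above can be made concrete in a few lines: one staggered coupling sweep is a map lam → S·lam + b on the interface unknowns, the natural Dirichlet-to-Neumann algorithm is the corresponding fixed-point (Gauss-Seidel) iteration, and the same interface equation (I − S)·lam = b can be handed directly to GMRES. S and b below are stand-ins, not a model from the paper.

```python
# Interface system for a linear coupled problem: fixed-point iteration
# (the "natural" Dirichlet-to-Neumann algorithm) vs. a Krylov solver.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

S = np.array([[0.0, 0.6],
              [0.7, 0.0]])       # stand-in for one staggered coupling sweep
b = np.array([1.0, 2.0])         # stand-in boundary/forcing data

# Gauss-Seidel / Dirichlet-to-Neumann fixed-point iteration on lam
lam = np.zeros(2)
for _ in range(50):
    lam = S @ lam + b

# the same interface equation (I - S) lam = b handed to GMRES
A = LinearOperator((2, 2), matvec=lambda v: v - S @ v)
lam_krylov, info = gmres(A, b)

print(lam, lam_krylov, info)     # both converge to the same interface values
```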
Abstract:
In this work, we report on the magnetic properties of nickel nanoparticles (NP) in a SiO2-C thin film matrix, prepared by a polymeric precursor method, with Ni content x in the 0-10 wt% range. Microstructural analyses of the films showed that the Ni NP are homogeneously distributed in the SiO2-C matrix and have spherical shape with an average diameter of ∼10 nm. The magnetic properties reveal features of superparamagnetism with blocking temperatures T_B ∼ 10 K. The average diameter of the Ni NP, estimated from magnetization measurements, was found to be ∼4 nm for the x = 3 wt% Ni sample, in excellent agreement with X-ray diffraction data. M versus H hysteresis loops indicated that the Ni NP are free from a surrounding oxide layer. We have also observed that coercivity (H_C) develops appreciably below T_B, and follows the relationship H_C ∝ [1 − (T/T_B)^0.5], a feature expected for randomly oriented and non-interacting nanoparticles. The extrapolation of H_C to 0 K indicates that coercivity decreases with increasing x, suggesting that dipolar interactions may be relevant in films with x > 3 wt% Ni.
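A minimal sketch of how the H_C(T) relationship above can be fitted to extract the 0 K coercivity and the blocking temperature; the data points and initial guesses are illustrative, not values from the paper.

```python
# Fit H_C(T) = H_C0 * [1 - (T/T_B)^0.5] below the blocking temperature
# and extrapolate the coercivity to 0 K. Data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def hc_model(T, hc0, Tb):
    return hc0 * (1.0 - np.sqrt(T / Tb))

T  = np.array([2.0, 4.0, 6.0, 8.0])            # K (illustrative)
Hc = np.array([310.0, 230.0, 165.0, 105.0])    # Oe (illustrative)

popt, pcov = curve_fit(hc_model, T, Hc, p0=(400.0, 10.0))
hc0, Tb = popt
print(f"H_C(0 K) ~ {hc0:.0f} Oe, T_B ~ {Tb:.1f} K")
```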
Abstract:
The aim of this study was to develop a fast capillary electrophoresis method for the determination of propranolol in pharmaceutical preparations. In the method development the pH and constituents of the background electrolyte were selected using the effective mobility versus pH curves. Benzylamine was used as the internal standard. The background electrolyte was composed of 60 mmol L−1 tris(hydroxymethyl)aminomethane and 30 mmol L−1 2-hydroxyisobutyric acid, at pH 8.1. Separation was conducted in a fused-silica capillary (32 cm total length and 8.5 cm effective length, 50 µm I.D.) with a short-end injection configuration and direct UV detection at 214 nm. The run time was only 14 s. Three different strategies were studied in order to develop a fast CE method with low total analysis time for propranolol analysis: low flush time (Lflush), 35 runs/h; without flush (Wflush), 52 runs/h; and Invert (switched polarity), 45 runs/h. Since the three strategies developed are statistically equivalent, Wflush was selected due to its higher analytical frequency in comparison with the other methods. Figures of merit of the proposed method include: good linearity (R² > 0.9999); limit of detection of 0.5 mg L−1; inter-day precision better than 1.03% (n = 9); and recovery in the range of 95.1-104.5%. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Carboxylic acid groups in PAH/PAA-based multilayers bind silver cations by ion exchange with the acid protons. The aggregation and spatial distribution of the nanoparticles proved to be dependent on the process used to reduce the silver acetate aqueous solution. The reducing method with ambient light formed larger nanoparticles, with diameters ranging from 4-50 nm, in comparison with the reduction method using UV light, which gave particles with diameters of 2-4 nm. The high toughness of samples reduced by ambient light is a result of two population distributions of particle sizes caused by different mechanisms when compared with the UV light process. According to these phenomena, a judicious choice of the spectral source can be used as a way to control the type and size of silver nanoparticles formed on PEMs. Depending on the energy of the light source, the Ag nanoparticles present cubic and/or hexagonal crystallographic structures, as confirmed by XRD. Beyond the kinetically controlled process of UV photoinduced cluster formation, the annealing produced by UV light allowed a second mechanism to modify the growth rates, spatial distribution, and phases.
Abstract:
Work presented at the XXXV CNMAC, Natal-RN, 2014.
Abstract:
Work presented at the Congresso Nacional de Matemática Aplicada à Indústria, November 18-21, 2014, Caldas Novas, Goiás.
Abstract:
An RP-HPLC method with photodiode array detection (DAD) was developed to simultaneously separate, identify and quantify the most representative phenolic compounds present in Madeira and Canary Islands wines. The optimized chromatographic method was carefully validated in terms of linearity, precision, accuracy and sensitivity. High repeatability and good stability of phenolics retention times (≤3%) were obtained, as well as of relative peak area. High recoveries were also achieved, over 80.3%. Polyphenol calibration curves showed good linearity (r² ≥ 0.994) within test ranges. Detection limits ranged between 0.03 and 11.5 µg/mL for the different polyphenols. Good repeatability was obtained, with intra-day variations less than 7.9%. The described method was successfully applied to quantify several polyphenols in 26 samples of different kinds of wine (red, rosé and white wines) from Madeira and Canary Islands. Gallic acid was by far the most predominant acid, representing more than 65% of all phenolics, followed by p-coumaric and caffeic acids. The major flavonoid found in Madeira wines was trans-resveratrol. In some wines, (–)-epicatechin was also found in the highest amounts. Canary wines were shown to be rich in gallic, caffeic and p-coumaric acids and quercetin.
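A minimal sketch of the linearity/sensitivity validation described above: an ordinary least-squares calibration line, with the detection limit estimated as 3.3 × (residual SD)/slope, a common ICH-style convention. The standards and peak areas below are illustrative, not the paper's data.

```python
# Calibration-curve linearity and detection limit for one analyte.
import numpy as np

conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0])        # ug/mL standards (illustrative)
area = np.array([10.2, 51.3, 99.8, 251.0, 502.5])    # peak areas (illustrative)

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
s = resid.std(ddof=2)                                 # residual standard deviation
r2 = 1.0 - (resid**2).sum() / ((area - area.mean())**2).sum()
lod = 3.3 * s / slope                                 # ICH-style detection limit
print(f"r^2 = {r2:.4f}, LOD ~ {lod:.2f} ug/mL")
```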
Abstract:
The topology optimization problem characterizes and determines the optimum distribution of material in a domain. In other words, after the definition of the boundary conditions in a pre-established domain, the problem is how to distribute the material so as to solve the minimization problem. The objective of this work is to propose a competitive formulation for determining optimum structural topologies in 3D problems, able to provide high-resolution layouts. The procedure combines the Galerkin finite element method with the optimization method, searching for the best material distribution over the fixed design domain. The layout topology optimization method is based on the material approach proposed by Bendsoe & Kikuchi (1988) and considers a homogenized constitutive equation that depends only on the relative density of the material. The finite element used in the approach is a four-node tetrahedron with a selective integration scheme, which interpolates not only the components of the displacement field but also the relative density field. The proposed procedure consists in the solution of a sequence of layout optimization problems applied to compliance minimization problems and to mass minimization problems under local stress constraints. The microstructure used in this procedure was SIMP (Solid Isotropic Material with Penalization). The approach considerably reduces the computational cost, proving to be efficient and robust. The results provided well-defined structural layouts, with a sharp distribution of the material and clear boundary definition. The layout quality was proportional to the mean element size, and a considerable reduction in the number of design variables was observed due to the tetrahedral element.
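Two fragments illustrating the SIMP ingredients mentioned above: the penalized stiffness interpolation E(ρ) = Emin + (E0 − Emin)·ρ^p, and a generic optimality-criteria density update under a volume constraint. This is a textbook-style sketch, not the dissertation's tetrahedral implementation; all names and defaults are assumptions.

```python
# Generic SIMP building blocks: penalized material model and an
# optimality-criteria density update (not the dissertation's code).
import numpy as np

def simp_stiffness(rho, E0=1.0, p=3.0, Emin=1e-9):
    """Penalized Young's modulus: rho^p makes intermediate densities
    structurally uneconomical, driving designs toward 0/1 layouts."""
    return Emin + (E0 - Emin) * rho**p

def oc_update(rho, dc, dv, vol_frac, move=0.2):
    """Bisect the Lagrange multiplier so the updated design meets the
    target volume fraction; dc, dv are compliance/volume sensitivities."""
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-4:
        mid = 0.5 * (lo + hi)
        rho_new = np.clip(rho * np.sqrt(np.maximum(-dc, 0.0) / (mid * dv)),
                          np.maximum(rho - move, 0.0),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > vol_frac:
            lo = mid           # design too heavy: raise the multiplier
        else:
            hi = mid
    return rho_new

# toy usage: 100 elements, uniform sensitivities, 40% volume target
rho = np.full(100, 0.4)
rho = oc_update(rho, dc=-np.ones(100), dv=np.ones(100), vol_frac=0.4)
print(rho.mean())
```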
Abstract:
The stretch zone width (SZW) data for 15-5PH steel CTOD specimens fractured at −150 °C to +23 °C were measured from focused images and 3D maps obtained by extended depth-of-field reconstruction from light microscopy (LM) image stacks. This LM-based method, with a larger lateral resolution, seems to be as effective for quantitative analysis of SZW as scanning electron microscopy (SEM) or confocal scanning laser microscopy (CSLM), permitting clear identification of stretch zone boundaries. Despite the lower sharpness of the focused images, a robust linear correlation was established between fracture toughness (KC) and SZW data for the 15-5PH steel specimens, measured at their center region. The method is an alternative for evaluating the boundaries of stretched zones at a lower cost of implementation and training, since topographic data from elevation maps can be associated with the reconstructed image, which summarizes the original contrast and brightness information. Finally, the extended depth-of-field method is presented here as a valuable tool for failure analysis and a cheaper alternative for investigating rough or fracture surfaces, compared to scanning electron or confocal light microscopes. Microsc. Res. Tech. 75:1155-1158, 2012. (C) 2012 Wiley Periodicals, Inc.
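A minimal sketch of a generic extended depth-of-field reconstruction of the kind described above: each pixel takes its value from the focus-stack slice with the strongest local contrast, and the winning slice index doubles as an elevation map. The sharpness measure (local Laplacian energy) is an assumption, not the authors' software.

```python
# Generic focus stacking: fuse an all-in-focus image and an elevation map
# from a through-focus series of light-microscopy images.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def extended_dof(stack):
    """stack: (n_slices, H, W) grayscale focus series; returns the fused
    all-in-focus image and the slice-index elevation map."""
    # local sharpness per slice: smoothed squared Laplacian response
    sharp = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, size=9)
                      for s in stack])
    idx = sharp.argmax(axis=0)                        # elevation map
    fused = np.take_along_axis(stack, idx[None], axis=0)[0]
    return fused, idx

# toy usage on a random stack of 5 slices
fused, elevation = extended_dof(np.random.rand(5, 64, 64))
print(fused.shape, elevation.max())
```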
Abstract:
MnZn ferrite with the general formula Mn1-xZnxFe2O4 (mol%), 0.3 ≤ x ≤ 0.7, was synthesized using the citrate precursor method. The decomposition of the precursors was studied by thermogravimetric analysis (TGA), differential thermogravimetric analysis (DTG), differential thermal analysis (DTA) and Fourier transform infrared spectroscopy (FTIR) of powder calcined at 350 °C/3.5 h. X-ray diffraction patterns (XRD) of the samples were recorded from 350 to 1200 °C/2 h under various atmospheres. The powder calcined at 350 °C/3.5 h formed the spinel phase. Atmosphere control is necessary to avoid secondary phases such as hematite. From 900 to 1200 °C, 90.66 and 100% of the MnZn spinel ferrite phase was obtained, respectively. Energy dispersive spectroscopy (EDS) analysis at 350 °C shows high Mn and Zn dispersion, indicating that the diffusion process was homogeneous. Semi-quantitative EDS analysis verified that, despite the atmosphere control during calcination, ZnO evaporation occurred at high temperatures (< 800 °C), causing stoichiometric deviation. Vibrating sample magnetometer (VSM) measurements show soft ferrite characteristics, with Hc from 6.5 × 10⁻³ to 11.1 × 10⁻² T. The saturation magnetization (Ms) and initial permeability (µi) of the MnZn spinel phase obtained ranged, respectively, from 14.3 to 83.8 Am²/kg and from 14.1 to 62.7 (Am²/kg)/T.
Abstract:
The development of software systems with domain-specific languages has become increasingly common. Domain-specific languages (DSLs) provide increased domain expressiveness, raising the abstraction level by facilitating the generation of models or low-level source code, thus increasing the productivity of systems development. Consequently, methods for the development of software product lines and software system families have also proposed the adoption of domain-specific languages. Recent studies have investigated the limitations of feature model expressiveness and proposed the use of DSLs as a complement or substitute for feature models. However, in complex projects, a single DSL is often insufficient to represent the different views and perspectives of development, making it necessary to work with multiple DSLs. In order to address new challenges in this context, such as the management of consistency between DSLs and the need for methods and tools that support development with multiple DSLs, several generative approaches have been proposed over the past years. However, none of them considers matters relating to the composition of DSLs. Thus, with the aim of addressing this problem, the main objectives of this dissertation are: (i) to investigate the integrated use of feature models and DSLs during the domain and application engineering of generative approach development; (ii) to propose a method for the development of generative approaches with DSL composition; and (iii) to investigate and evaluate the usage of modern model-driven engineering technologies to implement strategies of integration between feature models and DSL composition.