98 results for Fast Algorithm
Abstract:
This study aimed to assess the psychometric qualities of the Fast Alcohol Screening Test (FAST) and to compare them with those of the Alcohol Use Disorders Identification Test (AUDIT) in three samples of Brazilian adults: (i) subjects seen at an emergency department (n = 530); (ii) patients from a psychosocial care center (n = 40); and (iii) university students (n = 429). The Structured Clinical Interview for DSM-IV (SCID) was used as the gold standard. The FAST demonstrated high test-retest and interrater reliability coefficients, as well as high predictive and concurrent validity values. The results attest to the validity and reliability of the Brazilian version of the FAST for screening for indicators of alcohol abuse and dependence.
Abstract:
Our numerical simulations show that magnetic reconnection becomes fast in the presence of weak turbulence, in a way consistent with the Lazarian and Vishniac (1999) model of fast reconnection. We trace particles within our numerical simulations and show that they can be efficiently accelerated via first-order Fermi acceleration. We discuss the acceleration arising from reconnection as a possible origin of the anomalous cosmic rays measured by the Voyager spacecraft.
Abstract:
The first stars that formed after the Big Bang were probably massive(1), and they provided the Universe with the first elements heavier than helium ('metals'), which were incorporated into low-mass stars that have survived to the present(2,3). Eight stars in the oldest globular cluster in the Galaxy, NGC 6522, were found to have surface abundances consistent with the gas from which they formed being enriched by massive stars(4) (that is, with higher alpha-element/Fe and Eu/Fe ratios than those of the Sun). However, the same stars have anomalously high abundances of Ba and La with respect to Fe(4), which usually arise through nucleosynthesis in low-mass stars(5) (via the slow neutron-capture process, or s-process). Recent theory suggests that metal-poor fast-rotating massive stars are able to boost the s-process yields by up to four orders of magnitude(6), which might provide a solution to this contradiction. Here we report a reanalysis of the earlier spectra, which reveals that Y and Sr are also over-abundant with respect to Fe, showing a large scatter similar to that observed in extremely metal-poor stars(7), whereas C abundances are not enhanced. This pattern is best explained as originating in metal-poor fast-rotating massive stars, which might point to a common property of the first stellar generations and even of the 'first stars'.
The SARS algorithm: detrending CoRoT light curves with Sysrem using simultaneous external parameters
Abstract:
Surveys for exoplanetary transits are usually limited not by photon noise but rather by the amount of red noise in their data. In particular, although the CoRoT space-based survey data are being carefully scrutinized, significant new sources of systematic noise are still being discovered. Recently, a magnitude-dependent systematic effect was discovered in the CoRoT data by Mazeh et al., and a phenomenological correction was proposed. Here we tie the observed effect to a particular type of systematic effect, and in the process generalize the popular Sysrem algorithm to include external parameters in a simultaneous solution with the unknown effects. We show that a post-processing scheme based on this algorithm performs well and indeed allows for the detection of new transit-like signals that were not previously detected.
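For reference, the Sysrem algorithm of Tamuz, Mazeh and Zucker iteratively fits and removes a rank-one systematic, c_i * a_j, from a matrix of light-curve residuals; the SARS generalization described above additionally ties terms to known external parameters in the same simultaneous solution. The following is a minimal sketch of the core Sysrem iteration only (plain Python/NumPy, not the authors' code; variable names are illustrative):

```python
import numpy as np

def sysrem_component(residuals, sigma, n_iter=20):
    """Fit one rank-one systematic c_i * a_j to a (stars x exposures)
    matrix of light-curve residuals by alternating weighted least squares."""
    r = residuals
    w = 1.0 / sigma ** 2                      # inverse-variance weights
    a = np.ones(r.shape[1])
    for _ in range(n_iter):
        # closed-form weighted least-squares update, per star then per exposure
        c = (w * r * a).sum(axis=1) / (w * a ** 2).sum(axis=1)
        a = (w * r * c[:, None]).sum(axis=0) / (w * c[:, None] ** 2).sum(axis=0)
    return c, a

# usage: c, a = sysrem_component(r, sigma); cleaned = r - np.outer(c, a)
```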
Abstract:
The end of the Neoproterozoic era is punctuated by two global glacial events marked by the presence of glacial deposits overlain by cap carbonates. The duration of the glacial intervals is now consistently constrained to 3-12 million years, but the duration of the post-glacial transition is more controversial owing to the uncertainty in cap dolostone sedimentation rates. Indeed, the presence of several stratabound magnetic reversals in Brazilian cap dolostones has recently called into question the short sedimentation duration (a few thousand years at most) that was initially suggested for these rocks. Here, we present new detailed magnetostratigraphic data from the Mirassol d'Oeste cap dolostones (Mato Grosso, Brazil) and "bomb-spike"-calibrated AMS 14C data from microbial mats of the Lagoa Vermelha (Rio de Janeiro, Brazil). We also compile sedimentary, isotopic and microbiological data from post-Marinoan outcrops and/or recent depositional analogues in order to discuss the deposition rate of Marinoan cap dolostones and to infer an estimate of the deglaciation duration in the snowball Earth aftermath. Taken together, the various data point to a sedimentation duration on the order of a few 10^5 years.
Genetic algorithm inversion of the average 1D crustal structure using local and regional earthquakes
Abstract:
Knowing the best 1D model of the crustal and upper-mantle structure is useful not only for routine hypocenter determination, but also for linearized joint inversions of hypocenters and 3D crustal structure, where a good choice of the initial model can be very important. Here, we tested the combination of a simple genetic algorithm (GA) inversion with the widely used HYPO71 program to find the best three-layer model (upper crust, lower crust, and upper mantle) by minimizing the overall P- and S-arrival residuals, using local and regional earthquakes in two areas of the Brazilian shield. Results from the Tocantins Province (Central Brazil) and the southern border of the Sao Francisco craton (SE Brazil) indicated average crustal thicknesses of 38 and 43 km, respectively, consistent with previous estimates from receiver functions and seismic refraction lines. The GA + HYPO71 inversion produced correct Vp/Vs ratios (1.73 and 1.71, respectively), as expected from Wadati diagrams. Tests with synthetic data showed that the method is robust for the crustal thickness, Pn velocity, and Vp/Vs ratio when using events at distances up to about 400 km, despite the small number of events available (7 and 22, respectively). The velocities of the upper and lower crust, however, are less well constrained. Interestingly, in the Tocantins Province the GA + HYPO71 inversion showed a secondary solution (local minimum) for the average crustal thickness, besides the global minimum solution, caused by the existence of two distinct domains in Central Brazil with very different crustal thicknesses.
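The overall scheme lends itself to a very small GA: each individual is a candidate layered model, and the fitness is the travel-time residual RMS returned by the forward location step. The sketch below (Python) is only illustrative: the parameter bounds are guesses, and `misfit` is a synthetic stand-in for the HYPO71 relocation that the paper actually runs.

```python
import random

# Candidate model: [Vp upper crust, Vp lower crust, Pn velocity,
#                   Conrad depth, Moho depth, Vp/Vs]; bounds are guesses.
BOUNDS = [(5.5, 6.5), (6.3, 7.2), (7.8, 8.4), (10.0, 25.0), (30.0, 50.0), (1.65, 1.80)]
TRUE = [6.0, 6.8, 8.1, 15.0, 40.0, 1.73]   # synthetic target, for this demo only

def misfit(model):
    # Stand-in for the forward step: the paper runs HYPO71 to relocate all
    # events under `model` and takes the overall P/S travel-time RMS.
    return sum((m - t) ** 2 for m, t in zip(model, TRUE))

def ga(pop_size=50, generations=100, p_mut=0.2, rng=random.Random(0)):
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)
        elite = pop[: pop_size // 2]                 # truncation selection
        pop = list(elite)
        while len(pop) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = [rng.choice(genes) for genes in zip(p1, p2)]  # uniform crossover
            if rng.random() < p_mut:                 # reset one gene at random
                i = rng.randrange(len(BOUNDS))
                child[i] = rng.uniform(*BOUNDS[i])
            pop.append(child)
    return min(pop, key=misfit)
```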
Abstract:
One of the top ten most influential data mining algorithms, k-means, is known for being simple and scalable. However, it is sensitive to initialization of prototypes and requires that the number of clusters be specified in advance. This paper shows that evolutionary techniques conceived to guide the application of k-means can be more computationally efficient than systematic (i.e., repetitive) approaches that try to get around the above-mentioned drawbacks by repeatedly running the algorithm from different configurations for the number of clusters and initial positions of prototypes. To do so, a modified version of a (k-means based) fast evolutionary algorithm for clustering is employed. Theoretical complexity analyses for the systematic and evolutionary algorithms under interest are provided. Computational experiments and statistical analyses of the results are presented for artificial and text mining data sets.
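As a toy illustration of the contrast drawn above, the sketch below evolves a small population of candidate values of k, using k-means itself as the local refinement step and the silhouette as the fitness. The paper's fast evolutionary algorithm uses richer, cluster-level mutation operators; this is only a hedged approximation built on scikit-learn.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def evolve_k(X, k_min=2, k_max=10, pop=8, gens=10, seed=0):
    rng = np.random.default_rng(seed)
    ks = list(rng.integers(k_min, k_max + 1, size=pop))   # population of k values
    best = (-1.0, None, None)
    for _ in range(gens):
        scored = []
        for k in ks:
            labels = KMeans(n_clusters=int(k), n_init=1,
                            random_state=int(rng.integers(10 ** 6))).fit_predict(X)
            scored.append((silhouette_score(X, labels), int(k), labels))
        scored.sort(key=lambda t: t[0], reverse=True)
        if scored[0][0] > best[0]:
            best = scored[0]
        survivors = [k for _, k, _ in scored[: pop // 2]]   # selection
        mutated = [int(np.clip(k + rng.choice([-1, 1]), k_min, k_max))
                   for k in survivors]                      # mutate k by +/- 1
        ks = survivors + mutated
    return best   # (silhouette, k, labels)
```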
Abstract:
A large amount of biological data has been produced in recent years. Important knowledge can be extracted from these data through data analysis techniques. Clustering plays an important role in data analysis, organizing similar objects from a dataset into meaningful groups. Several clustering algorithms have been proposed in the literature. However, each algorithm has its own bias, being more adequate for particular datasets. This paper presents a mathematical formulation to support the creation of consistent clusters for biological data. Moreover, it presents a clustering algorithm to solve this formulation that uses GRASP (Greedy Randomized Adaptive Search Procedure). We compared the proposed algorithm with three other well-known algorithms. The proposed algorithm presented the best clustering results, a finding confirmed statistically.
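GRASP alternates a greedy randomized construction phase, which samples each new element from a restricted candidate list (RCL), with a local search phase, keeping the best solution found across restarts. The sketch below applies that template to a simple k-medoid objective; the paper's actual formulation and neighborhoods differ, so treat this as a generic GRASP skeleton.

```python
import random
import numpy as np

def grasp_kmedoids(D, k, iters=20, alpha=0.3, rng=random.Random(0)):
    """D: (n x n) NumPy distance matrix. Objective: total distance of every
    object to its nearest medoid (a stand-in for the paper's formulation)."""
    n = len(D)
    cost = lambda M: D[:, M].min(axis=1).sum()
    best = None
    for _ in range(iters):
        M = [rng.randrange(n)]                     # greedy randomized construction
        while len(M) < k:
            cand = sorted((cost(M + [c]), c) for c in range(n) if c not in M)
            rcl = cand[: max(1, int(alpha * len(cand)))]   # restricted candidate list
            M.append(rng.choice(rcl)[1])
        improved = True                            # local search: medoid swaps
        while improved:
            improved = False
            for i in range(k):
                for c in range(n):
                    if c not in M and cost(M[:i] + [c] + M[i + 1:]) < cost(M):
                        M[i], improved = c, True
        if best is None or cost(M) < cost(best):
            best = list(M)
    return best
```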
Abstract:
The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least-squares approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique Least Square Projections (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points, given by a metric in mD. In order to perform the projection, a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly in the setting where it was most extensively tested, namely the mapping of text sets.
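Concretely, this kind of projection can be phrased as a sparse least-squares problem: every point is asked to sit at a weighted average of its mD neighbors, and the pre-positioned control points are appended as extra constraints. The sketch below (Python/SciPy, uniform neighbor weights, illustrative interfaces) shows that construction; the published LSP defines the weights and the control-point selection more carefully.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def lsp_project(neighbors, ctrl_idx, ctrl_pos, n):
    """neighbors[i]: indices of point i's mD neighbors; ctrl_idx/ctrl_pos:
    control points and their precomputed 2D positions (e.g. from a slower,
    high-quality projection of that small subset)."""
    nc = len(ctrl_idx)
    A = lil_matrix((n + nc, n))
    b = np.zeros((n + nc, 2))
    for i, nbrs in enumerate(neighbors):            # "sit at the neighbor average"
        A[i, i] = 1.0
        for j in nbrs:
            A[i, j] = -1.0 / len(nbrs)
    for r, (i, p) in enumerate(zip(ctrl_idx, ctrl_pos)):   # anchor the controls
        A[n + r, i] = 1.0
        b[n + r] = p
    A = A.tocsr()
    return np.column_stack([lsqr(A, b[:, d])[0] for d in range(2)])  # x and y
```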
Abstract:
Let $a > 0$, let $\Omega \subset \mathbb{R}^N$ be a bounded smooth domain, and let $-A$ denote the Laplace operator with Dirichlet boundary condition in $L^2(\Omega)$. We study the damped wave problem $u_{tt} + au_t + Au = f(u)$, $t > 0$, $u(0) = u_0 \in H_0^1(\Omega)$, $u_t(0) = v_0 \in L^2(\Omega)$, where $f : \mathbb{R} \to \mathbb{R}$ is a continuously differentiable function satisfying the growth condition $|f(s) - f(t)| \le C|s - t|(1 + |s|^{\rho-1} + |t|^{\rho-1})$, $1 < \rho < (N+2)/(N-2)$ $(N \ge 3)$, and the dissipativeness condition $\limsup_{|s| \to \infty} f(s)/s < \lambda_1$, with $\lambda_1$ being the first eigenvalue of $A$. We construct the global weak solutions of this problem as the limits as $\eta \to 0^+$ of the solutions of wave equations involving the strong damping term $2\eta A^{1/2}u_t$, $\eta > 0$. We define a subclass $LS \subset C([0, \infty), L^2(\Omega) \times H^{-1}(\Omega)) \cap L^\infty([0, \infty), H_0^1(\Omega) \times L^2(\Omega))$ of the 'limit' solutions such that through each initial condition from $H_0^1(\Omega) \times L^2(\Omega)$ passes at least one solution of the class $LS$. We show that the class $LS$ has the bounded dissipativeness property in $H_0^1(\Omega) \times L^2(\Omega)$ and we construct a closed bounded invariant subset $\mathcal{A}$ of $H_0^1(\Omega) \times L^2(\Omega)$, which is weakly compact in $H_0^1(\Omega) \times L^2(\Omega)$ and compact in $H^s(\Omega) \times H^{s-1}(\Omega)$, $s \in [0, 1)$. Furthermore, $\mathcal{A}$ attracts bounded subsets of $H_0^1(\Omega) \times L^2(\Omega)$ in $H^s(\Omega) \times H^{s-1}(\Omega)$ for each $s \in [0, 1)$. For $N = 3, 4, 5$ we also prove a local uniqueness result for the case of smooth initial data.
Abstract:
In this paper we present a genetic algorithm with new components to tackle capacitated lot-sizing and scheduling problems with sequence-dependent setups, which appear in a wide range of industries, from soft drink bottling to food manufacturing. Finding a feasible solution to highly constrained problems is often a very difficult task. Various strategies have been applied to deal with infeasible solutions throughout the search. We propose a new scheme for classifying individuals based on nested domains, which grades solutions according to their level of infeasibility, in our case represented by bands of additional production hours (overtime). Within each band, individuals are differentiated only by their fitness function. As iterations are conducted, the widths of the bands are dynamically adjusted to improve the convergence of the individuals into the feasible domain. Numerical experiments on highly capacitated instances show the effectiveness of this computationally tractable approach in guiding the search toward the feasible domain. Our approach outperforms other state-of-the-art approaches and commercial solvers.
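The nested-domain classification reduces to a lexicographic comparison: individuals are ranked first by the overtime band they fall into, and only inside a band by the usual objective. A minimal sketch (Python; names and the dict-based encoding are illustrative) might look as follows, with the band width shrinking over the generations to drive the population toward feasibility.

```python
def band(overtime, width):
    # Nested infeasibility domains: band 0 = feasible (no overtime),
    # band 1 = up to `width` extra hours, band 2 = up to 2 * width, ...
    return 0 if overtime <= 0 else int(overtime // width) + 1

def rank_key(individual, width):
    # Lexicographic ranking: band first, objective value inside a band.
    return (band(individual["overtime"], width), individual["cost"])

# Each generation: population.sort(key=lambda ind: rank_key(ind, width)),
# then shrink `width` to tighten the bands around the feasible domain.
```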
Abstract:
A numerical algorithm for fully dynamical lubrication problems based on the Elrod-Adams formulation of the Reynolds equation with mass-conserving boundary conditions is described. A simple but effective relaxation scheme is used to update the solution while maintaining the complementarity conditions on the variables that represent the pressure and fluid fraction. The equations of motion are discretized in time using Newmark's scheme, and the dynamical variables are updated within the same relaxation process just mentioned. The good behavior of the proposed algorithm is illustrated in two examples: an oscillatory squeeze flow (for which the exact solution is available) and a dynamically loaded journal bearing. This article is accompanied by the ready-to-compile source code with the implementation of the proposed algorithm. [DOI: 10.1115/1.3142903]
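In the Elrod-Adams model the pressure p and fill fraction theta satisfy the complementarity conditions p >= 0, theta <= 1 and p(1 - theta) = 0, so a relaxation sweep can enforce them node by node. The fragment below is only a schematic of such a sweep (Python; `solve_local_p` and `update_theta` are hypothetical callbacks standing in for the discretized Reynolds and mass-transport balances, which the paper's actual scheme defines).

```python
def relax_sweep(p, theta, solve_local_p, update_theta):
    """One Gauss-Seidel pass over a 1D grid. solve_local_p(i, p, theta)
    returns the nodal pressure assuming a full film (theta = 1);
    update_theta(i, p, theta) gives the fill fraction from the local
    mass balance. Both are illustrative stand-ins."""
    for i in range(1, len(p) - 1):
        p_full = solve_local_p(i, p, theta)
        if p_full > 0.0:            # pressurized zone: p > 0, theta = 1
            p[i], theta[i] = p_full, 1.0
        else:                       # cavitated zone: p = 0, theta <= 1
            p[i] = 0.0
            theta[i] = min(update_theta(i, p, theta), 1.0)
```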
Abstract:
The amount of textual information stored digitally is growing every day. However, our capability of processing and analyzing that information is not growing at the same pace. To overcome this limitation, it is important to develop semiautomatic processes to extract relevant knowledge from textual information, such as the text mining process. One of the main and most expensive stages of the text mining process is text pre-processing, where the unstructured text must be transformed into a structured format such as an attribute-value table. The stemming process, i.e., linguistic normalization, is usually used to find the attributes of this table. However, the stemming process is strongly dependent on the language of the original textual information. Furthermore, for most languages, the stemming algorithms proposed in the literature are computationally expensive. In this work, several improvements to the well-known Porter stemming algorithm for the Portuguese language, which exploit the characteristics of this language, are proposed. Experimental results show that the proposed algorithm executes in far less time without affecting the quality of the generated stems.
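Porter-style stemmers are ordered suffix-stripping rule tables: each rule fires only if the suffix matches and enough of a stem remains. The toy fragment below (Python) shows the mechanism with a handful of invented Portuguese-flavored rules; the real algorithm has many more rules organized into sequential steps (plural, feminine, adverb, verb endings, and so on).

```python
# Each rule: (suffix, minimum stem length that must remain, replacement).
# These rules are illustrative only, not the actual Portuguese rule set.
RULES = [
    ("ações", 1, "ação"),   # e.g. "nações" -> "nação"
    ("mente", 4, ""),       # e.g. "felizmente" -> "feliz"
    ("ação",  3, ""),
    ("ar",    2, ""),
    ("s",     2, ""),
]

def stem(word):
    for suffix, min_stem, repl in RULES:    # rules are tried longest-first
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem:
            return word[: -len(suffix)] + repl
    return word

# stem("felizmente") -> "feliz"
```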
Abstract:
Conventional procedures employed in the modeling of the viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation that describes the measured spectra. Taking a different approach, the procedure introduced herein constitutes a simulation-based computational optimization technique built on a non-deterministic search method arising from the field of evolutionary computation. Rather than comparing numerical results, the purpose of this paper is to highlight some subtle differences between the two strategies and to focus on the properties of the exploited technique that open new possibilities for the field. To illustrate this, the cases examined show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with far fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution.
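A discrete relaxation spectrum is commonly written as a Prony series, G(t) = G_inf + sum_i g_i exp(-t/tau_i), so an evolutionary search only needs an encoding of the (g_i, tau_i) pairs and a misfit-based fitness. The sketch below (Python/NumPy, names illustrative) shows such a fitness; the paper's actual encoding and variation operators are not reproduced here.

```python
import numpy as np

def prony(t, g, tau, g_inf=0.0):
    # Relaxation modulus as a Prony series: G(t) = G_inf + sum_i g_i * exp(-t / tau_i)
    return g_inf + sum(gi * np.exp(-t / ti) for gi, ti in zip(g, tau))

def fitness(candidate, t_data, G_data, n_modes):
    # candidate = [g_1..g_n, tau_1..tau_n]; lower misfit is better.
    g, tau = candidate[:n_modes], candidate[n_modes:]
    return np.mean((prony(t_data, g, tau) - G_data) ** 2)

# An evolution strategy then mutates (g, tau) and keeps low-misfit candidates,
# trading off fit quality against the number of modes n.
```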
Abstract:
In 2006 the Route load-balancing algorithm was proposed and compared to other techniques aimed at optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where distance is defined by network latency). Route presents good results for large environments, although there are cases where the neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces overall performance. In order to improve this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior, together with computer capacities and loads, to optimize the scheduling. This information is extracted by monitors and summarized in a knowledge base used to quantify the occupation of tasks. Afterwards, this information is used to parameterize a genetic algorithm responsible for optimizing the task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
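The GA layer can be pictured as evolving task-to-machine assignments whose fitness is a makespan prediction built from the historical profiles mentioned above. The sketch below (Python) is a deliberately simplified stand-in: the chromosome encoding, the cost model and all parameters are illustrative, not the RouteGA ones.

```python
import random

def makespan(assign, task_cost, capacity, load):
    """Predicted makespan: historical task cost scaled by machine capacity,
    stacked on each machine's current load (all values illustrative)."""
    busy = dict(load)
    for task, m in enumerate(assign):
        busy[m] = busy.get(m, 0.0) + task_cost[task] / capacity[m]
    return max(busy.values())

def route_ga(n_tasks, machines, task_cost, capacity, load,
             pop_size=30, gens=50, p_mut=0.05, rng=random.Random(0)):
    # Chromosome: list mapping each task index to a machine id.
    pop = [[rng.choice(machines) for _ in range(n_tasks)] for _ in range(pop_size)]
    fit = lambda a: makespan(a, task_cost, capacity, load)
    for _ in range(gens):
        pop.sort(key=fit)
        elite = pop[: pop_size // 2]               # keep the better half
        pop = elite + [[rng.choice(machines) if rng.random() < p_mut else gene
                        for gene in rng.choice(elite)]   # copy an elite, mutate genes
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=fit)
```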