791 results for Earthquake prediction.
Abstract:
In the present work, a group contribution method is proposed for the estimation of the viscosity of fatty compounds and biodiesel esters as a function of temperature. The databank used for regression of the group contribution parameters (1070 values for 65 types of substances) included fatty compounds such as fatty acids, methyl and ethyl esters and alcohols, tri- and diacylglycerols, and glycerol. The inclusion of new experimental data for fatty esters, a partial acylglycerol, and glycerol allowed for a further refinement in the performance of this methodology, in comparison to a prior group contribution equation (Ceriani, R.; Goncalves, C. B.; Rabelo, J.; Caruso, M.; Cunha, A. C. C.; Cavaleri, F. W.; Batista, E. A. C.; Meirelles, A. J. A. Group contribution model for predicting viscosity of fatty compounds. J. Chem. Eng. Data 2007, 52, 965-972), for all classes of fatty compounds. In addition, the influence of small concentrations of partial acylglycerols, intermediate compounds in the transesterification reaction, on the viscosity of biodiesels was also investigated.
Abstract:
Intraplate earthquakes in stable continental areas have been explained basically by reactivation of pre-existing zones of weakness, stress concentration, or both. Zones of weakness are usually identified as sites of the last major orogeny, provinces of recent alkaline intrusions, or stretched crust in ancient rifts. However, it is difficult to identify specific zones of weakness, and intraplate fault zones are not always easily correlated with known geological features. Although Northeastern Brazil is one of the most seismically active areas in the country (magnitude-5 events roughly every 5 yr), with hypocentral depths shallower than ~10 km and seismic zones as long as 30-40 km, no clear relationship with the known surface geology can usually be established with confidence, and a clear identification of zones of weakness has not yet been possible. Here we present the first clear case of seismic activity occurring as reactivation of an old structure in Brazil: the Pernambuco Lineament, a major Neoproterozoic shear zone. The 2004 earthquake swarm of Belo Jardim (magnitudes up to 3.1) and the recurrent activities in the nearby towns of Sao Caetano and Caruaru (magnitudes up to 4.0 and 3.8) show that the Pernambuco Lineament is a weak zone. A local seismic network showed that the Belo Jardim swarm of November 2004 occurred by normal faulting on a north-dipping, E-W-oriented fault plane, in close agreement with the E-W-trending structures within the Pernambuco Lineament. The Belo Jardim activity was concentrated in a fault area 1.5 km (E-W) by 2 km (downdip), at an average depth of 4.5 km. The nearby Caruaru activity occurs as both strike-slip and normal faulting, also consistent with local structures of the Pernambuco Lineament. The focal mechanisms of Belo Jardim, Caruaru and S. Caetano indicate E-W compressional and N-S extensional principal stresses.
The N-S extension of this stress field is larger than that predicted by numerical models such as those of Coblentz & Richardson, and we propose that additional factors, such as flexural stresses from the nearby Sergipe-Alagoas marginal basin, could also affect the current stress field in the Pernambuco Lineament.
Abstract:
On December 9, 2007, a 4.9 mb earthquake occurred in the middle of the Sao Francisco Craton, in a region with no known previous activity larger than mb 4. This event reached intensity VII MM (Modified Mercalli), causing the first earthquake fatality in Brazil. The activity had started on May 25, 2007 with a magnitude 3.5 event and continued for several months, motivating the deployment of a local six-station network. A three-week seismic quiescence was observed before the mainshock. Initial absolute hypocenters were calculated with best-fitting velocity models, and then relative locations were determined with hypoDD. The aftershock distribution indicates a 3 km long rupture for the mainshock. The fault plane solution, based on P-wave polarities and the hypocentral trend, indicates a reverse faulting mechanism on a plane striking N30°E and dipping about 40° to the SE. The rupture depth extends only from about 0.3 to 1.2 km. Despite the shallow depth of the mainshock, no surface feature could be correlated with the fault plane. Aeromagnetic data in the epicentral area show short-wavelength lineaments trending NNE-SSW to NE-SW, which we interpret as faults and fractures in the craton basement beneath the surface limestone layer. We propose that the Caraibas-Itacarambi seismicity is probably associated with reactivation of these basement fractures and faults under the present E-W compressional stress field in this region of the South American Plate. (c) 2009 Elsevier B.V. All rights reserved.
Abstract:
The evolution of commodity computing led to the possibility of efficient usage of interconnected machines to solve computationally intensive tasks, which were previously solvable only by using expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential to perform effective scheduling. In this paper, we overview the evolution of scheduling approaches, focusing on distributed environments. We also evaluate the current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction, considering the chaotic properties of such behavior and the automatic detection of critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online predictions due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Process scheduling techniques consider the current load situation to allocate computing resources. Those techniques make approximations, such as averages of communication, processing, and memory access, to improve process scheduling, although processes may present different behaviors during their execution. They may start with high communication requirements and later shift to pure processing. By discovering how processes behave over time, we believe it is possible to improve resource allocation. This has motivated this paper, which adopts chaos theory concepts and nonlinear prediction techniques in order to model and predict process behavior. Results confirm that the radial basis function technique provides good predictions at low processing cost, which is essential in a real distributed environment.
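The abstract above combines delay-embedded process histories with radial basis functions for one-step-ahead prediction. A minimal illustrative sketch of that general idea; the function name, embedding dimension, Gaussian kernel, width, and ridge term are assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

def rbf_one_step_predictor(series, dim=3, width=1.0):
    """Fit a Gaussian RBF map on delay-embedded history; return a one-step predictor."""
    # Delay-embedded states x_t = (s_t, ..., s_{t+dim-1}) and targets s_{t+dim}
    x = np.array([series[i:i + dim] for i in range(len(series) - dim)])
    y = np.array(series[dim:])
    # Gram matrix of Gaussian kernels between all embedded states
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * width ** 2))
    # Small ridge term keeps the nearly singular kernel system solvable
    w = np.linalg.solve(k + 1e-8 * np.eye(len(x)), y)

    def predict(history):
        q = np.asarray(history[-dim:], dtype=float)
        kq = np.exp(-((x - q) ** 2).sum(-1) / (2.0 * width ** 2))
        return float(kq @ w)

    return predict
```

Fitting on a smooth signal and asking for the next value shows the intent: the predictor interpolates among previously seen embedded states rather than assuming any load average.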
Abstract:
This study investigates the numerical simulation of three-dimensional time-dependent viscoelastic free surface flows using the Upper-Convected Maxwell (UCM) constitutive equation and an algebraic explicit model. This investigation was carried out to develop a simplified approach that can be applied to the extrudate swell problem. The relevant physics of this flow phenomenon is discussed in the paper, and an algebraic model to predict the extrudate swell problem is presented. It is based on an explicit algebraic representation of the non-Newtonian extra-stress through a kinematic tensor formed with the scaled dyadic product of the velocity field. The elasticity of the fluid is governed by a single transport equation for a scalar quantity which has the dimension of strain rate. The mass and momentum conservation equations and the constitutive equation (UCM and algebraic model) were solved by a three-dimensional time-dependent finite difference method. The free surface of the fluid was modeled using a marker-and-cell approach. The algebraic model was validated by comparing the numerical predictions with analytic solutions for pipe flow. In comparison with the classical UCM model, one advantage of this approach is that the computational workload is substantially reduced: the UCM model employs six differential equations while the algebraic model uses only one. The results showed stable flows with very large extrudate growths, beyond those usually obtained with standard differential viscoelastic models. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Managing software maintenance is rarely a precise task, due to uncertainties concerning resource and service descriptions. Even when a well-established maintenance process is followed, the risk of delaying tasks remains if the new services are not precisely described or when resources change during process execution. Also, the delay of a task at an early process stage may represent a different delay at the end of the process, depending on complexity or service reliability requirements. This paper presents a knowledge-based representation (Bayesian Networks) of maintenance project delays, based on specialists' experience, and a corresponding tool to help in managing software maintenance projects. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
The purpose of this article is to present a new method to predict the response variable of an observation in a new cluster for a multilevel logistic regression. The central idea is based on the empirical best estimator for the random effect. Two estimation methods for multilevel model are compared: penalized quasi-likelihood and Gauss-Hermite quadrature. The performance measures for the prediction of the probability for a new cluster observation of the multilevel logistic model in comparison with the usual logistic model are examined through simulations and an application.
Abstract:
The objective of this article is to determine, through Monte Carlo simulations, the influence of the parameters of ARIMA-GARCH models on the predictions of feed-forward artificial neural networks (ANN) trained with the Levenberg-Marquardt algorithm. The paper presents a study of the relationship between ANN performance and ARIMA-GARCH model parameters, i.e. the fact that, depending on the stationarity and other parameters of the time series, the ANN structure should be selected differently. Neural networks have been widely used to predict time series, and their capacity for dealing with non-linearities is normally an outstanding advantage. However, the values of the parameters of generalized autoregressive conditional heteroscedasticity models influence ANN prediction performance. The combination of the values of the GARCH parameters with the ARIMA autoregressive terms also implies variation in ANN performance. Combining the parameters of the ARIMA-GARCH models and changing the ANN's topologies, we used the Theil inequality coefficient to measure the prediction quality of the feed-forward ANN.
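The abstract scores ANN forecasts with the Theil inequality coefficient. A minimal sketch of one common form of that statistic (Theil's U1); the function name is illustrative and the original paper may use a different variant:

```python
import math

def theil_u(actual, predicted):
    """Theil inequality coefficient U1: 0 for a perfect forecast, bounded above by 1."""
    n = len(actual)
    # Root-mean-square forecast error
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    # Normalizer: sum of the RMS magnitudes of the two series
    denom = (math.sqrt(sum(a * a for a in actual) / n)
             + math.sqrt(sum(p * p for p in predicted) / n))
    return rmse / denom
```

The normalization makes forecasts of series on different scales comparable, which is what makes it convenient for comparing ANN topologies across simulated ARIMA-GARCH series.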
Abstract:
The purpose of this work is to verify the stability of the relationship between real activity and the interest rate spread. The test is based on Chen (1988) and Osorio and Galea (2006). The analysis is applied to Chile and the United States, from 1980 to 1999. In general, in both cases the relationship was statistically significant in the early 1980s, but a break point is found in both countries during that decade, suggesting that the relationship depends on the monetary rule followed by the Central Bank.
Abstract:
In the present work, a new approach for the determination of the partition coefficient at different interfaces, based on density functional theory, is proposed. Our results for log P(ow) at an n-octanol/water interface in a large supercell, -0.30 for acetone and 0.95 for methane, are comparable with the experimental data given in parentheses (-0.24 and 0.78, respectively). We believe that these differences are mainly related to the absence of van der Waals interactions and the limited number of molecules considered in the supercell. The numerical deviations are smaller than those observed for interpolation-based tools. As the proposed model is parameter-free, it is not limited to the n-octanol/water interface.
Abstract:
A correlation between the physicochemical properties of mono- [Li(I), K(I), Na(I)] and divalent [Cd(II), Cu(II), Mn(II), Ni(II), Co(II), Zn(II), Mg(II), Ca(II)] metal cations and their toxicity (evaluated by the free ion median effective concentration, EC50(F)) to the naturally bioluminescent fungus Gerronema viridilucens has been studied using the quantitative ion character-activity relationship (QICAR) approach. Among the 11 ionic parameters used in the current study, a univariate model based on the covalent index (Xm²r) proved to be the most adequate for prediction of fungal metal toxicity evaluated by the logarithm of the free ion median effective concentration: log EC50(F) = 4.243 (+/-0.243) - 1.268 (+/-0.125)·Xm²r (adj-R² = 0.9113, Akaike information criterion [AIC] = 60.42). Additional two- and three-variable models were also tested and proved less suitable to fit the experimental data. These results indicate that covalent bonding is a good indicator of inherent metal toxicity to bioluminescent fungi. Furthermore, the toxicity of additional metal ions [Ag(I), Cs(I), Sr(II), Ba(II), Fe(II), Hg(II), and Pb(II)] to G. viridilucens was predicted, and Pb was found to be the most toxic metal to this bioluminescent fungus in terms of EC50(F): Pb(II) > Ag(I) > Hg(II) > Cd(II) > Cu(II) > Co(II) ≈ Ni(II) > Mn(II) > Fe(II) ≈ Zn(II) > Mg(II) ≈ Ba(II) ≈ Cs(I) > Li(I) > K(I) ≈ Na(I) ≈ Sr(II) > Ca(II). Environ. Toxicol. Chem. 2010;29:2177-2181. (C) 2010 SETAC
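The univariate QICAR model quoted above is a simple linear relation in the covalent index. A one-line sketch using the fitted coefficients reported in the abstract (the function name is illustrative; uncertainty in the coefficients is ignored):

```python
def log_ec50_from_covalent_index(x2r):
    """Predicted log EC50(F) from the covalent index Xm^2·r (univariate QICAR model)."""
    return 4.243 - 1.268 * x2r
```

The negative slope encodes the paper's conclusion: the larger a cation's covalent index, the lower its predicted EC50(F), i.e. the more toxic the metal to the fungus.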
Abstract:
Flash points (T(FP)) of hydrocarbons are calculated from their flash point numbers, N(FP), with the relationship T(FP) (K) = 23.369·N(FP)^(2/3) + 20.010·N(FP)^(1/3) + 31.901. In turn, the N(FP) values can be predicted from experimental boiling point numbers (Y(BP)) and molecular structure with the equation N(FP) = 0.987·Y(BP) + 0.176D + 0.687T + 0.712B - 0.176, where D is the number of olefinic double bonds in the structure, T is the number of triple bonds, and B is the number of aromatic rings. For a data set consisting of 300 diverse hydrocarbons, the average absolute deviation between the literature and predicted flash points was 2.9 K.
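The two correlations quoted above chain directly: structure to N(FP), then N(FP) to temperature. A short sketch with the coefficients taken verbatim from the abstract (function and argument names are illustrative):

```python
def flash_point_number(y_bp, n_double=0, n_triple=0, n_aromatic=0):
    """N_FP from the boiling point number Y_BP and the structure counts D, T, B."""
    return (0.987 * y_bp + 0.176 * n_double + 0.687 * n_triple
            + 0.712 * n_aromatic - 0.176)

def flash_point_kelvin(n_fp):
    """T_FP in kelvin from the flash point number N_FP."""
    return 23.369 * n_fp ** (2.0 / 3.0) + 20.010 * n_fp ** (1.0 / 3.0) + 31.901
```

For a given hydrocarbon, `flash_point_kelvin(flash_point_number(y_bp, ...))` yields the predicted flash point; per the abstract, the expected average absolute deviation is about 2.9 K.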
Abstract:
A very high level of theoretical treatment (complete active space self-consistent field, CASSCF/MRCI/aug-cc-pV5Z) was used to characterize the spectroscopic properties of a manifold of quartet and doublet states of the species BeP, as yet experimentally unknown. Potential energy curves for 11 electronic states were obtained, as well as the associated vibrational energy levels and a whole set of spectroscopic constants. Dipole moment functions and vibrationally averaged dipole moments were also evaluated. Similarities and differences between BeN and BeP were analysed, along with the isovalent SiB species. The molecule BeP has a X ⁴Σ⁻ ground state, with an equilibrium bond distance of 2.073 Å and a harmonic frequency of 516.2 cm⁻¹; it is followed closely by the states ²Π (Re = 2.081 Å, ωe = 639.6 cm⁻¹) and ²Σ⁻ (Re = 2.074 Å, ωe = 536.5 cm⁻¹), at 502 and 1976 cm⁻¹, respectively. The other quartets investigated, A ⁴Π (Re = 1.991 Å, ωe = 555.3 cm⁻¹) and B ⁴Σ⁻ (Re = 2.758 Å, ωe = 292.2 cm⁻¹), lie at 13 291 and 24 394 cm⁻¹, respectively. The remaining doublets (²Δ, ²Σ⁺(2) and ²Π(3)) all fall below 28 000 cm⁻¹. Avoided crossings between the ²Σ⁺ states and between the ²Π states add extra complexity to this manifold of states.
Abstract:
The main purpose of this thesis project is the prediction of symptom severity and cause in data from a test battery for Parkinson's disease patients, based on data mining. The data were collected from a test battery performed by hand on a computer. We use the Chi-Square method to check which variables are important and which are not. Then we apply different data mining techniques to our normalized data and check which technique or method gives good results. The implementation of this thesis is in WEKA. We normalize our data and then apply different methods to it. The methods we used are Naïve Bayes, CART and KNN. We draw Bland-Altman plots and Spearman's correlation to check the final results and the prediction of the data. The Bland-Altman plot shows to what extent our confidence level in these data is correct, and Spearman's correlation tells us how strong the relationship is. On the basis of the results and analysis, we see that all three methods give nearly the same results. However, CART (J48 decision tree) gives good results for under- and over-predicted values, which lie between -2 and +2. The correlation between the actual and predicted values is 0.794 in CART. Cause gives a better percentage classification result than disability because it can use two classes.
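The thesis abstract relies on Spearman's correlation to assess agreement between actual and predicted scores. A minimal sketch of that statistic for tie-free data (a real analysis, like WEKA's, should handle tied ranks, e.g. via average ranks):

```python
def spearman_rho(actual, predicted):
    """Spearman's rank correlation for two equally long, tie-free sequences."""
    def ranks(vals):
        # Rank 1 = smallest value; assumes no ties
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    ra, rp = ranks(actual), ranks(predicted)
    n = len(actual)
    ma, mp = sum(ra) / n, sum(rp) / n
    # Pearson correlation computed on the ranks
    cov = sum((a - ma) * (p - mp) for a, p in zip(ra, rp))
    va = sum((a - ma) ** 2 for a in ra)
    vp = sum((p - mp) ** 2 for p in rp)
    return cov / (va * vp) ** 0.5
```

A value near 1 corresponds to the "strong relationship" the abstract reports (0.794 for CART), while values near 0 or below would indicate weak or inverted prediction.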