940 results for On-line banking
Abstract:
Internet banking has evolved rapidly in recent years. While in 1996 only a few pioneers ventured to offer banking services over the Web, today millions of users worldwide have turned Internet banking into one of the most successful applications in electronic commerce. This growth, however, has not been homogeneous across the banking market. Two factors that influence the degree of a bank's investment in on-line Web services are the bank's size and the retail or wholesale market orientation of the services offered. This study shows that larger banks and retail-oriented services are more consolidated in their use of Internet banking, although smaller banks and wholesale-market services already have important initiatives using the Internet as a banking channel.
Abstract:
Rationalization and modernization of Banco do Nordeste's system of norms, through the consolidation and systematization of scattered rules found in various manuals, circulars and circular notices, whose contents overlapped and even contradicted one another. Computerized solutions were implemented not only for the drafting and publication process but also for user queries, reducing the norms from about 3,500 pages to about 600, that is, from about 612,500 pages across the 175 branches to about 105,000. Fast, on-line mechanisms were established for consulting the norms and for submitting queries, answered on the same day and, in most cases, within the hour. Tools were implemented for compiling answers to the most frequent queries, leading to a permanent reduction in the number of queries, since users are given better conditions to grasp the normative content. Personnel costs were reduced (the norms teams were cut from 39 people to a single team of 5), as were material costs (printing, ink and paper) and telephone costs.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Geographic Data Warehouses (GDW) are one of the main technologies used in decision-making processes and spatial analysis, and the literature proposes several conceptual and logical data models for GDW. However, little effort has been focused on studying how spatial data redundancy affects SOLAP (Spatial On-Line Analytical Processing) query performance over GDW. In this paper, we investigate this issue. Firstly, we compare redundant and non-redundant GDW schemas and conclude that redundancy is associated with high performance losses. We also analyze the issue of indexing, aiming at improving SOLAP query performance on a redundant GDW. Comparisons of the SB-index approach, the star-join aided by R-tree and the star-join aided by GiST indicate that the SB-index significantly improves the elapsed time in query processing, from 25% up to 99%, for SOLAP queries defined over the spatial predicates of intersection, enclosure and containment and applied to roll-up and drill-down operations. We also investigate the impact of an increase in data volume on performance. The increase did not impair the performance of the SB-index, which greatly improved the elapsed time in query processing. Performance tests also show that the SB-index is far more compact than the star-join, requiring at most 0.20% of its volume. Moreover, we propose a specific enhancement of the SB-index to deal with spatial data redundancy. This enhancement improved performance from 80% to 91% for redundant GDW schemas.
Abstract:
Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) plays an important role in the life cycle of Trypanosoma cruzi, and an immobilized enzyme reactor (IMER) has been developed for use in the on-line screening for GAPDH inhibitors. An IMER containing human GAPDH has been previously reported; however, those conditions produced a T. cruzi GAPDH-IMER with poor activity and stability. The factors affecting the stability of the human and T. cruzi GAPDHs in the immobilization process, and the influence of pH and buffer type on the stability and activity of the IMERs, have been investigated. The resulting T. cruzi GAPDH-IMER was coupled to an analytical octyl column, which was used to achieve chromatographic separation of NAD+ from NADH. The production of NADH stimulated by D-glyceraldehyde-3-phosphate was used to investigate the activity and kinetic parameters of the immobilized T. cruzi GAPDH. The Michaelis-Menten constant (Km) values determined for D-glyceraldehyde-3-phosphate and NAD+ were 0.5 +/- 0.05 mM and 0.648 +/- 0.08 mM, respectively, which were consistent with the values obtained using the non-immobilized enzyme.
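The Km values reported above parameterize the standard Michaelis-Menten rate law, v = Vmax [S] / (Km + [S]). A minimal sketch of that relationship (the Vmax value is an assumed placeholder, not a figure from the paper):

```python
def michaelis_menten(s, vmax, km):
    """Reaction velocity v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Km reported above for D-glyceraldehyde-3-phosphate (mM);
# Vmax here is an illustrative placeholder, not a measured value.
km_g3p = 0.5
vmax = 1.0

# By definition, at [S] = Km the velocity is half of Vmax.
half_v = michaelis_menten(km_g3p, vmax, km_g3p)
print(half_v)  # 0.5
```

This half-saturation property is what makes Km a convenient summary of enzyme affinity, independent of the absolute rate scale.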
Abstract:
We introduce the Coupled Aerosol and Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CATT-BRAMS). CATT-BRAMS is an on-line transport model fully consistent with the simulated atmospheric dynamics. Emission sources of trace gases from biomass burning and urban-industrial-vehicular activities, and of aerosol particles from biomass burning, are obtained from several published datasets and remote sensing information. The tracer and aerosol mass concentration prognostics include, in addition to the grid-scale transport, the effects of sub-grid-scale turbulence in the planetary boundary layer, convective transport by shallow and deep moist convection, wet and dry deposition, and plume rise associated with vegetation fires. The radiation parameterization takes into account the interaction between the simulated biomass burning aerosol particles and short- and long-wave radiation. The atmospheric model BRAMS is based on the Regional Atmospheric Modeling System (RAMS), with several improvements including the cumulus convection representation, soil moisture initialization and a surface scheme tuned for the tropics. In this paper the CATT-BRAMS model is used to simulate carbon monoxide and particulate material (PM2.5) surface fluxes and atmospheric transport during the 2002 LBA field campaigns, conducted during the transition from the dry to the wet season in the southwest Amazon Basin. Model evaluation is addressed by comparing model results with near-surface, radiosonde and airborne measurements performed during the field campaign, as well as with remote-sensing-derived products. We show that the emission strengths match the carbon monoxide observed during the LBA campaign. A relatively good agreement with MOPITT data is also obtained, despite the difficulties implied by the MOPITT a priori assumptions.
Abstract:
An (n, d)-expander is a graph G = (V, E) such that for every X ⊆ V with |X| <= 2n - 2 we have |Γ_G(X)| >= (d + 1)|X|. A tree T is small if it has at most n vertices and maximum degree at most d. Friedman and Pippenger (1987) proved that any (n, d)-expander contains every small tree. However, their elegant proof does not seem to yield an efficient algorithm for obtaining the tree. In this paper, we give an alternative result that does admit a polynomial-time algorithm for finding the immersion of any small tree in subgraphs G of (N, D, lambda)-graphs Lambda, as long as G contains a positive fraction of the edges of Lambda and lambda/D is small enough. In several applications of the Friedman-Pippenger theorem, including the ones in the original paper of those authors, the (n, d)-expander G is a subgraph of an (N, D, lambda)-graph as above. Therefore, our result suffices to provide efficient algorithms for such previously non-constructive applications. As an example, we discuss a recent result of Alon, Krivelevich, and Sudakov (2007) concerning embedding nearly spanning bounded-degree trees, the proof of which makes use of the Friedman-Pippenger theorem. We shall also show a construction inspired by Wigderson-Zuckerman expander graphs for which any sufficiently dense subgraph contains all trees of sizes and maximum degrees achieving essentially optimal parameters. Our algorithmic approach is based on a reduction of the tree embedding problem to a certain on-line matching problem for bipartite graphs, solved by Aggarwal et al. (1996).
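The expansion condition in the definition above can be checked directly, if very inefficiently, by brute force over vertex subsets. A minimal sketch (the graph and parameters are toy choices for illustration, not from the paper):

```python
from itertools import combinations

def neighborhood(adj, xs):
    """Gamma_G(X): the set of all vertices adjacent to some vertex of X."""
    out = set()
    for v in xs:
        out |= adj[v]
    return out

def is_expander(adj, n, d):
    """Check the (n, d)-expander condition from the abstract:
    |Gamma_G(X)| >= (d + 1)|X| for every X with |X| <= 2n - 2.
    Brute force, so only feasible for very small graphs."""
    vertices = list(adj)
    for size in range(1, min(2 * n - 2, len(vertices)) + 1):
        for xs in combinations(vertices, size):
            if len(neighborhood(adj, xs)) < (d + 1) * len(xs):
                return False
    return True

# Toy example: the complete graph K5, checked as a (2, 1)-expander.
adj = {v: set(range(5)) - {v} for v in range(5)}
print(is_expander(adj, n=2, d=1))  # True
```

The exponential subset enumeration is exactly the obstacle the paper's polynomial-time embedding algorithm avoids; this check is only meant to make the definition concrete.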
Abstract:
This work deals with neural network (NN)-based gait pattern adaptation algorithms for an active lower-limb orthosis. Stable trajectories with different walking speeds are generated during an optimization process considering the zero-moment point (ZMP) criterion and the inverse dynamics of the orthosis-patient model. Additionally, a set of NNs is used to decrease the time-consuming analytical computation of the model and ZMP. The first NN approximates the inverse dynamics including the ZMP computation, while the second NN works in the optimization procedure, giving an adapted desired trajectory according to the orthosis-patient interaction. This trajectory adaptation is added directly to the trajectory generator, which is also reproduced by a set of NNs. With this strategy, it is possible to adapt the trajectory during the walking cycle in an on-line procedure, instead of changing the trajectory parameters after each step. The dynamic model of the actual exoskeleton, with interaction forces included, is used to generate simulation results. An experimental test is also performed with an active ankle-foot orthosis, in which the dynamic variables of this joint are replaced in the simulator by actual values provided by the device. It is shown that the final adapted trajectory follows the patient's intention of increasing the walking speed, thus changing the gait pattern. (C) Koninklijke Brill NV, Leiden, 2011
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information in large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured. Data can be text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied to decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
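Of the classifier families listed above, k-nearest neighbors is the simplest to state as code. A minimal sketch of the majority-vote rule (the dataset and function name are illustrative, not from any of the cited works):

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points under Euclidean distance. `train` is a list of
    (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Tiny made-up dataset: two clusters labelled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_classify(train, (0.2, 0.1)))  # a
```

As an example-based method, k-NN does no training at all; the entire cost is paid at query time, which is why it contrasts with the eager learners (decision trees, neural networks) in the taxonomy above.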
Abstract:
To determine the effect of slurry rheology on industrial grinding performance, 45 surveys were conducted on 16 full-scale grinding mills at five sites. Four operating variables - mill throughput, slurry density, slurry viscosity and feed fines content - were investigated. The rheology of the mill discharge slurries was measured either on-line or off-line, and the data were processed using a standard procedure to obtain a full range of flow curves. Multi-linear regression was employed as a statistical analysis tool to determine whether or not rheological effects exert an influence on industrial grinding, and to assess the influence of the four mill operating conditions on mill performance in terms of the Grinding Index, a criterion describing the overall breakage of particles across the mill. The results show that slurry rheology does influence industrial grinding. The trends of these effects on the Grinding Index depend upon the rheological nature of the slurry - whether the slurries are dilatant or pseudoplastic, and whether they exhibit a high or low yield stress. The interpretation of the regression results is discussed, the observed effects are summarised, and the potential for incorporating rheological principles into process control is considered. Guidelines are established to improve industrial grinding operations based on knowledge of the rheological effects. This study confirms some trends in the effect of slurry rheology on grinding reported in the literature, and extends these to a broader understanding of the relationship between slurry properties and rheology, and their effects on industrial milling performance. (C) 2002 Elsevier Science B.V. All rights reserved.
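The multi-linear regression described above fits the Grinding Index as a linear combination of the four operating variables. A minimal least-squares sketch on synthetic stand-in data (the numbers are made up for illustration; no survey data are reproduced here):

```python
import numpy as np

# Illustrative stand-ins for the four operating variables surveyed
# above (throughput, slurry density, viscosity, fines content) and a
# Grinding Index response; all values are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(size=(45, 4))          # 45 surveys, 4 variables
true_coefs = np.array([2.0, -1.0, 0.5, 1.5])
y = X @ true_coefs + 3.0               # noise-free for the sketch

# Multi-linear regression via least squares, with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coefs, 3))  # intercept first, then the four slopes
```

On real survey data the fit would of course not be exact; the signs and magnitudes of the slope estimates are what the study uses to judge whether each operating variable influences the Grinding Index.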
Abstract:
Ten surveys of the ball milling circuit at the Mt Isa Mines (MIM) Copper Concentrator were conducted, aiming to identify any changes in slurry rheology caused by the use of a chrome ball charge, and the associated effect on grinding performance. Slurry rheology was measured using an on-line viscometer. The data were mass balanced and analysed with statistical tools. Comparison of the rheograms demonstrated that slurry density and fines content affected slurry rheology significantly, while the effect of the chrome ball charge was negligible. Statistical analysis showed the effects of mill throughput and cyclone efficiency on the Grinding Index (a term describing the overall breakage). There was no difference in the Grinding Index between using the chrome ball charge and the ordinary steel ball charge. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
The Benefit Granting Decentralization Project is an integral part of the Action Plan of the Ministério da Previdência e Assistência Social, the Instituto Nacional do Seguro Social (INSS) and Dataprev. Dataprev is responsible for developing and producing information technology solutions for the Brazilian social security system. As such, we work with enthusiasm and determination to develop systems and applications that improve the level of service to the public, as well as to organize the large volume of data stored in the social security computers. For this reason, as part of the evolution of PRISMA - the data-capture application running at the Social Security offices - Dataprev regionalized the processing of benefit granting. The Project decentralized the database and part of the centralized processing, using SMP6400 machines running the Oracle DBMS and the Unix operating system. As soon as beneficiaries' data are entered into the system, they are checked against the information in the Cadastro Nacional de Informações Sociais (CNIS) and the Cadastro Nacional de Benefícios, validating the process or immediately flagging any error or duplication. The new technology marked the beginning of a client-server architecture, moving the routines for granting benefits and for computing the benefit start date (DIB) and initial monthly income (RMI) to the regional computers, and optimizing the on-line process. The decentralized processing still maintains the same security criteria as the central system. This initiative brings the INSS in line with the state of the art in database technology and connectivity.
Abstract:
The increasing availability of mobility data, and awareness of its importance and value, have been motivating many researchers to develop models and tools for analyzing movement data. This paper presents a brief survey of significant research works on the modeling, processing and visualization of data about moving objects. We identify some key research fields that will provide better features for the online analysis of movement data. As a result of the literature review, we suggest a generic multi-layer architecture for the development of an online analysis processing software tool, which will be used to define the future work of our team.
Abstract:
Article translated into Mandarin, published in Nature and Human Life E-Academic Magazine, 6 (2015), pp. 19-32. http://www.ziranyurensheng.org/current-2961621002.html.
Abstract:
use of additives (Mg/P and the nitrification inhibitor dicyandiamide - DCD) on nitrous oxide emission during swine slurry composting. The experiment was run in duplicate; the gas was monitored for 30 days under different treatments (control, DCD, Mg/P and DCD + Mg/P). The nitrous oxide emission rate (mg of N2O-N.day-1) and the accumulated emissions were calculated to compare the treatments. Results showed that N2O-N emissions were reduced by approximately 70, 46 and 96% by the addition of DCD, MgCl2.6H2O + H3PO4 and both additives, respectively, compared to the control. Keywords: Composting; swine slurry; additives; nitrous