940 results for printed and online newspapers


Relevance: 100.00%

Abstract:

A study of Portuguese-language ethnic newspapers aimed at Brazilians who migrate to the United States. The objective is to map the printed newspapers of Massachusetts and to determine whether the Brazilian Times, the oldest newspaper circulating in the state, contributes to the acculturation and/or rootedness of people who traded Brazilian soil for the northeastern United States. The methodology combined bibliographic research, grounding the study in historical and current concepts of citizenship, community and the problem of migration; documentary research; semi-structured interviews; and content analysis of the newspaper. The documentary research gathered data on ethnic newspapers and on the Brazil-United States migratory situation. The content analysis took as its corpus 11 issues of the newspaper published from 2001 to 2011, seeking to identify and analyse the journalistic topics given priority and the type of approach taken, as well as to compare the space devoted to journalism with that devoted to advertising and to profile the advertisers. The semi-structured interviews were conducted with editors of the newspaper and with Brazilian readers living in the region where the printed edition mainly circulates. The study concludes that the Brazilian Times both helps preserve cultural aspects of Brazil and assists migrants' integration into the American way of life. While its main objective is profit, since it reserves most of its space for advertisers, it also displays a feature typical of ethnic newspapers, namely an emphasis on topics of interest to migrants (a minority), which secures it a role as an opinion shaper within the Brazilian community in the United States.

Relevance: 100.00%

Abstract:

Geographic Data Warehouses (GDW) are one of the main technologies used in decision-making processes and spatial analysis, and the literature proposes several conceptual and logical data models for them. However, little effort has been devoted to studying how spatial data redundancy affects the performance of SOLAP (Spatial On-Line Analytical Processing) queries over a GDW. In this paper, we investigate this issue. First, we compare redundant and non-redundant GDW schemas and conclude that redundancy brings severe performance losses. We then analyse indexing as a means of improving SOLAP query performance on a redundant GDW. Comparisons of the SB-index approach, the star-join aided by the R-tree and the star-join aided by GiST indicate that the SB-index reduces query-processing elapsed time by 25% to 99% for SOLAP queries defined over the spatial predicates of intersection, enclosure and containment and applied to roll-up and drill-down operations. We also investigate the impact of increasing data volume on performance: the increase did not impair the SB-index, which still greatly reduced elapsed time. Performance tests further show that the SB-index is far more compact than the star-join, requiring at most 0.20% of its volume. Finally, we propose an enhancement of the SB-index specifically designed to cope with spatial data redundancy, which improved performance by 80% to 91% on redundant GDW schemas.
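As a minimal illustration of the kind of SOLAP roll-up discussed above (aggregation filtered by a spatial predicate), the following pure-Python sketch rolls hypothetical sales facts up from city to state for rows whose geometry intersects a query window; the table, bounding boxes and window are invented, not taken from the paper:

```python
# Toy SOLAP-style roll-up: aggregate sales by state for fact rows whose
# city geometry (here, a bounding box) intersects a query window.
# All names and data are illustrative.

def intersects(a, b):
    """Axis-aligned bounding-box intersection test: (xmin, ymin, xmax, ymax)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Fact rows: (city, state, sales, city_bbox). In a redundant GDW schema
# the geometry would be repeated per fact row, as studied above.
facts = [
    ("Sao Carlos", "SP", 100.0, (0, 0, 2, 2)),
    ("Campinas",   "SP", 250.0, (3, 3, 5, 5)),
    ("Curitiba",   "PR", 180.0, (8, 8, 9, 9)),
]

def rollup_by_state(facts, window):
    """Roll-up from city to state, filtered by the intersection predicate."""
    totals = {}
    for city, state, sales, bbox in facts:
        if intersects(bbox, window):
            totals[state] = totals.get(state, 0.0) + sales
    return totals

print(rollup_by_state(facts, (1, 1, 4, 4)))  # {'SP': 350.0}
```

An index such as the SB-index would replace the linear scan over `facts` with a compact structure over the spatial predicate, which is where the reported speed-ups come from.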

Relevance: 100.00%

Abstract:

Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) plays an important role in the life cycle of Trypanosoma cruzi, and an immobilized enzyme reactor (IMER) has been developed for use in on-line screening for GAPDH inhibitors. An IMER containing human GAPDH had been reported previously; however, the same conditions produced a T. cruzi GAPDH-IMER with poor activity and stability. We therefore investigated the factors affecting the stability of the human and T. cruzi GAPDHs during immobilization, and the influence of pH and buffer type on the stability and activity of the IMERs. The resulting T. cruzi GAPDH-IMER was coupled to an analytical octyl column, which was used to achieve chromatographic separation of NAD+ from NADH. The production of NADH stimulated by D-glyceraldehyde-3-phosphate was used to probe the activity and kinetic parameters of the immobilized T. cruzi GAPDH. The Michaelis-Menten constants (Km) determined for D-glyceraldehyde-3-phosphate and NAD+ were 0.5 +/- 0.05 mM and 0.648 +/- 0.08 mM, respectively, consistent with the values obtained with the non-immobilized enzyme.
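The reported kinetics follow the standard Michaelis-Menten rate law, v = Vmax [S] / (Km + [S]), which can be evaluated directly; the Km below is the value reported above for D-glyceraldehyde-3-phosphate, while Vmax is an arbitrary placeholder since the abstract does not report it:

```python
# Michaelis-Menten rate law, v = Vmax * [S] / (Km + [S]).
# Km is taken from the abstract; Vmax = 1.0 is a placeholder.

def michaelis_menten(s_mM, km_mM, vmax=1.0):
    """Reaction rate at substrate concentration s_mM (same units as km_mM)."""
    return vmax * s_mM / (km_mM + s_mM)

KM_G3P = 0.5  # mM, immobilized T. cruzi GAPDH, D-glyceraldehyde-3-phosphate

# Standard sanity check: at [S] = Km the rate is exactly half of Vmax.
print(michaelis_menten(KM_G3P, KM_G3P))  # 0.5
```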

Relevance: 100.00%

Abstract:

We introduce the Coupled Aerosol and Tracer Transport model built on the Brazilian developments of the Regional Atmospheric Modeling System (CATT-BRAMS). CATT-BRAMS is an on-line transport model fully consistent with the simulated atmospheric dynamics. Emission sources of trace gases from biomass burning and urban-industrial-vehicular activities, and of aerosol particles from biomass burning, are obtained from several published datasets and from remote sensing. Besides grid-scale transport, the prognostic equations for tracer and aerosol mass concentration include the effects of sub-grid-scale turbulence in the planetary boundary layer, convective transport by shallow and deep moist convection, wet and dry deposition, and the plume rise associated with vegetation fires. The radiation parameterization accounts for the interaction between the simulated biomass burning aerosol particles and short- and long-wave radiation. The atmospheric model BRAMS is based on the Regional Atmospheric Modeling System (RAMS), with several improvements including the cumulus convection representation, soil moisture initialization and a surface scheme tuned for the tropics. In this paper, CATT-BRAMS is used to simulate carbon monoxide and particulate matter (PM2.5) surface fluxes and atmospheric transport during the 2002 LBA field campaigns, conducted during the dry-to-wet season transition in the southwest Amazon Basin. Model evaluation compares model results with near-surface, radiosonde and airborne measurements performed during the campaign, as well as with remote-sensing-derived products. We show that the emission strengths are consistent with the carbon monoxide observed during the LBA campaign. A relatively good agreement with MOPITT data is also obtained, despite the difficulties introduced by the MOPITT a priori assumptions.
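As a toy stand-in for the grid-scale tracer transport mentioned above (CATT-BRAMS itself uses far more elaborate numerics, plus turbulence, convection, deposition and plume-rise terms), a one-dimensional upwind advection step for a tracer mixing ratio can be sketched as:

```python
# Minimal explicit upwind advection of a tracer on a periodic 1-D grid.
# Purely illustrative; not the model's actual transport scheme.

def upwind_step(q, u, dx, dt):
    """One upwind step for constant wind u > 0 on a periodic domain."""
    c = u * dt / dx  # Courant number; stability requires c <= 1
    n = len(q)
    return [q[i] - c * (q[i] - q[(i - 1) % n]) for i in range(n)]

q = [0.0, 0.0, 1.0, 0.0, 0.0]                # tracer pulse
q1 = upwind_step(q, u=1.0, dx=1.0, dt=1.0)   # c = 1: pulse shifts one cell
print(q1)  # [0.0, 0.0, 0.0, 1.0, 0.0]
```

The scheme conserves total tracer mass on the periodic domain, which is the basic property any transport parameterization must respect.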

Relevance: 100.00%

Abstract:

An (n, d)-expander is a graph G = (V, E) such that for every X ⊆ V with |X| <= 2n - 2 we have |Γ_G(X)| >= (d + 1)|X|. A tree T is small if it has at most n vertices and maximum degree at most d. Friedman and Pippenger (1987) proved that any (n, d)-expander contains every small tree. However, their elegant proof does not seem to yield an efficient algorithm for obtaining the tree. In this paper, we give an alternative result that does admit a polynomial-time algorithm for finding the immersion of any small tree in subgraphs G of (N, D, λ)-graphs Λ, as long as G contains a positive fraction of the edges of Λ and λ/D is small enough. In several applications of the Friedman-Pippenger theorem, including those in the original paper, the (n, d)-expander G is a subgraph of an (N, D, λ)-graph as above; therefore, our result suffices to provide efficient algorithms for such previously non-constructive applications. As an example, we discuss a recent result of Alon, Krivelevich, and Sudakov (2007) on embedding nearly spanning bounded-degree trees, whose proof makes use of the Friedman-Pippenger theorem. We also give a construction, inspired by Wigderson-Zuckerman expander graphs, for which any sufficiently dense subgraph contains all trees of sizes and maximum degrees achieving essentially optimal parameters. Our algorithmic approach is based on reducing the tree embedding problem to a certain on-line matching problem for bipartite graphs, solved by Aggarwal et al. (1996).
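The expansion condition quoted above can be checked by brute force on tiny graphs; the sketch below tests |Γ(X)| >= (d + 1)|X| over all vertex sets of size at most 2n - 2, using K_{3,3} as an illustrative graph (feasible only at toy scale, since the check is exponential):

```python
# Brute-force check of the (n, d)-expander condition. Illustrative only.
from itertools import combinations

def neighborhood(adj, X):
    """Γ(X): the set of vertices adjacent to at least one vertex of X."""
    return {w for v in X for w in adj[v]}

def is_expander(adj, n, d):
    """True iff |Γ(X)| >= (d + 1)|X| for every X with 1 <= |X| <= 2n - 2."""
    V = list(adj)
    for k in range(1, min(2 * n - 2, len(V)) + 1):
        for X in combinations(V, k):
            if len(neighborhood(adj, X)) < (d + 1) * k:
                return False
    return True

# K_{3,3}: vertices 0-2 on one side, 3-5 on the other.
K33 = {i: ({3, 4, 5} if i < 3 else {0, 1, 2}) for i in range(6)}
print(is_expander(K33, n=2, d=0))  # True
print(is_expander(K33, n=2, d=1))  # False: a same-side pair has only 3 neighbours
```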

Relevance: 100.00%

Abstract:

This work deals with neural network (NN)-based gait pattern adaptation algorithms for an active lower-limb orthosis. Stable trajectories with different walking speeds are generated by an optimization process that considers the zero-moment point (ZMP) criterion and the inverse dynamics of the orthosis-patient model. A set of NNs is used to reduce the time-consuming analytical computation of the model and of the ZMP. The first NN approximates the inverse dynamics, including the ZMP computation, while the second NN operates within the optimization procedure, producing a desired trajectory adapted to the orthosis-patient interaction. This trajectory adaptation is added directly to the trajectory generator, which is also reproduced by a set of NNs. With this strategy, the trajectory can be adapted on-line during the walking cycle, instead of changing the trajectory parameters only after each step. The dynamic model of the actual exoskeleton, with interaction forces included, is used to generate simulation results. An experimental test is also performed with an active ankle-foot orthosis, in which the dynamic variables of this joint are replaced in the simulator by actual values provided by the device. It is shown that the final adapted trajectory follows the patient's intention of increasing the walking speed, thus changing the gait pattern. (C) Koninklijke Brill NV, Leiden, 2011
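As a sanity-check sketch of the ZMP criterion used in the trajectory optimization, the standard flat-ground point-mass formula (segment angular momenta neglected) can be evaluated directly; the segment data below are illustrative, not from the orthosis-patient model:

```python
# ZMP x-coordinate for a set of point masses on flat ground:
# x_zmp = sum m*((az + g)*x - ax*z) / sum m*(az + g).
# A simplified textbook form; the paper's model is richer.

G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(masses):
    """masses: list of (m, x, z, ax, az) per segment; returns ZMP x."""
    num = sum(m * ((az + G) * x - ax * z) for m, x, z, ax, az in masses)
    den = sum(m * (az + G) for m, x, z, ax, az in masses)
    return num / den

# Static case: the ZMP coincides with the ground projection of the CoM.
static_body = [(50.0, 0.1, 1.0, 0.0, 0.0), (10.0, 0.3, 0.5, 0.0, 0.0)]
print(zmp_x(static_body))
```

Stability checking then amounts to verifying that this point stays inside the support polygon of the feet throughout the gait cycle.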

Relevance: 100.00%

Abstract:

Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information in large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, the input data can be structured, semi-structured or unstructured, and values can be textual, categorical or numerical. One important characteristic of data mining is its ability to handle data that are large in volume, distributed, time-variant, noisy and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining is useful for market basket problems; clustering algorithms can discover trends in unsupervised learning problems; classification algorithms can be applied to decision-making problems; and sequential and time series mining algorithms can be used for event prediction, fault detection and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are in practical use. According to Sebastiani (2002), the main ones can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbours (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
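Among the example-based methods cited above, k-nearest neighbours (Duda & Hart, 1973) is simple enough to sketch in a few lines; the training points below are invented for illustration:

```python
# Minimal k-nearest-neighbours classifier with Euclidean distance,
# an example-based method: no model is trained, prediction is a
# majority vote among the k closest training points.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.2, 0.1), k=3))  # a
```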

Relevance: 100.00%

Abstract:

To determine the effect of slurry rheology on industrial grinding performance, 45 surveys were conducted on 16 full-scale grinding mills at five sites. Four operating variables (mill throughput, slurry density, slurry viscosity and feed fines content) were investigated. The rheology of the mill discharge slurries was measured either on-line or off-line, and the data were processed with a standard procedure to obtain a full range of flow curves. Multi-linear regression was employed as the statistical tool to determine whether rheological effects influence industrial grinding, and to assess the influence of the four operating conditions on mill performance in terms of the Grinding Index, a criterion describing the overall breakage of particles across the mill. The results show that slurry rheology does influence industrial grinding. The trends of these effects on the Grinding Index depend on the rheological nature of the slurry: whether the slurries are dilatant or pseudoplastic, and whether they exhibit a high or low yield stress. The interpretation of the regression results is discussed, the observed effects are summarised, and the potential for incorporating rheological principles into process control is considered. Guidelines are established to improve industrial grinding operations based on knowledge of the rheological effects. This study confirms some trends in the effect of slurry rheology on grinding reported in the literature, and extends them to a broader understanding of the relationship between slurry properties and rheology and their effects on industrial milling performance. (C) 2002 Elsevier Science B.V. All rights reserved.
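The multi-linear regression step described above can be sketched as ordinary least squares via the normal equations; the predictors and data below are invented stand-ins for the mill operating variables, not the survey data:

```python
# Ordinary least squares via the normal equations X^T X b = X^T y,
# solved by Gaussian elimination. Toy data; a real study would use a
# statistics package and report significance of each coefficient.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Least-squares coefficients; each row of X starts with 1 (intercept)."""
    n = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    return solve(XtX, Xty)

# Data generated exactly as y = 2 + 3*x1 - x2, so OLS must recover (2, 3, -1).
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
y = [2 + 3 * x1 - x2 for _, x1, x2 in X]
print(ols(X, y))
```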

Relevance: 100.00%

Abstract:

Ten surveys of the ball milling circuit at the Mt Isa Mines (MIM) Copper Concentrator were conducted to identify any changes in slurry rheology caused by the use of a chrome ball charge, and the associated effect on grinding performance. Slurry rheology was measured using an on-line viscometer. The data were mass balanced and analysed with statistical tools. Comparison of the rheograms showed that slurry density and fines content affected slurry rheology significantly, while the effect of the chrome ball charge was negligible. Statistical analysis showed the effects of mill throughput and cyclone efficiency on the Grinding Index (a term describing the overall breakage). There was no difference in the Grinding Index between the chrome ball charge and the ordinary steel ball charge. (C) 2002 Elsevier Science Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

The increasing availability of mobility data, and awareness of its importance and value, have motivated many researchers to develop models and tools for analysing movement data. This paper presents a brief survey of significant research on the modelling, processing and visualization of data about moving objects. We identify key research fields that will provide better features for online analysis of movement data. As a result of the literature review, we suggest a generic multi-layer architecture for the development of an online analysis processing software tool, which will guide the future work of our team.
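A minimal reading of such a multi-layer architecture, with hypothetical layer names and a toy speed analysis (none of this is from the paper), might look like:

```python
# Illustrative three-layer movement-data pipeline: data layer (raw
# trajectory samples), processing layer (derived per-segment speeds),
# analysis layer (a simple online-style query). Names are hypothetical.
import math

# Data layer: raw trajectory samples (t seconds, x metres, y metres).
trajectory = [(0.0, 0.0, 0.0), (1.0, 3.0, 4.0), (2.0, 3.0, 4.0)]

# Processing layer: derive per-segment speeds from consecutive samples.
def segment_speeds(points):
    return [math.dist(p1[1:], p0[1:]) / (p1[0] - p0[0])
            for p0, p1 in zip(points, points[1:])]

# Analysis layer: a query over the derived speeds.
def max_speed(points):
    return max(segment_speeds(points))

print(max_speed(trajectory))  # 5.0
```

Layering keeps each concern replaceable: the analysis layer never touches raw samples, only derived features, which is what makes online processing of streams tractable.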

Relevance: 100.00%

Abstract:

Article translated into Mandarin, published in Nature and Human Life E-Academic Magazine, 6 (2015), pp. 19-32. http://www.ziranyurensheng.org/current-2961621002.html.

Relevance: 100.00%

Abstract:

The near-instantaneous dissemination of information made possible by information and communication technologies has practically trivialized the concept of distance. The media inevitably adapted, so that information can now reach all of us in varied forms: the printed, broadcast and online press. Books, newspapers, the internet and other media are full of data, tables and charts which, regardless of the medium used, bring information to us in a statistical language that offers objectivity and simplification, for those who know how to interpret it. That information may originate in every area of science and can be used in many contexts: demography, electoral polling, financial studies, unemployment rates, quality control, cost of living, market trends for products and brands, audience figures, industry, human resources, health, market and opinion research, and so on. This justifies the need for statistical literacy for everyone, so as to promote active, critical and informed participation by any citizen with respect to the results presented to them (Fernandes, Sousa & Ribeiro, 2004). Strengthening statistics in basic and secondary education was therefore inevitable; to confirm this, it suffices to compare the mathematics syllabuses of the 1980s and 1990s across all school years. The Curriculum and Evaluation Standards for School Mathematics, published in 1989 by the NCTM, introduced standards on statistics and probability for all levels of schooling, strongly encouraging the use of innovative means and methods. Statistics is the area of mathematics that has developed the most over the last 30 years. [introduction]

Relevance: 100.00%

Abstract:

Doctoral thesis in Didactics and Training

Relevance: 100.00%

Abstract:

Dissertation submitted in fulfilment of the requirements for the degree of Master in Communication Sciences

Relevance: 100.00%

Abstract:

The book Proposta Metodológica de Macroeducação is a collection published by the Brazilian Agricultural Research Corporation, Embrapa. It is volume 2 of the series "Educação Ambiental para o desenvolvimento Sustentável" ("Environmental Education for Sustainable Development"), which comprises seven volumes, all of them collective works. They have the merit of having been produced in a participatory way, from the choice of titles and themes through to the revisions. As for the reach of the book under review, 8,074 copies have been printed since 2004. It must be read in print, since it is not yet available for download. Its price is affordable (around R$ 18.00) in Embrapa bookshops, and cheaper copies can be found in the main online bookshops, for as little as R$ 5.40.