924 results for "Account errors"
Abstract:
Latin America has recently experienced three cycles of capital inflows, the first two ending in major financial crises. The first took place between 1973 and the 1982 'debt crisis'. The second took place between the 1989 'Brady bonds' agreement (and the beginning of the economic reforms and financial liberalisation that followed) and the Argentinian 2001/2002 crisis, and ended in four major crises (as well as the 1997 one in East Asia): Mexico (1994), Brazil (1999), and two in Argentina (1995 and 2001/02). Finally, the third inflow cycle began in 2003, as soon as international financial markets felt reassured by the surprisingly neo-liberal orientation of President Lula's government; this cycle intensified in 2004 with the beginning of a (purely speculative) commodity price boom, and actually strengthened after a brief interlude following the 2008 global financial crash. At the time of writing (mid-2011) this cycle is still unfolding, although it is already showing considerable signs of distress. The main aim of this paper is to analyse the financial crises resulting from the second cycle (both in Latin America and in East Asia) from the perspective of Keynesian/Minskyian/Kindlebergian financial economics. I will attempt to show that no matter how diversely these newly financially liberalised developing countries tried to deal with the absorption problem created by the subsequent surges of inflows (and they did follow different routes), they invariably ended up in a major crisis. As a result (and despite the insistence of mainstream analysis), these financial crises took place mostly due to factors that were intrinsic (or inherent) to the workings of over-liquid and under-regulated financial markets; as such, they were both fully deserved and fairly predictable. Furthermore, these crises point not just to major market failures, but to a systemic market failure: evidence suggests that they were the spontaneous outcome of actions by utility-maximising agents operating freely in lightly ('light-touch') regulated, over-liquid financial markets. That is, these crises are clear examples that financial markets can be driven by buyers who take little notice of underlying values: investors have incentives to interpret information in a systematically biased fashion. 'Fat tails' also occurred because, under these circumstances, there is a high likelihood of self-made disastrous events. In other words, markets are not always right; indeed, in the case of financial markets they can be seriously wrong as a whole. Also, as the recent collapse of 'MF Global' indicates, the capacity of 'utility-maximising' agents operating in unregulated and over-liquid financial markets to learn from previous mistakes seems rather limited.
Abstract:
SOUZA, Anderson A. S.; SANTANA, André M.; BRITTO, Ricardo S.; GONÇALVES, Luiz Marcos G.; MEDEIROS, Adelardo A. D. Representation of Odometry Errors on Occupancy Grids. In: INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 5., 2008, Funchal, Portugal. Proceedings... Funchal, Portugal: ICINCO, 2008.
Abstract:
To determine the prevalence of refractive errors in the public and private school systems in the city of Natal, Northeastern Brazil. Methods: Refractometry was performed on both eyes of 1,024 randomly selected students enrolled in the 2001 school year, and the data were evaluated with the SPSS Data Editor 10.0. Ametropia was divided into: 1) 0.1 to 0.99 diopter (D); 2) 1.0 to 2.99 D; 3) 3.00 to 5.99 D; and 4) 6 D or greater. Astigmatism was grouped as: I) with-the-rule (axis from 0 to 30 and 150 to 180 degrees); II) against-the-rule (axis between 60 and 120 degrees); and III) oblique (axis between >30 and <60 and between >120 and <150 degrees). The age groups were categorized as: 1) 5 to 10 years; 2) 11 to 15 years; 3) 16 to 20 years; and 4) 21 years or over. Results: Among refractive errors, hyperopia was the most common (71%), followed by astigmatism (34%) and myopia (13.3%). Of the students with myopia and hyperopia, 48.5% and 34.1%, respectively, also had astigmatism. With respect to diopters, 58.1% of myopic students were in group 1, with 39% distributed between groups 2 and 3. Hyperopia was mostly found in group 1 (61.7%), as was astigmatism (70.6%). Comparing the astigmatism axes of both eyes, 92.5% of students with a with-the-rule axis had it in both eyes, while the percentage was 82.1% for the against-the-rule axis and lower still (50%) for the oblique axis. Conclusion: The results differ from those of most international studies, mainly from East Asia, which point to myopia as the most common refractive error, and corroborate the national ones, in which hyperopia predominates.
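The axis bands above are mechanical enough to state directly in code. A minimal sketch of the grouping rule (the function name and degree convention are illustrative, not from the paper):

```python
def classify_astigmatism_axis(axis_deg: float) -> str:
    """Group an astigmatism axis (0-180 degrees) using the bands
    described in the abstract."""
    if 0 <= axis_deg <= 30 or 150 <= axis_deg <= 180:
        return "I: with-the-rule"
    if 60 <= axis_deg <= 120:
        return "II: against-the-rule"
    if 30 < axis_deg < 60 or 120 < axis_deg < 150:
        return "III: oblique"
    raise ValueError("axis must be between 0 and 180 degrees")

print(classify_astigmatism_axis(172))  # I: with-the-rule
```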
Abstract:
The thermal and aerial environment conditions inside animal housing facilities change throughout the day owing to the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of points distributed spatially over the facility's area must be monitored. This work proposes that the variation over time of the environmental variables of interest to animal production, monitored inside animal housing facilities, can be accurately modeled from records that are discrete in time. The objective of this work was to develop a numerical method to correct the temporal variations of these environmental variables, transforming the data so that the observations become independent of the time spent during measurement. The proposed method approximates the values recorded with time delays to those expected at the exact moment of interest, as if the data had been measured simultaneously at that moment at all spatially distributed points. The numerical correction model for environmental variables was validated for the environmental parameter air temperature: the values corrected by the method did not differ, by Tukey's test at 5% probability, from the actual values recorded by dataloggers.
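The abstract does not reproduce the correction formula, but one common way to implement such a time correction is to use a continuously logged reference series (e.g., a fixed datalogger) to estimate how the variable drifted between the reading time and the target time. A minimal sketch under that assumption (all names and numbers are illustrative):

```python
import numpy as np

def correct_to_reference_time(t_measured, value_measured,
                              t_ref, ref_series, t_target):
    """Shift a reading taken at t_measured to the target time t_target,
    using a continuously logged reference series to estimate how the
    variable drifted in between."""
    # Reference values at the measurement time and at the target time,
    # obtained by linear interpolation of the reference series.
    ref_at_measured = np.interp(t_measured, t_ref, ref_series)
    ref_at_target = np.interp(t_target, t_ref, ref_series)
    # Apply the reference drift to the spatially sampled reading.
    return value_measured + (ref_at_target - ref_at_measured)

# Example: a point read at t = 10.5 min, corrected back to t = 10.0 min.
t_ref = np.arange(0, 21, 1.0)        # reference logged every minute
ref_series = 24.0 + 0.05 * t_ref     # slowly warming reference
print(correct_to_reference_time(10.5, 25.3, t_ref, ref_series, 10.0))
```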
Abstract:
This is a descriptive, cross-sectional study with a quantitative approach, which aimed to analyze the association between the hospital infection rate for the insertion and maintenance of central venous catheters (CVC) and breaches of protocols (rules and routines) by health professionals assisting patients in the ICU of a university hospital in Natal/RN. Data were collected through observation with a structured form, consultation of medical records, and structured questionnaires with health professionals. The results were organized, tabulated, categorized, and analyzed using SPSS 14.0. The subjects were characterized using descriptive and inferential statistics appropriate to the nature of the variables, with analysis of variance (ANOVA) and Spearman's correlation test; the information obtained was discussed considering the mean, standard deviation, coefficient of variation, and standard error. The variables that showed a higher level of correlation were submitted to significance tests. As for the results, 71% of participants were female and 29% male, and age ranged from 18 to 85 years (52.6 ± 22.5). At insertion, the number of errors varied from 0 to 5 (1.2 ± 1.4); during maintenance, the average was 2.3 ± 0.9 errors, ranging from 0 to 4. Across the insertion and maintenance of the CVC, patients who developed an infection were exposed to 2 to 9 errors (4.2 ± 1.7), while for those who did not, the variation was from 0 to 5 errors (2.8 ± 1.5). The correlation between the risk of infection in the whole process and the risk of infection at insertion was strong and significant (r = 0.845, p = 0.000), while for the risk of infection during maintenance it was moderate and significant (r = 0.551, p = 0.001). The errors made by professionals in the procedures for insertion and maintenance of the catheter, associated with other conditions, were shown to be a risk factor for hospital infection.
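For reference, the kind of rank correlation reported above is straightforward to compute with standard tools; a minimal sketch with made-up numbers (the study's raw data are not given here):

```python
from scipy.stats import spearmanr

# Hypothetical per-patient error counts at insertion and an overall
# infection-risk score; these numbers are illustrative only.
insertion_errors = [0, 1, 1, 2, 3, 5, 4, 2, 0, 3]
overall_risk     = [1, 2, 2, 4, 5, 9, 7, 4, 1, 6]

rho, p = spearmanr(insertion_errors, overall_risk)
print(f"Spearman r = {rho:.3f}, p = {p:.3f}")
```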
Abstract:
The Global Positioning System (GPS) transmits signals on two frequencies, which allows the first order ionospheric effect to be corrected using the ionosphere-free combination. However, the second and third order ionospheric effects, which combined may cause errors of the order of centimeters in GPS measurements, still remain. In this paper, the second and third order ionospheric effects were taken into account in GPS data processing for the Brazilian region. GPS data with and without these corrections were processed with the relative and precise point positioning (PPP) approaches, using the Bernese V5.0 software and the PPP software (GPSPPP) from NRCAN (Natural Resources Canada), respectively. The second and third order corrections were applied to the GPS data with in-house software that reads a RINEX file, applies the corrections to the GPS observables, and writes a corrected RINEX file. For the relative processing case, a Brazilian network with long baselines was processed in daily solutions over a period of approximately one year. For the PPP case, data collected by the IGS FORT station from 2001 to 2006 were processed, and a seasonal analysis showed semi-annual and annual variations in the vertical component. In addition, a geographical variation analysis of the PPP results for the Brazilian region confirmed that equatorial regions are more affected by the second and third order ionospheric effects than other regions.
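The starting point mentioned above, the first order ionosphere-free combination, is standard; a minimal sketch for GPS L1/L2 pseudoranges (the second and third order terms, whose formulas the abstract does not reproduce, are not shown):

```python
# First order ionosphere-free combination of GPS pseudoranges.
F1 = 1575.42e6  # L1 carrier frequency, Hz
F2 = 1227.60e6  # L2 carrier frequency, Hz

def ionosphere_free(p1: float, p2: float) -> float:
    """Combine L1/L2 pseudoranges (meters) to remove the first order
    ionospheric delay; second and third order effects remain."""
    g = (F1 / F2) ** 2
    return (g * p1 - p2) / (g - 1)

# Example: a ~20,000 km range with a few meters of dispersive delay.
print(ionosphere_free(20_000_005.0, 20_000_008.2))
```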
Abstract:
Systems based on artificial neural networks achieve high computational rates due to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. This paper presents a novel approach to solving the robust parameter estimation problem for nonlinear models with unknown-but-bounded errors and uncertainties. More specifically, a modified Hopfield network is developed, and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the network's convergence to the equilibrium points, and a solution of the robust estimation problem with unknown-but-bounded errors corresponds to an equilibrium point of the network. Simulation results are presented as an illustration of the proposed approach. Copyright (C) 2000 IFAC.
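The valid-subspace construction itself is not given in the abstract, but the Hopfield-style iteration it builds on is compact. A generic sketch of such an iteration toward an equilibrium point (the weights, biases, and contraction scaling are placeholders, not the paper's values):

```python
import numpy as np

def hopfield_iterate(W, b, v0, steps=200):
    """Generic continuous Hopfield-style iteration: repeatedly pass the
    state through the weight matrix and a saturating activation.  With a
    suitably constrained W (in the paper, via the valid-subspace
    technique), the state settles at an equilibrium point."""
    v = v0.copy()
    for _ in range(steps):
        v = np.tanh(W @ v + b)   # synchronous update with saturation
    return v

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
W = -0.1 * (A + A.T)             # small symmetric weights: a contraction
b = 0.1 * rng.normal(size=n)
print(hopfield_iterate(W, b, np.zeros(n)))
```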
Abstract:
This research aimed to contribute to the characterization of a neuropsychological phenotype of adolescents with Down Syndrome (DS). A multiple-case study of six adolescents (three males and three females, aged 13 to 14 years) diagnosed with DS and treated at two institutions in the city of Natal (Brazil) was conducted. Participants were assessed using the methodological approach developed by Luria, which is composed of four complementary stages. The first aimed to investigate the qualitative impact of DS on the school life and social development of the adolescents; dimensions of behavior and social-affective aspects of the members of the study were investigated. In the second stage, participants performed a battery of neuropsychological tests in order to identify strengths and weaknesses in their cognitive functioning. The third stage was incorporated into the second in order to analyze the quality of the participants' activity during the quantitative evaluation, highlighting the strategies used and the errors produced, among other indicators. Lastly, the fourth stage refers to the intervention with the participants. Although this was not a specific objective of the study, it is argued that the outcome of this research will support the practice of the different professionals working with this clinical group. The results of the first stage emphasized the presence of difficulties in the social relationships and school life of the observed adolescents. In turn, the second and third stages pointed to difficulties in tasks involving logical and abstract thinking, as well as difficulties in expressive language. In relation to visual memory, better performance was observed in activities of lower complexity, i.e., with less interference from executive functioning, particularly in terms of planning and initiative. Finally, motor and mental delays were found, significantly affecting performance across different cognitive areas. The results highlighted here can be considered input for future interventions, suggesting the need to develop projects that take into account the different constituent aspects of the human subject, involving not only the individual with developmental changes but also their families, teachers, schools, and society in general.
Abstract:
This study aimed to identify and review the conceptual differences presented by textbook authors on the theme of electron configuration. It shows the changing concepts of electron configuration, their implications for the cognitive development of students, and their relations with the contemporary world. We identified possible obstacles generated in textbooks by the search for simplifications, in particular divergent treatments of the energy of sublevels in the electron configuration. The analysis covered several textbooks, including general chemistry and inorganic chemistry works, without distinguishing between levels of education, whether secondary or higher. It was found that some school textbooks agreed with the higher education ones, while others did not. To check the consistency of what was discussed, a survey of 30 teachers was carried out, and divergent responses were found, particularly with respect to the energy of sublevels and the authorship of the diagram that facilitates the electron configuration. Of the total, 22 teachers (73.33%) answered correctly about the most energetic sublevel of calcium (Ca), while 24 teachers (80%) responded incorrectly about iron. As for the authorship of the diagram used to facilitate the electron configuration, 93.33% of the teachers indicated that they followed a diagram, which they called the "Linus Pauling diagram"; one teacher (3.33%) indicated that the diagram was authored by Madelung, and one (3.33%) did not answer the question. It was observed that a more detailed assessment of older writings is necessary, as the search for simplifications and generalizations that are not plausible leads to errors and negative consequences for understanding the properties of many substances. Quantum mechanics combined with spectroscopic data should be part of a more thorough analysis, especially when extending descriptions from monoelectronic to polyelectronic atoms, because factors such as the effective nuclear charge and the shielding factor must be taken into consideration: the interactions inside an atom, described by a set of quantum numbers, are sometimes not taken into account.
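The diagram in question simply orders sublevels by the Madelung rule: increasing n + l, with ties broken by smaller n. A minimal sketch that reproduces the familiar filling order:

```python
# Order atomic sublevels by the Madelung rule pictured in the familiar
# "Linus Pauling diagram": increasing n + l, ties broken by smaller n.
L_LABELS = "spdf"

sublevels = [(n, l) for n in range(1, 8)
             for l in range(min(n, 4)) if n + l <= 8]
sublevels.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

print(" ".join(f"{n}{L_LABELS[l]}" for n, l in sublevels))
# 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p
```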
Abstract:
The field of education is very rich and can be researched from various angles. The area of chemical education has been growing steadily, and an important line of research in this area concerns students' learning difficulties. The themes of atomic structure and chemical bonding are developed in high school and carry many problems that are often brought into higher education, becoming an obstacle to the advancement of learning. These initial themes, atomic structure and chemical bonding, need to be well understood by the student so that the other contents of chemistry can be understood more easily. This paper aims to describe and analyze the errors and difficulties presented in the assessments of the discipline Atomic and Molecular Architecture by the students of the distance-learning (EAD) degree course in Chemistry, with respect to the contents of "Atomic Structure and Chemical Bonding", based on the assessments taken by the students and on the Virtual Learning Environment (VLE), taking into account the activities, the discussion forum, and access to materials. The VLE provides reports that were used to analyze access and participation, in order to assess their contribution to learning and their relation to the final result (pass/fail). It was observed that the most frequent errors in the assessments are related to the initial part of the chemistry content, namely the understanding of atomic structure and the evolution of atomic models. The students who accessed the extra material and participated in the activities and forums were the ones who succeeded in the course. That is, difficulties did emerge, but by using the available teaching strategies students could minimize them, improving their performance in activities and assessments. It was also observed from the VLE records that the discipline began with a large withdrawal, seen in the page access figures as well as in attendance at the face-to-face assessments, as recorded in the attendance lists of the classroom assessments.
Abstract:
Classifier ensembles are systems composed of a set of individual classifiers and a combination module, which is responsible for providing the final output of the system. In the design of these systems, diversity is considered one of the main aspects to be taken into account, since there is no gain in combining identical classification methods. The ideal situation is a set of individual classifiers with uncorrelated errors; in other words, the individual classifiers should be diverse among themselves. One way of increasing diversity is to provide different datasets (patterns and/or attributes) to the individual classifiers. Diversity is increased because the individual classifiers perform the same task (classification of the same input patterns) but are built using different subsets of patterns and/or attributes, as sketched below. The majority of papers using feature selection for ensembles address homogeneous ensemble structures, i.e., ensembles composed of only one type of classifier. In this investigation, two genetic algorithm approaches (single- and multi-objective) are used to guide the distribution of the features among the classifiers in the context of homogeneous and heterogeneous ensembles. The experiments are divided into two phases that use a filter approach to feature selection guided by a genetic algorithm.
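The core idea, driving diversity by giving each member its own feature subset, fits in a few lines. A toy sketch with majority voting (fixed subsets stand in for the genetic algorithm that selects them in the paper):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Each ensemble member sees a different subset of the 4 features;
# in the paper these subsets would be chosen by a genetic algorithm.
feature_subsets = [[0, 1], [1, 2], [2, 3]]
members = []
for cols in feature_subsets:
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X[:, cols], y)
    members.append((cols, clf))

# Combination module: plain majority vote over member predictions.
votes = np.array([clf.predict(X[:, cols]) for cols, clf in members])
majority = np.apply_along_axis(
    lambda v: np.bincount(v).argmax(), 0, votes)
print("training accuracy of the vote:", (majority == y).mean())
```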
Abstract:
This article analyzes the reports and data obtained from the investigations of serious and fatal work accidents carried out by the Instituto de Criminalística (IC), Piracicaba Regional Office. Seventy-one reports of accidents that occurred in 1998, 1999, and 2000 were analyzed. Accidents involving machines accounted for 38.0%, followed by falls from height (15.5%) and, in third place, those caused by electric current (11.3%). The reports conclude that 80.0% of the accidents were caused by unsafe acts committed by the workers, while lack of safety or unsafe conditions accounted for 15.5% of the cases. Victims are blamed even in high-risk situations in which minimal safety conditions were not adopted, with repercussions favorable to the interests of employers. These conclusions reflect the traditional, reductionist explanatory models, in which accidents are simple, single-cause phenomena, the cause as a rule centered on the errors and failures of the victims themselves. Despite the criticism it has received over the last two decades in technical and academic circles, this conception remains hegemonic, hindering the development of preventive policies and the improvement of working conditions.