53 results for "Acurácia Posicional" (positional accuracy)
Abstract:
The aim of our project was to determine the characteristics of newborns with seizures admitted to intensive care units. We conducted a multicenter, observational, prospective study whose target population was newborns with seizures admitted to intensive care units, involving a multidisciplinary team composed of a pediatric neurologist, neonatologists, pediatric intensivists, nurses, nursing technicians and physiotherapists. Seizures were defined by clinical criteria, using Volpe's classification. Variables related to pregnancy, delivery, newborn characteristics, seizure features and mortality were analyzed. Statistics: descriptive (frequencies, measures of central tendency and dispersion) and analytical (probability tests, risk tests and accuracy tests). We compared clinical seizures between term and preterm newborns and found statistically significant differences in the age of seizure onset, which was later in preterm infants; in the predominant etiology, peri-intraventricular hemorrhage in preterm infants and hypoxic-ischemic encephalopathy in term infants; and in the clinical seizure type, clonic in preterm and subtle in term infants. The accuracy tests used to determine whether the clinical seizure type predicts its etiology did not yield positive results. Regarding the characteristics associated with mortality in preterm infants with seizures, we observed an association of mechanical ventilation and pneumonia with mortality. There are clinical differences between preterm and term newborns with seizures, confirming data from the literature
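As an illustrative aside, the "accuracy tests" mentioned above reduce to the classical measures computed from a 2x2 contingency table (seizure type vs. etiology). A minimal sketch follows; the counts are hypothetical, not the study's data.

```python
# Diagnostic-accuracy measures for a 2x2 table testing whether a
# clinical seizure type predicts a given etiology.
tp, fp, fn, tn = 12, 7, 9, 25  # hypothetical contingency counts

sensitivity = tp / (tp + fn)   # P(type observed | etiology present)
specificity = tn / (tn + fp)   # P(type absent   | etiology absent)
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
```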
Abstract:
The present study aims to analyze, at different levels of demand, which layout strategy is best suited to small metallic shipbuilding. To this end, three simulation models were developed to analyze the production strategies under positional, cellular and linear layouts. To compare the scenarios with a simulation tool, the methodologies of Chwif and Medina (2010) and Law (2009) were adapted into three phases: conception, implementation and analysis. In the conception phase, the real systems were represented by process mapping according to the time, material resources and human resources required at each step of the production process, and all of this information was converted into a cost variable. Data were collected from three different production systems: two located in Natal-RN, with cellular and positional layouts, and one located in Belém-PA, with a linear layout. In the implementation phase, the conceptual models were converted into computational models with the Rockwell Software Arena® 13.5 tool and then validated. In the analysis stage, the production of up to 960 vessels a year was simulated for each layout, showing that for a production of up to 80 units the positional layout is the most advisable, between 81 and 288 units the cellular layout, and from 289 units upward the linear layout
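The decision rule the abstract describes is a cost comparison per demand level. A minimal sketch of that logic, assuming hypothetical fixed and per-unit costs standing in for the Arena simulation outputs (the thresholds they produce roughly match the ones reported):

```python
# Pick the layout with the lowest total cost for a given annual demand.
# Cost figures are hypothetical stand-ins for the simulated costs.
def total_cost(layout: str, units: int) -> float:
    fixed = {"positional": 10_000, "cellular": 45_000, "linear": 140_000}
    unit_cost = {"positional": 1_200, "cellular": 760, "linear": 430}
    return fixed[layout] + unit_cost[layout] * units

def best_layout(units: int) -> str:
    return min(("positional", "cellular", "linear"),
               key=lambda layout: total_cost(layout, units))

for demand in (40, 80, 150, 288, 500, 960):
    print(demand, best_layout(demand))
```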
Abstract:
Maps obtained from remote sensing orbital images submitted to digital processing have become fundamental for optimizing the conservation and monitoring of coral reefs. However, the accuracy achieved in mapping submerged areas is limited by variation in the water column, which degrades the signal received by the orbital sensor and introduces errors into the final classification. The limited capacity of traditional methods based on conventional statistical techniques to resolve inter-class confusion motivated the search for alternative strategies in the field of Computational Intelligence. In this work, an ensemble of classifiers combining Support Vector Machines and a Minimum Distance Classifier was built with the objective of classifying remotely sensed images of a coral reef ecosystem. The system is composed of three stages, through which the classification is progressively refined: patterns that receive an ambiguous classification at one stage are re-evaluated at the subsequent stage, and an unambiguous prediction for all the data is reached by reducing or eliminating false positives. The images were classified into five bottom types: deep water, under-water corals, inter-tidal corals, algae and sandy bottom. The highest overall accuracy (89%) was obtained with an SVM using a polynomial kernel. Through an error matrix, the accuracy of the classified image was compared with the results obtained by other classification methods based on a single classifier (a neural network and the k-means algorithm). Finally, the comparison of results demonstrated the potential of classifier ensembles as a tool for classifying images of submerged areas subject to the noise caused by atmospheric effects and the water column
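A minimal sketch of the staged-ensemble idea described above: an SVM with a polynomial kernel labels confident patterns, and patterns with an ambiguous (low-confidence) prediction are re-evaluated by a minimum-distance (nearest-centroid) classifier. The data are synthetic and the 0.6 threshold is an arbitrary illustration, not the thesis setting.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.neighbors import NearestCentroid

X, y = make_classification(n_samples=400, n_features=6, n_classes=3,
                           n_informative=4, random_state=0)
svm = SVC(kernel="poly", degree=3, probability=True).fit(X, y)
mdc = NearestCentroid().fit(X, y)  # minimum-distance classifier

proba = svm.predict_proba(X)
pred = svm.predict(X)
ambiguous = proba.max(axis=1) < 0.6          # low-confidence patterns
pred[ambiguous] = mdc.predict(X[ambiguous])  # second-stage revaluation
print(f"{ambiguous.sum()} patterns sent to the minimum-distance stage")
```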
Abstract:
Simulations based on cognitively rich agents can become a very intensive computing task, especially when the simulated environment represents a complex system, and the situation becomes worse when time constraints are present. Such simulations would benefit from a mechanism that improves the way agents perceive and react to changes in these environments; in other words, an approach to improve the efficiency (performance and accuracy) of the decision process of autonomous agents in a simulation. In complex environments full of variables, not every piece of information available to the agent is necessarily relevant to its decision-making process, depending on the task being performed. The agent therefore needs to filter incoming perceptions in the same way we do with our focus of attention: only the information that really matters to the agent's running context is perceived (cognitively processed), which can improve the decision-making process. The architecture proposed herein structures cognitive agents into two parts: 1) the main part, containing the reasoning/planning process and the knowledge and affective state of the agent, and 2) a set of behaviors triggered by the planner in order to achieve the agent's goals. Each behavior has a focus of attention that is dynamically adjustable at runtime, according to variations in the agent's affective state. The focus of each behavior is divided into a qualitative focus, responsible for the quality of the perceived data, and a quantitative focus, responsible for the quantity of the perceived data. The behavior can thus filter the information sent by the agent's sensors and build a list of perceived elements containing only the information necessary to the agent, according to the context of the behavior currently running. Alongside this human-inspired attention focus, the agent is also endowed with an affective state, based on theories of human emotion, mood and personality. This model underlies the mechanism that continuously adjusts the agent's attention focus, both qualitative and quantitative, so that the agent can adjust its focus during the execution of a behavior and become more efficient in the face of environmental changes. The proposed architecture can be used very flexibly: the focus of attention can remain fixed (neither the qualitative nor the quantitative focus changes), or different combinations of qualitative and quantitative focus variation can be used. The architecture was built on a platform for BDI agents, but its design allows it to be used with any other type of agent, since the implementation affects only the perception layer of the agent. In order to evaluate the contribution proposed in this work, an extensive series of experiments was conducted on an agent-based simulation of a fire-spreading scenario. In the simulations, agents using the proposed architecture were compared with similar agents (with the same reasoning model) that process all the information sent by the environment. Intuitively, the omniscient agents would be expected to be more efficient, since they can weigh every possible option before making a decision. The experiments showed, however, that attention-focus-based agents can be as efficient as the omniscient ones, with the advantage of solving the same problems in significantly less time. The experiments thus indicate the efficiency of the proposed architecture
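A conceptual sketch of the dual-focus filter described above: a behavior keeps only the percept kinds it cares about (qualitative focus) and caps how many it processes (quantitative focus), with the cap scaled by the agent's affective arousal. All names and values here are illustrative, not the thesis API.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    kind: str        # e.g. "smoke", "heat", "noise"
    salience: float  # how strongly the sensor rates it

def filter_percepts(percepts, qualitative, base_quota, arousal):
    """qualitative: set of relevant kinds; arousal in [0, 1] widens
    the quantitative focus when the agent is more alert."""
    relevant = [p for p in percepts if p.kind in qualitative]
    quota = max(1, int(base_quota * (1 + arousal)))
    # keep only the most salient percepts, up to the current quota
    return sorted(relevant, key=lambda p: -p.salience)[:quota]

sensed = [Percept("smoke", .9), Percept("noise", .4), Percept("heat", .7)]
print(filter_percepts(sensed, {"smoke", "heat"}, base_quota=1, arousal=0.8))
```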
Abstract:
Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advance of computer vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) began to supply mathematical tools for localization systems in robotics and Augmented Reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modeling. Along these lines, this work proposes a pipeline to obtain relative pose, featuring a previously calibrated camera as a positional sensor and based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, so additional information such as probabilistic models of camera state transition becomes unnecessary. Experiments assessing both the 3D reconstruction quality and the camera positions estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform
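A minimal sketch of one relative-pose step in an SFM-style visual odometry pipeline of this kind, using standard OpenCV calls and assuming the camera intrinsics K are known from prior calibration (frame1/frame2 are consecutive grayscale images). This is an illustration of the general technique, not the thesis implementation.

```python
import cv2
import numpy as np

def relative_pose(frame1, frame2, K):
    # detect and match ORB features between the two frames
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(frame1, None)
    k2, d2 = orb.detectAndCompute(frame2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # essential matrix with RANSAC, then cheirality-checked decomposition
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    return R, t  # rotation and unit-norm translation direction
```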
Abstract:
One of the research projects of the Intelligent Systems Laboratory of the Department of Computer Engineering and Automation of the Federal University of Rio Grande do Norte (UFRN), named Robosense, is the construction of a mobile robotic platform: a robot with two differentially driven wheels, two arms with 5 degrees of freedom each, a belt of sonars and a stereo head. As the main goal of the Robosense project, the robot must be able to navigate the entire LECA building while avoiding obstacles. The robot's navigation system, responsible for generating and following routes, will operate in closed loop; that is, sensors will inform the robot of its current pose, including its localization and the configuration of its resources. Encoders (special rotation sensors) were installed on the wheels, as well as on all motors of the two arms and of the stereo head. Limit switches were installed at every joint of the stereo head to make its pre-calibration possible. Sonars and cameras will also be part of the project's sensor suite. The robot will rely on a platform initially composed of two computers connected to a single bus for parallel, real-time operation. One of them will be responsible for arm control and navigation, based on the information received from the wheel sensors and on the robot's next goals. The other computer will process all information related to the robot's stereo head, such as the images received from the cameras. Stereo imaging techniques are necessary because the information in a single image does not uniquely determine the position of a given corresponding point in the world; by using two or more cameras, we can recover the depth information of the scene. The proposed stereo head is simply a physical device that must support two video cameras, move them upon request from the appropriate software, and be able to report its current pose. Factors such as the angular velocity of camera motion, spatial precision and accuracy are decisive for the efficient performance of the algorithms that rely on these values
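The depth-recovery point above has a compact expression: for a rectified stereo pair, depth follows from disparity as Z = f·B/d. A sketch with hypothetical camera parameters:

```python
# Why two cameras: a single image fixes only a ray, not depth.
f = 540.0    # focal length in pixels (hypothetical)
B = 0.12     # baseline between the two cameras, meters (hypothetical)

def depth_from_disparity(d_pixels: float) -> float:
    return f * B / d_pixels

for d in (4.0, 16.0, 64.0):
    print(f"disparity {d:5.1f}px -> depth {depth_from_disparity(d):.2f} m")
```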
Abstract:
Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology, mainly because the function of a protein is intrinsically related to its spatial conformation. However, such conformations are very difficult to obtain experimentally in the laboratory, and the problem has therefore drawn the attention of many researchers in Bioinformatics. Given the great difference between the number of known protein sequences and the number of experimentally determined three-dimensional structures, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbors, Naive Bayes, Support Vector Machines and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these individual classifiers, homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, artificial class-balancing techniques (random undersampling, Tomek links, CNN, NCL and OSS) are used to minimize it. To evaluate the ML methods, a cross-validation procedure is applied in which the accuracy of the classifiers is measured by the mean classification error rate on independent test sets. These means are compared pairwise by hypothesis tests in order to assess whether the differences between them are statistically significant. Among the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class-balancing techniques, on the other hand, did not significantly improve the global classification error; nevertheless, they did improve the classification error for the minority class, and in this respect the NCL technique proved the most appropriate
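A sketch of the evaluation loop described above, assuming scikit-learn implementations of several of the named paradigms; the dataset is a synthetic, deliberately imbalanced placeholder for the protein structural-class data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=600, n_features=20, n_classes=4,
                           n_informative=8, weights=[.5, .3, .15, .05],
                           random_state=1)  # imbalanced, like the real data
models = {
    "decision tree": DecisionTreeClassifier(),
    "k-NN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "Boosting(tree)": AdaBoostClassifier(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name:15s} mean error = {1 - acc:.3f}")
```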
Abstract:
The present work contributes to the study of heat transfer models for foods subjected to experimental tests in the proposed solar oven. The best model for the chicken burger under study was assessed by comparing the results of treating the food as a semi-infinite object (the first model considered) and then as a plane plate in transient regime under two distinct conditions: one neglecting and one considering the contribution of the heat generation term, through the Pomerantsev criterion. The Sun, beyond being the source of life, is the origin of all the forms of energy that man has used throughout history, and may be the answer to the question of energy supply in the future, once we learn to rationally exploit the light this star constantly pours onto our planet. Having been shining for more than 5 billion years, it is estimated that the Sun will still favor us for another 6 billion years; that is, it is only halfway through its existence, and this year alone it will deliver to the Earth 4000 times more energy than we will consume. Faced with this reality, it would be irrational not to pursue, by every technically possible means, the exploitation of this clean, ecological and free energy source. This dissertation evaluates the performance of a box-type solar cooker. A model of a box-type solar stove was built by the group at the Solar Energy Laboratory (LES) of the Federal University of Rio Grande do Norte (UFRN), and its technical viability was tested by modeling the foods baked in the solar oven. The cooker's main characteristics are ease of manufacture and assembly, low cost (its composition uses materials accessible to low-income communities) and the simplicity of the prototype's mechanism for tracking direct sunlight. Models were proposed to calculate the minimum baking time of foods, considering the following transient heat transfer models: the semi-infinite object, the plane plate, and the sphere model to study the temperature required for baking bread (assuming spherical geometry). After evaluating the heat transfer models for the foods subjected to baking, the times obtained from the models were compared with the experimental baking times in the solar oven, identifying the model that best portrays the results
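As a worked illustration of the first model mentioned (a semi-infinite solid subjected to a sudden surface temperature), the temperature field follows the classical erf solution T(x,t) = T_s + (T_i − T_s)·erf(x / (2√(αt))). The property values below are hypothetical, not the study's measurements.

```python
from math import erf, sqrt

alpha = 1.4e-7          # thermal diffusivity of the food, m^2/s (hypothetical)
T_i, T_s = 25.0, 120.0  # initial and imposed surface temperature, deg C

def T(x: float, t: float) -> float:
    """Temperature at depth x (m) after time t (s)."""
    return T_s + (T_i - T_s) * erf(x / (2.0 * sqrt(alpha * t)))

# e.g. temperature 5 mm deep after 20 minutes of baking
print(f"{T(0.005, 20 * 60):.1f} C")
```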
Abstract:
The composition of petroleum may change from well to well, and the resulting characteristics significantly influence the refined products. It is therefore important to characterize the oil in order to know its properties and route it adequately for processing. Since petroleum is a multicomponent mixture, the use of synthetic mixtures that are representative of oil fractions provides a better understanding of the real mixture's behavior. Characterization is usually achieved through correlations of easily measured physico-chemical properties, such as density, specific gravity, viscosity and refractive index. In this work, new measurements of density, specific gravity, viscosity and refractive index were obtained for the following binary mixtures: n-heptane + hexadecane, cyclohexane + hexadecane, and benzene + hexadecane. These measurements were carried out at low pressure and at temperatures in the range 288.15 K to 310.95 K, and the data were applied in the development of a new oil characterization method. Furthermore, a series of density measurements at high pressure and temperature was performed for the binary mixture cyclohexane + n-hexadecane, over pressures from 6.895 to 62.053 MPa and temperatures from 318.15 to 413.15 K. Based on these experimental data for compressed liquid mixtures, a thermodynamic model was proposed using the Peng-Robinson equation of state (EOS), modified with a volume scaling that employs a relatively small number of parameters. The results were satisfactory, demonstrating accuracy not only for the density data but also for the isobaric thermal expansion and isothermal compressibility coefficients. This thesis aims to contribute scientifically to the technological problem of refining heavy oil fractions. The problem was treated in two steps: characterization, and the search for processes that can produce streams of economic interest, such as solvent extraction at high pressure and temperature. In order to determine phase equilibrium data under these conditions, conceptual designs of two new experimental apparatuses were developed, consisting of variable-volume cells together with a static analytical device. This thesis therefore contributes both to the characterization of hydrocarbon mixtures and to the development of equilibrium cells operating at high pressure and temperature, focused on the technological problem of refining heavy oil fractions
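A minimal sketch of the modeling idea: liquid density from the Peng-Robinson EOS with a simple volume translation c (one way to realize the "volume scaling" the abstract refers to), shown here for pure cyclohexane. The translation constant is an illustrative value, not the thesis parameter.

```python
import numpy as np

R = 8.314  # J/(mol K)
Tc, Pc, omega, M = 553.8, 40.75e5, 0.212, 84.16e-3  # cyclohexane

def pr_liquid_density(T, P, c=5.0e-6):  # c: volume shift, m^3/mol (hypothetical)
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    # cubic in the compressibility factor Z
    roots = np.roots([1, -(1 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)])
    Z = min(r.real for r in roots if abs(r.imag) < 1e-9)  # liquid root
    v = Z * R * T / P - c   # volume-translated molar volume
    return M / v            # density in kg/m^3

print(f"{pr_liquid_density(318.15, 6.895e6):.1f} kg/m^3")
```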
Abstract:
This work describes the construction and validation of a notebook of activities whose content is a didactic sequence based on the study of ancient numbering systems, compared with our decimal positional (Arabic) numbering system. It rests on the assumption that comparison with a system different from our own can provide a better understanding of our own numbering system, and can also help in the process of learning the arithmetic operations of addition, subtraction and multiplication, since it forces us to think in ways that are not routinely the object of our attention. The systems covered in the study were the Egyptian hieroglyphic numbering system, the Greek alphabetic numbering system and the Roman numbering system, always compared with our own numbering system. The teaching sequence is presented structured as activities, namely exercise sets and common tasks built around one ancient numbering system at a time. In its final stage of preparation, the sequence counted on the participation of 26 primary school teachers of basic education
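As an illustrative aside (not part of the notebook), the Roman system is additive/subtractive rather than positional, which is exactly the contrast the activities explore; a ten-line converter makes the rule explicit.

```python
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    total = 0
    for i, ch in enumerate(numeral):
        v = VALUES[ch]
        # a smaller symbol before a larger one subtracts (IV = 4)
        if i + 1 < len(numeral) and v < VALUES[numeral[i + 1]]:
            total -= v
        else:
            total += v
    return total

print(roman_to_int("MCMXCIV"))  # 1994
```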
Abstract:
Massively Multiplayer Online Role-Playing Games (MMORPGs) are role-playing games that, through the Internet, can bring together thousands of players interacting at the same time in at least one virtual world. Beyond fun, these games can provide greater familiarity with an additional language and an opportunity to improve linguistic proficiency in a real context. Hence, this study proposes to extend knowledge about additional-language learning mediated by MMORPGs, so that teachers may know how, if relevant, to present, use or encourage this practice with their students. Based on this major purpose, we seek to answer the following research questions: (a) what distinguishes the learning profiles of gamers and non-gamers; (b) whether MMORPGs can, through a hybrid and systematic approach, assist the development of additional-language proficiency; and (c) what the think-aloud protocols show about learning mediated by the MMORPG Allods Online. Following an experimental method (NUNAN, 1997), 16 students of the curricular component Reading and Writing Practices in English Language comprised the control group, and 17 students of the same class formed the experimental group and were submitted to a pre- and post-test adapted from the Key English Test (KET) of Cambridge University (2008). The tests were conducted before and after a period of 5 weeks of 3 hours of practice with Allods Online per week (experimental group), plus the classes of the curricular component (both groups). A quantitative analysis of the questionnaires on the participants' profiles of exposure to English, a quantitative analysis of the test scores, and a qualitative analysis of the think-aloud protocols collected during the experiment were conducted based on theories of (a) motivation (GARDNER, 1985; WILLIAMS & BURDEN, 1997; BROWN, 2007; HERCULANO-HOUZEL, 2005); (b) active learning (GASS, 1997; GEE, 2008; MATTAR, 2010); (c) interaction and collaborative learning (KRASHEN, 1991; GASS, 1997; VYGOTSKY, 1978); (d) situated learning (DAMASIO, 1994; 1999; 2003; BROWN, 2007; GEE, 2003); and (e) tangential learning (PORTNOW, 2008; MATTAR, 2010). The results indicate that the participants of the experimental group (gamers) seem to be more engaged in tangential English-learning activities, such as playing games, listening to music in English, communicating with foreigners and reading in English. We also infer that the experiment period likely produced positive results in the gamers' proficiency scores, mainly in the parts related to orthographic development, reading and comprehension, and writing with a focus on content and orthographic accuracy. Lastly, the think-aloud protocols presented evidence that the gamers engaged in active English-language learning, interacted in English with other players, and learned linguistic aspects through the experience with the MMORPG Allods Online
Abstract:
VoiceThread (VT) is a collaborative, asynchronous web 2.0 tool that permits the creation of oral presentations with the help of images, documents, texts and voice, allowing groups of people to browse and contribute comments through several options: voice (microphone or cell phone), text, audio file or video (webcam) (BOTTENTUIT JUNIOR, LISBÔA & COUTINHO, 2009). The hybrid experience with VoiceThread allows learners to plan their speech before recording it, without the pressure often present in the classroom. Furthermore, presentations can be recorded several times, enabling students to listen to them, notice the gaps in their oral production (noticing) and edit them numerous times before publishing them online. In this perspective, oral production is seen as a process of L2 acquisition, not only as practice of already existing knowledge, because it can stimulate the learner to process the language syntactically (SWAIN, 1985; 1995). In this context, this study aims to verify whether there is a relation between the learners' oral production, more specifically grammatical accuracy and the global oral grade, and their noticing capacity, and how systematic practice with VoiceThread, in a hybrid approach, can impact the learners' global oral development; their oral production in terms of fluency (number of words per minute), accuracy (number of errors per hundred words) and complexity (number of dependent clauses per minute); and their noticing capacity (SCHMIDT, 1990; 1995; 2001), that is, the learners' capacity to notice the gaps in their own oral production. In order to answer these research questions, 49 L2 learners of English were divided into an experimental group (25 students) and a control group (24 students). The experimental group was exposed to the hybrid approach with VT for two months and, through a pre- and post-test, we verified whether this systematic practice would positively influence these participants' oral production and noticing capacity. The results were compared to the pre- and post-test scores of the control group, which was not exposed to VT. Finally, the learners' impressions of the tool were collected through a questionnaire applied after the post-test. The results indicate a statistically significant correlation between the learners' speech production (accuracy and global oral grade) and their noticing capacity. Moreover, a positive impact of VoiceThread on the learners' speech production variables and on their noticing capacity was verified, and the learners reacted positively to the hybrid experience with this web tool
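The three oral-production measures are simple rates, so a short sketch makes them concrete; the sample counts below are invented for illustration.

```python
def fluency(words: int, minutes: float) -> float:
    return words / minutes                    # words per minute

def accuracy(errors: int, words: int) -> float:
    return errors / words * 100               # errors per 100 words

def complexity(dependent_clauses: int, minutes: float) -> float:
    return dependent_clauses / minutes        # dependent clauses per minute

w, e, c, t = 312, 19, 11, 2.5                 # hypothetical recording
print(fluency(w, t), accuracy(e, w), complexity(c, t))
```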
Abstract:
Gait speed has been described as a predictor of important adverse outcomes in older populations. Among the criteria used to evaluate frailty, gait speed has been identified as the most reliable predictor, as well as practical and low-cost. Objective: to assess the discriminating capability of gait speed in determining the presence of frailty in community-dwelling elderly people in northeast Brazil. Method: we performed an observational, analytic, cross-sectional study with a sample of 391 community-dwelling elders aged 65 years or older, of both sexes, in the city of Santa Cruz-RN. Participants were interviewed using a multidimensional questionnaire covering sociodemographic, physical-health and mental-health information. Unintentional weight loss, muscle weakness, self-reported exhaustion, slow gait and low physical activity were the criteria used to evaluate the frailty syndrome. Gait speed was measured as the time taken to walk the middle 4.6 meters of an 8.6-meter course (excluding 2 meters for the warm-up phase and 2 meters for the deceleration phase). We calculated the sensitivity and specificity of the gait speed test at different cutoff points for the walking time, from which a ROC curve was constructed as a measure of the test's predictive value in identifying frail elders. The prevalence of frailty in Santa Cruz-RN was 17.1%. The gait speed test accuracy was 71% for speeds below 0.91 m/s. Among women, the test accuracy was 80% (gait speed below 0.77 m/s), and among men it was 86% (gait speed below 0.82 m/s) (p<0.0001). Conclusion: our findings have clinical relevance when we consider that the presence of frailty can be detected in elderly men and women by the gait speed test, a simple, cheap and efficient exam
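A sketch of the cutoff analysis described above: build a ROC curve for gait speed as a frailty discriminator and pick the cutoff maximizing the Youden index. The data are simulated stand-ins, not the 391 participants' measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
frail = rng.integers(0, 2, 391)                     # 1 = frail (simulated)
speed = np.where(frail, rng.normal(0.7, .15, 391),  # slower if frail
                        rng.normal(1.1, .20, 391))

# the score must rise with frailty, so use negative speed
fpr, tpr, thr = roc_curve(frail, -speed)
best = np.argmax(tpr - fpr)                         # Youden's J statistic
print(f"AUC={roc_auc_score(frail, -speed):.2f} "
      f"cutoff={-thr[best]:.2f} m/s")
```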
Abstract:
The Wii Balance Board (WBB) has been investigated as a low-cost alternative for assessing static balance in the upright posture. However, previous studies employed methodological procedures that did not eliminate the variability of results between the tests and equipment used. Objective: to determine the validity and reproducibility of the WBB as an instrument for assessing static balance in the upright position, using simultaneous data acquisition with superimposed equipment. Methods: this is an accuracy study of 29 healthy young individuals of both sexes, aged 18 to 30 years. Subjects were assessed 24 h apart (test-retest) using single-leg and double-leg stance tests, with eyes closed and open. To that end, the WBB was placed on top of a force platform (FP) and the data (postural sway) were collected simultaneously on both devices. Validity and reproducibility were analyzed using the intraclass correlation coefficient (ICC), and Bland-Altman analysis was applied to assess agreement. Results: the sample was composed of 23 women and 6 men, with mean age of 24.2±6.3 years, weight of 60.7±6.3 kg and height of 1.64±4.2 m. The validity of the WBB compared to the FP was excellent for all 4 proposed tasks (ICC = 0.93-0.98). The reproducibility analyzed by test-retest was excellent for the double-leg stance tasks (ICC = 0.93-0.98) and only moderate for the single-leg stance tests (ICC = 0.46-0.70). Graphic analysis exhibited good agreement between the devices, since most of the measures fell within the limits of agreement. Conclusion: this study demonstrated the validity and reproducibility of the Wii Balance Board as an instrument for assessing static balance in the upright posture, using simultaneous analysis with superimposed equipment. The WBB can thus be used by physical therapists and other health professionals in clinical practice, as both a rehabilitation and an assessment tool
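A sketch of the agreement analysis: Bland-Altman bias and 95% limits of agreement between sway measures taken simultaneously on the WBB and the force platform. The paired arrays are illustrative values, not the study's data.

```python
import numpy as np

wbb = np.array([12.1, 9.8, 15.3, 11.0, 13.4, 10.2])  # sway, mm (hypothetical)
fp  = np.array([11.7, 10.1, 14.8, 11.4, 12.9, 10.0])

diff = wbb - fp
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement
print(f"bias={bias:.2f} mm, limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}] mm")
```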
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior