52 results for Pentium


Relevance: 10.00%

Abstract:

This project was carried out in the Department of Applied Mathematics and Computation, based at the Faculty of Sciences of the Universidad de Valladolid, with three lecturers from that department involved. The project's objectives are the design and development of interactive, computer-based practical exercises for the core descriptor 'Ordinary differential equations' of the Licenciatura in Mathematics, suitable for publication on the web. The development took place within the actual practical sessions of degree courses, specifically the course Modelos Matemáticos I. Since the work has been used under real conditions in the computer practicals, it can be stated that: 1. they favour student learning; 2. they reduce academic failure; 3. they enhance the effectiveness of the practical sessions. The materials produced are: a student user guide (27 pages) and 3 interactive MATLAB practicals, with a copy attached on CD. A Pentium III personal computer (933 MHz, 128 MB RAM) and the MATLAB 5.1 software were used. Unpublished.
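The practicals themselves are not included in the abstract; as an illustrative sketch of the kind of numerical ODE computation such exercises typically introduce (the equation y' = -2y, the step size and all names below are assumptions, not material from the project), Euler's method in C looks like this:

#include <stdio.h>

/* Right-hand side of the illustrative ODE y' = f(t, y) = -2y. */
static double f(double t, double y) {
    (void)t;                   /* the example equation is autonomous */
    return -2.0 * y;
}

int main(void) {
    double t = 0.0, y = 1.0;   /* initial condition y(0) = 1 */
    const double h = 0.1;      /* step size */

    /* Euler's method: y_{n+1} = y_n + h * f(t_n, y_n) */
    for (int i = 0; i < 10; i++) {
        y += h * f(t, y);
        t += h;
        printf("t = %.1f  y = %.6f\n", t, y);
    }
    return 0;
}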

Relevance: 10.00%

Abstract:

Evaluates the effects of training in memory, copying and reading processes, in the context of computer-assisted instruction, in order to determine which one contributes most to improving spelling in children with learning disabilities (LD) in a transparent language. 85 subjects: 28 girls and 57 boys from the third and fourth years of the second cycle of primary education. Subtests consisting of the dictation of two texts were administered. The scoring criteria for writing consisted of determining the total number of errors: substitutions, rotations, omissions, additions, inversions, joinings, fragmentations, accentuation errors, punctuation errors and consonant changes. The students were then assigned to 4 groups: a group trained in copying processes, a group trained in memory processes, a group trained in reading processes, and a control group receiving no training. Writing and reading tests, phonological awareness tasks, phonological skills tasks and orthographic discrimination tasks were used, together with memory and intelligence tests. The variables were: 1) a between-subject independent variable, referring to the experimental conditions: memory, copying, reading and control; 2) a within-subject independent variable, referring to the psycholinguistic parameters: word length, orthographic consistency and syllabic structure; 3) other variables: a pretest-posttest dependent variable and a within-treatment dependent variable. Pentium 150 MHz computers and a program called TEDIS2 were used. The students trained in copying and memory improved their spelling ability independently of the psycholinguistic parameters, but copy training produced better results in children with LD. In short, it is shown that in languages with alphabetic systems, regardless of code depth, the copying process combined with computer-assisted instruction in its drill-and-practice model increases performance in spelling skills.

Relevance: 10.00%

Abstract:

To answer how language is processed, how all the elements involved in comprehension work, and in what order linguistic processing takes place. Students of ESO (Spanish compulsory secondary education) with no hearing impairment. The experimental group comprises 31 boys and 12 girls with difficulties in Spanish Language; some of them also have learning problems in Mathematics and English. Two tests are carried out. The first deals with oral comprehension. Each student receives a booklet and has 25 minutes; personal data are the last thing to be written. If they cannot hear well, they note it in the booklet, thereby controlling for lack of comprehension due to sound deficiencies. A recording is played three times. During the recordings, the acoustic differences between students in the first row and the last are monitored. The students answer the questions. Those who have problems with the definitions are asked to fill in the last page, to check whether they know the meaning rather than to test their capacity for expression. They receive the second booklet when the whole group has finished, with unlimited time; if they do not know a word, its meaning is explained to them. Finally, they take an immediate auditory memory test, with the aim of controlling the 'memory' variable and studying its effect on the test. The second test consists of building a language model from the same text presented to the students, and also of finding out what happens when incomplete sentences are introduced for the students to fill in. The only information available to the computer is the speech signal, from which it builds the language model. Materials: a portable mono tape recorder, cassette tape, an oral comprehension answer booklet, a comprehension strategies answer booklet, a comprehension procedure strategies answer booklet, an answer sheet for the memory test, and the SPSS and Excel programs for data analysis. For the second test the materials are: the Panasonic portable mono tape recorder, a cassette tape, the ViaVoice 98 recognizer, a Pentium III, a sound card, the C.M.U. Statistical Language Modeling Toolkit, and the programs text2wfreq, text2idngram, idngram2lm and evallm. For the first test a multivariate experimental design was drawn up; the variables were memory, auditory comprehension and the strategies used to comprehend. The confounding variables (experimenter, material, acoustic conditions, school, socioeconomic level and age) were controlled by matching; organismic variables and sex were controlled randomly. Auditory memory had to be controlled through an analysis of covariance. In the second test the variable was oral linguistic comprehension, in order to establish a comparison afterwards. The results of the first test reveal that the correlations obtained between the analysed variables are independent and show differences between the experimental and control groups. Higher scores are found in subjects without difficulties in memory and comprehension; there are no differences between the two groups in comprehension strategies. The results obtained in the evaluation phase of the second test indicate that no answer was chosen correctly, so no comparison can be made.
It appears that the sample uses the same model to comprehend: everyone uses the same strategies, and the differences are quantitative and due to organismic variables, among them memory. Lack of vocabulary is the first difficulty in the group with difficulties; lack of memory prevents them from correcting mispronounced words, retrieving prior knowledge and relating ideas in their long-term memory. They are also unable to find the main idea. Comprehension is so slow that they cannot process. It is shown that computer programs imitate humans at elementary levels; in Speech Technology, semantic models are used as a priority.
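The C.M.U. toolkit named above (text2wfreq, text2idngram, idngram2lm, evallm) builds an n-gram language model from word and n-gram counts. As a generic illustration (the abstract does not state the model order or smoothing used), the maximum-likelihood estimate at the core of a bigram model is

P(w_i \mid w_{i-1}) = \frac{C(w_{i-1}\, w_i)}{C(w_{i-1})},

where C(\cdot) counts occurrences in the training text; practical toolkits back off to lower-order estimates when a bigram is unseen.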

Relevance: 10.00%

Abstract:

The current version of this database on CD-ROM contains information on 14 127 cocoa (Theobroma cacao) clones and their 14 112 synonyms, the origin and history of the clones and the clone names, and accession lists for 48 of the major cocoa gene banks, including quarantine stations. Also included are morphological data for leaves, fruits and seeds, disease reactions, quality and agronomic characters, and reference information on common abbreviations and acronyms, cocoa gene bank addresses and a full bibliography (with hyperlinked references to the data). New additions are 748 photographs and drawings of 428 individual clones in 11 different locations. Also included are 376 profiles for 15 simple sequence repeat primer pairs on 331 clones held in the University of Reading Intermediate Cocoa Quarantine Facility. Minimum system requirements are Windows 95 or later, a Pentium 166 with 32 MB RAM, a CD-ROM drive and a minimum of 20 MB of hard disk space. A user guide is included in the package.

Relevance: 10.00%

Abstract:

Localization and Mapping are two of the most important capabilities for autonomous mobile robots and have received considerable attention from the scientific computing community over the last 10 years. One of the most efficient methods to address these problems is based on the use of the Extended Kalman Filter (EKF). The EKF simultaneously estimates a model of the environment (a map) and the position of the robot based on odometric and exteroceptive sensor information. As this algorithm demands a considerable amount of computation, it is usually executed on high-end PCs coupled to the robot. In this work we present an FPGA-based architecture for the EKF algorithm that is capable of processing two-dimensional maps containing up to 1.8k features in real time (14 Hz), a three-fold improvement over a Pentium M 1.6 GHz and a 13-fold improvement over an ARM920T 200 MHz. The proposed architecture also consumes only 1.3% of the Pentium's and 12.3% of the ARM's energy per feature.
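For reference, the EKF alternates an odometry-driven prediction with a measurement-driven correction; in the standard textbook form (generic notation, not the authors' specific discretization):

\begin{align*}
\hat{x}_{k|k-1} &= f(\hat{x}_{k-1|k-1},\, u_k), &
P_{k|k-1} &= F_k P_{k-1|k-1} F_k^{\top} + Q_k, \\
K_k &= P_{k|k-1} H_k^{\top}\bigl(H_k P_{k|k-1} H_k^{\top} + R_k\bigr)^{-1}, \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\bigl(z_k - h(\hat{x}_{k|k-1})\bigr), &
P_{k|k} &= (I - K_k H_k)\, P_{k|k-1}.
\end{align*}

The covariance update is quadratic in the number of map features, which is why the filter becomes the computational bottleneck as maps grow and why a hardware implementation pays off.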

Relevance: 10.00%

Abstract:

The movement of graphics and audio programming towards three dimensions is intended to better simulate the way we experience our world. In this project I looked at methods for coming closer to such simulation via realistic graphics and sound combined with a natural interface. I did most of my work on a Dell OptiPlex with an 800 MHz Pentium III processor and an NVIDIA GeForce 256 AGP Plus graphics accelerator (high-end products in the consumer market as of April 2000). For graphics, I used OpenGL [1], an open-source, multi-platform set of graphics libraries that is relatively easy to use, coded in C. The basic engine I first put together was a system to place objects in a scene and to navigate around the scene in real time. Once I accomplished this, I was able to investigate specific techniques for making parts of a scene more appealing.
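A minimal sketch of such an engine's frame loop, in C with the fixed-function OpenGL/GLUT API of that era (camera variables, object placement and key bindings are illustrative assumptions, not the project's actual code):

#include <GL/glut.h>

/* Illustrative camera state, updated by the keyboard handler. */
static float cam_x = 0.0f, cam_z = 5.0f, cam_yaw = 0.0f;

static void display(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    /* Navigation: apply the inverse camera transform to the world. */
    glRotatef(-cam_yaw, 0.0f, 1.0f, 0.0f);
    glTranslatef(-cam_x, 0.0f, -cam_z);

    /* Place an object in the scene. */
    glPushMatrix();
    glTranslatef(0.0f, 0.0f, -2.0f);
    glutSolidTeapot(1.0);
    glPopMatrix();

    glutSwapBuffers();   /* double buffering keeps navigation smooth */
}

static void keyboard(unsigned char key, int x, int y) {
    (void)x; (void)y;
    if (key == 'a') cam_yaw -= 5.0f;
    if (key == 'd') cam_yaw += 5.0f;
    if (key == 'w') cam_z  -= 0.2f;
    if (key == 's') cam_z  += 0.2f;
    glutPostRedisplay();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("scene");
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    gluPerspective(60.0, 1.0, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    glutMainLoop();
    return 0;
}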

Relevance: 10.00%

Abstract:

Audio coding is used to compress digital audio signals, thereby reducing the number of bits needed to transmit or store an audio signal. This is useful when network bandwidth or storage capacity is very limited. Audio compression algorithms are based on an encoding and decoding process. In the encoding step, the uncompressed audio signal is transformed into a coded representation, thereby compressing the audio signal. The coded audio signal eventually needs to be restored (e.g. for playback) through decoding: the decoder receives the bitstream and converts it back into an uncompressed signal. ISO-MPEG is a standard for high-quality, low bit-rate video and audio coding. The audio part of the standard is composed of algorithms for high-quality, low bit-rate audio coding, i.e. algorithms that reduce the original bit-rate while guaranteeing high quality of the audio signal. The audio coding algorithms comprise MPEG-1 (with three different layers), MPEG-2, MPEG-2 AAC, and MPEG-4. This work presents a study of the MPEG-4 AAC audio coding algorithm. In addition, it presents implementations of the AAC algorithm on different platforms and comparisons among the implementations. The implementations are in C, in Intel Pentium assembly, in C on a DSP processor, and in HDL. Since each implementation has its own application niche, each one is valid as a final solution. A further purpose of this work is the comparison among these implementations, considering estimated costs, execution time, and the advantages and disadvantages of each one.
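At the heart of AAC's transform stage is the modified discrete cosine transform (MDCT), applied to windowed, 50%-overlapping blocks. In its standard form (the general definition, not a detail specific to this work), a block of 2N windowed samples x_n yields N coefficients:

X_k = \sum_{n=0}^{2N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \frac{1}{2} + \frac{N}{2}\right)\left(k + \frac{1}{2}\right)\right], \qquad k = 0, \dots, N-1.

The heavy use of such transform and filter-bank arithmetic is what makes the choice of platform (C, assembly, DSP, or HDL) so consequential for cost and execution time.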

Relevance: 10.00%

Abstract:

This work explores the application of fault injection techniques that simulate transient hardware faults, in order to validate the error detection and recovery mechanism and to measure database downtime after the occurrence of a fault that has caused a crash. In addition, it evaluates and validates the FIDe fault injection tool used in the experiments, through a significant set of fault injection tests in the DBMS environment. The experimental platform consists of an Intel Pentium 550 MHz computer with 128 MB RAM, running the Linux Conectiva operating system, kernel version 2.2.13. The target of the fault injections is the centralized DBMS InterBase version 4.0. The workload applications were written as SQL scripts and executed within a session called isql. Three distinct techniques were used for fault injection: 1) the operating system's kill command; 2) a general reset of the machine; 3) the FIDe fault injection tool, developed by the fault injection group of the PPGC at UFRGS. First, the basic concepts of the subject, which are used throughout the work and are necessary to understand this study, are introduced and reinforced. Next, the Xception fault injection tool is presented, and some experiments that use fault injection tools on databases are also analysed. After the literature review, the FIDe fault injection tool is presented, together with the adopted fault model, the approach, the hardware and software platform, the methodology and techniques used, the way the experiments were conducted, and the results obtained with each technique. In total, 3625 fault injection tests were performed: 350 runs with the first technique, 75 runs with the second, and 3200 runs with the third, in 80 different tests. The fault model proposed for this work refers to crash faults based on memory and register corruption, CPU halt, transaction abort or general reset. The experiments were divided among the three techniques, aiming for the widest possible error coverage, and show quite different results. The experiments with the kill command barely affected the database environment; a small number of fault injections with FIDe significantly affected the dependability of the DBMS; and the experiments with the general reset technique were the ones that most compromised the dependability of the DBMS.
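As a sketch of the first technique only (FIDe itself is not described in enough detail to reproduce; the PID-based interface below is an assumption), a crash fault can be injected by sending SIGKILL to the DBMS server process, since SIGKILL cannot be caught or handled:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Inject a crash fault: kill the server as if it had crashed,
   then observe error detection and recovery on restart. */
int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <dbms-server-pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);
    if (kill(pid, SIGKILL) != 0) {
        perror("kill");
        return 1;
    }
    printf("SIGKILL sent to pid %d\n", (int)pid);
    return 0;
}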

Relevance: 10.00%

Abstract:

The material presents the general structure of an input/output subsystem, its handling principles and its complexity. It covers topics such as: I/O hardware; the typical PC bus structure; communication between the CPU and controllers; the addresses of some PC I/O ports; programmed I/O (polling); interrupt-driven I/O; and the Intel Pentium event vector. The material also deals with direct memory access and the DMA transfer operation; network devices; the operations of the I/O subsystem (scheduling, buffering, caching, spooling, device reservation); error handling and the operations that may be subject to failure; and, finally, the handling of I/O requests and the life cycle of an I/O request.
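To make the programmed I/O (polling) idea concrete, here is a minimal sketch in C (the port addresses, bit mask and register names are illustrative assumptions in the style of classic PC port I/O, not values from the material):

#include <stdint.h>

/* Hypothetical device registers (illustrative addresses only). */
#define DEV_DATA_PORT   0x3F8          /* data register */
#define DEV_STATUS_PORT (0x3F8 + 5)    /* status register */
#define STATUS_READY    0x20           /* "device ready" bit */

/* Provided by the platform, e.g. <sys/io.h> on Linux/x86. */
extern uint8_t inb(uint16_t port);
extern void outb(uint8_t value, uint16_t port);

/* Programmed I/O: the CPU busy-waits on the status register until
   the device reports ready, then writes one byte of data. */
void poll_write_byte(uint8_t byte) {
    while ((inb(DEV_STATUS_PORT) & STATUS_READY) == 0) {
        /* spinning here is exactly the CPU time that
           interrupt-driven I/O would give back */
    }
    outb(byte, DEV_DATA_PORT);
}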

Relevance: 10.00%

Abstract:

This work presents a proposal for a voltage and frequency control system for a wind-powered induction generator. An experimental structure was developed, composed basically of a three-phase induction machine, a three-phase capacitor bank and a static reactive power compensator controlled by hysteresis. Control algorithms were developed using conventional methods (PI control) and linguistic methods (using concepts of fuzzy logic and fuzzy control), in order to compare their performance in the variable-speed generator system. The control loop was implemented using an AD/DA PCL-818 board in a 200 MHz Pentium computer. The mathematical model of the induction generator was studied through the Park transformation. Simulations were carried out in the PSpice software to verify the system characteristics in transient and steady-state situations. The real-time control program was developed in C, making it possible to verify the algorithm performance on the 2.2 kW didactic experimental system.
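A sketch of the discrete PI law that such a real-time C control loop typically implements (gains, limits and sample period below are illustrative assumptions, not the values used in the work):

/* Discrete PI controller: u = Kp*e + Ki * integral of e dt,
   with simple anti-windup clamping on the output. */
typedef struct {
    double kp, ki;        /* proportional and integral gains */
    double dt;            /* sample period in seconds */
    double integ;         /* accumulated integral of the error */
    double out_min, out_max;
} pi_controller;

double pi_step(pi_controller *c, double setpoint, double measured) {
    double e = setpoint - measured;
    c->integ += e * c->dt;

    double u = c->kp * e + c->ki * c->integ;

    /* Anti-windup: clamp the output and undo the integration step
       that pushed it past the actuator limits. */
    if (u > c->out_max) { u = c->out_max; c->integ -= e * c->dt; }
    if (u < c->out_min) { u = c->out_min; c->integ -= e * c->dt; }
    return u;
}

Called once per sample period with the measured voltage (or frequency) and its reference, the returned u drives the reactive compensator; the fuzzy variant replaces this law with rule-based inference over the same error signal.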

Relevance: 10.00%

Abstract:

The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures, the Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.
Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 MB of RAM and 100 MB of hard disk space
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: The investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans, and a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Of the pathways by which radionuclides enter the human (or animal) body, ingestion is the most important because it is closely related to life-long alimentary (or dietary) habits. Those radionuclides which are able to enter living cells by either metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and radiological protection. The time behavior of trace concentration in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The general multiple-compartment model (GMCM) is the most powerful and widely accepted method for biokinetic studies, allowing the calculation of the concentration of trace elements in organs as a function of time when the flow parameters of the model are known. However, few biokinetics data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restrictions on the complexity of the problem: This version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a trace is shorter than the volume rise time. Another restriction is related to the central-flux model: the model considered in the code assumes that there is one central compartment (e.g., blood) that connects the flow with all other compartments, and flow between the other compartments is not included.
Typical running time: Depends on the choice of calculations. Using the Derivative Method the time is very short (a few minutes) for any number of compartments. When the Gauss-Marquardt iterative method is used, the calculation time can be approximately 5-6 hours when around 15 compartments are considered. (C) 2006 Elsevier B.V. All rights reserved.
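The linear system underlying the GMCM, in the central-compartment form the code assumes (generic notation, with k_{ij} the fractional transfer rate from compartment i to j; the symbols are not copied from the program), is:

\frac{dC_i}{dt} = k_{0i}\, C_0 - k_{i0}\, C_i, \qquad i = 1, \dots, n,
\qquad
\frac{dC_0}{dt} = \sum_{i=1}^{n} \bigl( k_{i0}\, C_i - k_{0i}\, C_0 \bigr),

where compartment 0 is the central one (e.g., blood); fitting the rates k_{ij} to the measured concentration curves C_i(t) is the inverse problem the code solves.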

Relevance: 10.00%

Abstract:

Graduate program in Mechanical Engineering - FEG

Relevance: 10.00%

Abstract:

Graduate program in Materials Science and Technology - FC

Relevance: 10.00%

Abstract:

This paper describes a method for DRR generation, as well as for the projection of volume gradients, using hardware-accelerated 2D texture mapping and accumulation buffering, and demonstrates its application in 2D-3D registration of X-ray fluoroscopy to CT images. The robustness of the present registration scheme is ensured by taking advantage of coarse-to-fine processing of the volume/image pyramids based on cubic B-splines. A human cadaveric spine specimen, together with its ground truth, was used to compare the present scheme with a purely software-based scheme in three respects: accuracy, speed, and capture range. Our experiments revealed equivalent accuracy and capture ranges but a much shorter registration time with the present scheme. More specifically, the results showed a 0.8 mm average target registration error, a 55 second average execution time per registration, and 10 mm and 10° capture ranges for the present scheme when tested on a 3.0 GHz Pentium 4 computer.
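A sketch of the accumulation-buffer idea the method builds on, in fixed-function OpenGL C (texture setup, slice geometry and weights are illustrative assumptions, not the authors' implementation; the context must be created with an accumulation buffer):

#include <GL/glut.h>

/* DRR via 2D texture mapping + accumulation buffering: render each
   CT slice as a textured quad and average the renderings, which
   approximates the ray-summed attenuation image of an X-ray. */
void render_drr(const GLuint *slice_tex, int n_slices) {
    glClear(GL_ACCUM_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);

    for (int i = 0; i < n_slices; i++) {
        glClear(GL_COLOR_BUFFER_BIT);
        glBindTexture(GL_TEXTURE_2D, slice_tex[i]);

        /* Quad placed at this slice's depth within the volume. */
        float z = -1.0f + 2.0f * (float)i / (float)(n_slices - 1);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(-1, -1, z);
        glTexCoord2f(1, 0); glVertex3f( 1, -1, z);
        glTexCoord2f(1, 1); glVertex3f( 1,  1, z);
        glTexCoord2f(0, 1); glVertex3f(-1,  1, z);
        glEnd();

        /* Accumulate this slice with weight 1/n: the running sum
           becomes the average projection through the volume. */
        glAccum(GL_ACCUM, 1.0f / (float)n_slices);
    }
    glAccum(GL_RETURN, 1.0f);   /* copy the result to the framebuffer */
}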

Relevance: 10.00%

Abstract:

Colophon