956 results for Cluster Counting Algorithm
Abstract:
Abstract based on the one in the publication
Abstract:
A collection of poems for children aged three to six. The rhymes are both entertaining and educational, introducing children to arithmetic, reading and writing.
Abstract:
Tom cannot sleep even though he is surrounded by his stuffed toys, so his father suggests he try counting sheep. He starts counting, but the seventh sheep, thin and nimble, disappears into the bedroom closet before the boy can blink. The boy tries counting other creatures and is surprised by wolves, pythons, mountain goats, pirates, penguins, vampires, ghosts, and tigers until, fortunately, he closes the door, turns off the light, and can sleep. The story is a cumulative tale in verse. The creatures appear across double-page spreads in a variety of sizes, angles and shapes, giving a sense of movement. The right-hand page is a text box and sets of silhouettes. For reading aloud, and for counting up to one hundred. The text is ideal for drawing attention to the wide variety of word choices and spellings.
Abstract:
Abstract taken from the publication
Abstract:
Abstract based on the one in the publication
Abstract:
Diffusion tensor magnetic resonance imaging, which measures directional information of water diffusion in the brain, has emerged as a powerful tool for human brain studies. In this paper, we introduce a new Monte Carlo-based fiber tracking approach to estimate brain connectivity. One of the main characteristics of this approach is that all parameters of the algorithm are automatically determined at each point using the entropy of the eigenvalues of the diffusion tensor. Experimental results show the good performance of the proposed approach.
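The central idea, setting the tracking parameters at each voxel from the entropy of the diffusion-tensor eigenvalues, can be illustrated with a minimal NumPy sketch. The entropy function below follows the standard definition; the mapping from entropy to cone angle and sample count is a hypothetical placeholder, not the mapping used in the paper.

```python
import numpy as np

def eigenvalue_entropy(diffusion_tensor):
    """Shannon entropy of the normalized eigenvalues of a 3x3 diffusion tensor.

    Low entropy -> one dominant direction (coherent fibre);
    high entropy -> nearly isotropic diffusion.
    """
    eigvals = np.linalg.eigvalsh(diffusion_tensor)   # real eigenvalues, ascending
    eigvals = np.clip(eigvals, 1e-12, None)          # guard against zeros
    p = eigvals / eigvals.sum()                      # normalize to a distribution
    return float(-(p * np.log(p)).sum())             # in nats; maximum is log(3)

def step_parameters(diffusion_tensor, max_angle_deg=60.0, base_samples=100):
    """Hypothetical mapping from entropy to per-voxel Monte Carlo parameters.

    Only illustrates the principle that uncertain (high-entropy) voxels get a
    wider sampling cone and more Monte Carlo samples; the paper's actual
    mapping is not reproduced here.
    """
    h = eigenvalue_entropy(diffusion_tensor) / np.log(3.0)   # scale to [0, 1]
    return {
        "cone_angle_deg": max_angle_deg * h,                  # wider cone when uncertain
        "n_samples": int(np.ceil(base_samples * (0.5 + 0.5 * h))),
    }

# Example: a strongly anisotropic tensor (one dominant eigenvalue)
D = np.diag([1.7e-3, 0.3e-3, 0.2e-3])
print(eigenvalue_entropy(D), step_parameters(D))
```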
Abstract:
This paper discusses auditory brainstem response (ABR) testing for infants.
Abstract:
This paper describes the results of an investigation which examined the efficacy of a feedback equalization algorithm incorporated into the Central Institute for the Deaf Wearable Digital Hearing Aid. The study examined whether the feedback equalization would allow for greater usable gains when subjects listened to soft speech signals, and if so, whether or not this would improve speech intelligibility.
Abstract:
World trade has many actors that are extremely well positioned and others that are searching for new strategies to improve their position. Likewise, the differences between international, national and/or local markets are notable: while some expand by leaps and bounds, others advance step by step. The main objective of this work is therefore to determine the strategies that companies in the garment sector of the Atuntaqui textile cluster can implement to face international competition. With this in mind, the work is divided into three chapters. The first chapter consists of an analysis of the main conceptual elements (the global value chain, management within the GVC, and industrial clusters); this is followed by a brief overview of the industry, world trade and the main importers and exporters. Next, the global trends adopted by the countries of Latin America and the Caribbean are analyzed, among them export platforms, clusters and international logistics. The second chapter analyzes the main statistics on the historical behavior of imports and exports, the characterization of the companies, the technological environment, the costs and expenses incurred in some cantons, and the labor market. Finally, the third chapter offers a local perspective on the industry in Atuntaqui, covering topics such as the industry cluster, the characterization of the companies, the business climate, the SWOT analysis and the main strategies.
Abstract:
An improved algorithm for the generation of gridded window brightness temperatures is presented. The primary data source is the International Satellite Cloud Climatology Project, level B3 data, covering the period from July 1983 to the present. The algorithm takes window brightness temperatures from multiple satellites, both geostationary and polar orbiting, which have already been navigated and normalized radiometrically to the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer, and generates 3-hourly global images on a 0.5 degrees by 0.5 degrees latitude-longitude grid. The gridding uses a hierarchical scheme based on spherical kernel estimators. As part of the gridding procedure, the geostationary data are corrected for limb effects using a simple empirical correction to the radiances, from which the corrected temperatures are computed. This is in addition to the application of satellite zenith angle weighting to downweight limb pixels in preference to nearer-nadir pixels. The polar orbiter data are windowed on the target time with temporal weighting to account for the noncontemporaneous nature of the data. Large regions of missing data are interpolated from adjacent processed images using a form of motion-compensated interpolation based on the estimation of motion vectors using a hierarchical block matching scheme. Examples are shown of the various stages in the process. Also shown are examples of the usefulness of this type of data in GCM validation.
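As a rough illustration of two of the ingredients described above, kernel-based gridding on the sphere and satellite zenith-angle down-weighting of limb pixels, here is a toy NumPy sketch. The Gaussian kernel, the cos(zenith) weight and the bandwidth are illustrative assumptions; they do not reproduce the hierarchical spherical kernel estimator or the limb correction of the actual algorithm.

```python
import numpy as np

def great_circle_deg(lat1, lon1, lat2, lon2):
    """Great-circle separation in degrees between two points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    cosd = (np.sin(lat1) * np.sin(lat2)
            + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
    return np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))

def grid_brightness_temps(obs_lat, obs_lon, obs_tb, obs_zenith,
                          grid_lat, grid_lon, bandwidth_deg=0.5):
    """Kernel-weighted average of observed brightness temperatures onto grid points.

    Weights combine (a) a Gaussian kernel in great-circle distance, standing in
    for a spherical kernel estimator, and (b) cos(zenith) to downweight limb
    pixels relative to near-nadir pixels (an illustrative choice of weight).
    """
    gridded = np.full(grid_lat.shape, np.nan)
    for idx in np.ndindex(grid_lat.shape):
        d = great_circle_deg(obs_lat, obs_lon, grid_lat[idx], grid_lon[idx])
        w = np.exp(-0.5 * (d / bandwidth_deg) ** 2) * np.cos(np.radians(obs_zenith))
        if w.sum() > 0:
            gridded[idx] = np.sum(w * obs_tb) / w.sum()
    return gridded

# Toy usage: a handful of observations gridded onto a 0.5-degree patch
obs_lat = np.array([10.1, 10.3, 10.6])
obs_lon = np.array([20.2, 20.4, 20.7])
obs_tb = np.array([285.0, 287.5, 290.0])       # window brightness temperatures (K)
obs_zen = np.array([15.0, 40.0, 70.0])         # satellite zenith angles (deg)
glat, glon = np.meshgrid(np.arange(10.0, 11.0, 0.5),
                         np.arange(20.0, 21.0, 0.5), indexing="ij")
print(grid_brightness_temps(obs_lat, obs_lon, obs_tb, obs_zen, glat, glon))
```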
Abstract:
Modern methods of spawning new technological motifs are not appropriate when the goal is to realize artificial life as an actual real-world entity unto itself (Pattee 1995; Brooks 2006; Chalmers 1995). Many fundamental aspects of such a machine are absent from common methods, which generally lack methodologies of construction. In this paper we combine classical and modern studies in an attempt to realize an artificial life form from first principles. A model of an algorithm is introduced, its methodology of construction is presented, and the fundamental source from which it sprang is discussed.
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours on 40 processors and produces roughly 20 GB of output as 50000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine. This is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run. (5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
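The interaction pattern described above (launch a remote run, stream output files back while it is in progress, and delete them from the remote system) can be sketched generically in Python. The endpoint paths and JSON fields below are invented purely for illustration and are not the real G-Rex API; in practice the GRexRun client performs this cycle transparently from within the existing workflow script.

```python
import time
import requests  # third-party HTTP client: pip install requests

# Placeholder service URL; not a real G-Rex deployment.
SERVICE = "http://grex.example.org/ocean-model"

def run_remote_job(input_files, poll_seconds=30):
    """Generic launch / poll / stream-output / clean-up cycle.

    Endpoint paths ('/jobs', '/status', '/outputs') and JSON fields are
    hypothetical stand-ins used only to illustrate the RESTful pattern.
    """
    # 1. Upload the input files and start the remote run.
    handles = {name: open(path, "rb") for name, path in input_files.items()}
    try:
        job = requests.post(f"{SERVICE}/jobs", files=handles).json()
    finally:
        for fh in handles.values():
            fh.close()
    job_url = f"{SERVICE}/jobs/{job['id']}"

    # 2. While the run is in progress, download each new output file and
    #    delete it from the server so output never accumulates remotely.
    while True:
        status = requests.get(f"{job_url}/status").json()
        for name in status.get("new_outputs", []):
            data = requests.get(f"{job_url}/outputs/{name}").content
            with open(name, "wb") as out:
                out.write(data)                    # available for local monitoring
            requests.delete(f"{job_url}/outputs/{name}")
        if status["state"] in ("finished", "failed"):
            return status["state"]
        time.sleep(poll_seconds)
```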