871 results for Language Acquisition


Relevance:

20.00%

Publisher:

Abstract:

ADMB2R is a collection of AD Model Builder routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. ADMB2R provides both the means to transfer data structures significantly more complex than simple tables, and an archive mechanism to store data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder. We then analyse results with R. We desired to automate data transfer to speed diagnostics during working-group meetings. We thus developed the ADMB2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, dataframes, vectors or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. Having the capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time, our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to treat model results in analyses or meta-analyses devised months—or even years—later. We offer ADMB2R to others in the hope that they will find it useful. (PDF contains 30 pages)
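The mechanism this abstract describes (and which the C2R and For2R abstracts below share) is writing the master list as a plain-text R expression that a single dget() call can parse. The following minimal C++ sketch illustrates that idea only; the RListWriter class, its method names, and the file layout are illustrative assumptions, not the actual ADMB2R/C2R/For2R interface.

```cpp
// Hypothetical sketch of emitting a dget()-readable R list from a compiled
// model. Class, method, and file names are assumptions for illustration.
#include <fstream>
#include <string>
#include <vector>

class RListWriter {
public:
    explicit RListWriter(const std::string& path) : out_(path) {
        out_ << "list(";                              // open the master list
    }
    // Write a named scalar value.
    void write_value(const std::string& name, double x) {
        sep();
        out_ << name << " = " << x;
    }
    // Write a named numeric vector as `name = c(v1, v2, ...)`.
    void write_vector(const std::string& name, const std::vector<double>& v) {
        sep();
        out_ << name << " = c(";
        for (std::size_t i = 0; i < v.size(); ++i)
            out_ << (i ? ", " : "") << v[i];
        out_ << ")";
    }
    // Write a named matrix, filled column-wise as R's matrix() expects.
    void write_matrix(const std::string& name,
                      const std::vector<std::vector<double>>& m) {
        sep();
        const std::size_t nrow = m.size(), ncol = nrow ? m[0].size() : 0;
        out_ << name << " = matrix(c(";
        bool first = true;
        for (std::size_t j = 0; j < ncol; ++j)
            for (std::size_t i = 0; i < nrow; ++i) {
                if (!first) out_ << ", ";
                out_ << m[i][j];
                first = false;
            }
        out_ << "), nrow = " << nrow << ", ncol = " << ncol << ")";
    }
    ~RListWriter() { out_ << ")\n"; }                 // close the master list
private:
    void sep() { out_ << (any_ ? ",\n  " : "\n  "); any_ = true; }
    std::ofstream out_;
    bool any_ = false;
};

int main() {
    RListWriter w("results.rdat");                    // hypothetical output file
    w.write_value("nobs", 120);
    w.write_vector("ssb", {512.3, 498.7, 473.1});
    w.write_matrix("naa", {{10.1, 8.2}, {6.3, 5.4}});
    // In R, the whole structure is then read back with one call:
    //   res <- dget("results.rdat"); res$ssb
    return 0;
}
```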

Relevance:

20.00%

Publisher:

Abstract:

C2R is a collection of C routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. C2R provides both the means to transfer data structures significantly more complex than simple tables, and an archive mechanism to store data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder. We then analyse results with R. We desired to automate data transfer to speed diagnostics during working-group meetings. We thus developed the C2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, dataframes, vectors or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. Having the capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time, our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to treat model results in analyses or meta-analyses devised months—or even years—later. We offer C2R to others in the hope that they will find it useful. (PDF contains 27 pages)

Relevance:

20.00%

Publisher:

Abstract:

For2R is a collection of Fortran routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. For2R provides both the means to transfer data structures significantly more complex than simple tables, and an archive mechanism to store data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder. We then analyse results with R. We desired to automate data transfer to speed diagnostics during working-group meetings. We thus developed the For2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, dataframes, vectors or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. Having the capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time, our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to treat model results in analyses or meta-analyses devised months—or even years—later. We offer For2R to others in the hope that they will find it useful. (PDF contains 31 pages)

Relevance:

20.00%

Publisher:

Abstract:

As part of an LSIS Regional Response Fund project, Essex Adult Community Learning (ACL) has created a toolkit that trains foreign language tutors to produce digital resources combining audio, video, text and communication activities. The toolkit, now an integral part of a blended learning language course, has also developed tutors' skills in using technology for teaching and learning. A further aim has been to provide an alternative and flexible method of delivery, especially where funding cuts have affected the cost of running taught courses.

Relevance:

20.00%

Publisher:

Abstract:

Eguíluz, Federico; Merino, Raquel; Olsen, Vickie; Pajares, Eterio; Santamaría, José Miguel (eds.)

Relevance:

20.00%

Publisher:

Abstract:

In recent decades, considerable progress has been made in computer-aided learning, driven by advances in computer science and computer systems. Although the field has tended to lag somewhat behind the latest solutions, it has moved steadily forward, taking advantage of innovations as they appear. As long as computer science keeps advancing (and it will, at least in the near future), systems built on those advances will keep advancing too, because people will always need to study, sometimes for pleasure and more often out of necessity. Not all efforts in computer-aided learning have pointed in the same direction. Most address one or a few of the problems that arise while studying and disregard solutions proposed for other problems. The reasons vary: sometimes the solutions are simply incompatible; sometimes a research project deliberately isolates the problem under study; and, in commercial products, licences and patents often prevent new projects from building on previous work. Technology has moved on, and this work is an attempt to use some of the options it now offers, combining old ideas with new ones.

Relevance:

20.00%

Publisher:

Abstract:

For sign languages used by deaf communities, linguistic corpora have until recently been unavailable, owing to the lack of a writing system and a written culture in these communities and to the very recent advent of digital video. Improvements in video and computer technology have now made larger sign language datasets possible; however, large sign language datasets that are fully machine-readable remain elusive, for two reasons: (1) inconsistencies that arise when signs are annotated by means of a spoken/written language, and (2) the fact that much signed interaction is not fully composed of lexical signs (the equivalent of words), instead consisting of constructions that are less conventionalised. As sign language corpus building progresses, the potential for some standards in annotation is beginning to emerge, but before this project there had been no attempt to standardise these practices across corpora, which is required for comparing data cross-linguistically. The project therefore had three aims: (1) to develop annotation standards for glosses (the lexical/word level); (2) to test their reliability and validity; and (3) to improve current software tools that facilitate a reliable workflow. Overall, the project aimed not only to set a standard for the whole field of sign language studies throughout the world but also to make significant advances toward two of the world's largest machine-readable datasets for sign languages, specifically the BSL Corpus (British Sign Language, http://bslcorpusproject.org) and the Corpus NGT (Sign Language of the Netherlands, http://www.ru.nl/corpusngt).
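One concrete aspect of a "machine-readable" gloss annotation is that every ID-gloss on an annotation tier should resolve to an entry in a lexical database, so that variant spellings of the same sign cannot creep in. The C++ sketch below illustrates only that consistency-check idea; it is a hypothetical illustration, not the project's actual tools or data model, and the GlossAnnotation fields and function names are assumptions.

```cpp
// Hypothetical consistency check: flag time-aligned gloss annotations whose
// ID-gloss is missing from the lexical database.
#include <iostream>
#include <set>
#include <string>
#include <vector>

struct GlossAnnotation {
    std::string id_gloss;   // conventional ID-gloss label, e.g. "BOOK"
    int start_ms;           // onset of the sign in the video, in milliseconds
    int end_ms;             // offset of the sign, in milliseconds
};

// Return the annotations whose ID-gloss is absent from the lexicon, i.e. the
// inconsistencies that shared annotation standards are meant to prevent.
std::vector<GlossAnnotation> find_unknown_glosses(
        const std::vector<GlossAnnotation>& tier,
        const std::set<std::string>& lexicon) {
    std::vector<GlossAnnotation> unknown;
    for (const auto& a : tier)
        if (lexicon.find(a.id_gloss) == lexicon.end())
            unknown.push_back(a);
    return unknown;
}

int main() {
    std::set<std::string> lexicon = {"BOOK", "READ", "SCHOOL"};   // assumed entries
    std::vector<GlossAnnotation> tier = {
        {"BOOK", 1200, 1560},
        {"BOOKS", 1580, 1900},   // inconsistent variant of the same lemma
    };
    for (const auto& a : find_unknown_glosses(tier, lexicon))
        std::cout << "Unknown gloss '" << a.id_gloss << "' at "
                  << a.start_ms << " ms\n";
    return 0;
}
```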

Relevance:

20.00%

Publisher:

Abstract:

In April 2005, a SHOALS 1000T LIDAR system was used as an efficient alternative for safely acquiring data to describe the existing conditions of nearshore bathymetry and the intertidal zone over an approximately 40.7 km² (11.8 nm²) portion of hazardous coastline within the Olympic Coast National Marine Sanctuary (OCNMS). Data were logged from 1,593 km (860 nm) of track lines in just over 21 hours of flight time. Several islands and offshore rocks were also surveyed, and over 24,000 geo-referenced digital still photos were captured to assist with data cleaning and QA/QC. The 1 kHz bathymetry laser obtained a maximum water depth of 22.2 meters. Floating kelp beds, breaking surf lines and turbid water were all challenges to the survey. Although sea state was favorable for this time of the year, recent heavy rainfall and a persistent low-lying layer of fog reduced acquisition productivity. The existence of a completed VDatum model covering this same geographic region permitted the LIDAR data to be vertically transformed and merged with existing shallow water multibeam data and referenced to the mean lower low water (MLLW) tidal datum. Analysis of a multibeam bathymetry-LIDAR difference surface containing over 44,000 samples indicated surface deviations from –24.3 to 8.48 meters, with a mean difference of –0.967 meters, and standard deviation of 1.762 meters. Errors in data cleaning and false detections due to interference from surf, kelp, and turbidity likely account for the larger surface separations, while the remaining general surface difference trend could partially be attributed to a more dense data set, and shoal-biased cleaning, binning and gridding associated with the multibeam data for maintaining conservative least depths important for charting dangers to navigation. (PDF contains 27 pages.)
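The difference-surface statistics quoted above (minimum, maximum, mean, and standard deviation of cell-by-cell multibeam-minus-LIDAR depths) amount to a simple grid comparison. The C++ sketch below shows that computation under stated assumptions; it is not the survey's actual processing software, and the function and field names are invented for the example.

```cpp
// Illustrative sketch of difference-surface statistics between two
// co-registered depth grids (assumed already on the same MLLW-referenced
// grid, with NaN marking cells covered by only one survey).
#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>
#include <vector>

struct DiffStats { double min, max, mean, stddev; long n; };

DiffStats difference_surface(const std::vector<double>& multibeam,
                             const std::vector<double>& lidar) {
    DiffStats s{1e30, -1e30, 0.0, 0.0, 0};
    double sum = 0.0, sumsq = 0.0;
    for (std::size_t i = 0; i < multibeam.size() && i < lidar.size(); ++i) {
        if (std::isnan(multibeam[i]) || std::isnan(lidar[i])) continue;
        const double d = multibeam[i] - lidar[i];   // multibeam minus LIDAR
        s.min = std::min(s.min, d);
        s.max = std::max(s.max, d);
        sum += d;
        sumsq += d * d;
        ++s.n;
    }
    if (s.n > 1) {
        s.mean = sum / s.n;
        // Sample standard deviation of the per-cell differences.
        s.stddev = std::sqrt((sumsq - sum * sum / s.n) / (s.n - 1));
    }
    return s;
}

int main() {
    const double NaN = std::numeric_limits<double>::quiet_NaN();
    std::vector<double> mb    = {10.2, 11.5, 9.8, NaN};   // toy depth grids
    std::vector<double> lidar = {10.9, 12.3, 9.7, 8.4};
    DiffStats s = difference_surface(mb, lidar);
    std::cout << "n=" << s.n << " mean=" << s.mean
              << " stddev=" << s.stddev << "\n";
    return 0;
}
```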