880 results for Functional Requirements for Authority Data (FRAD)


Relevance: 40.00%

Abstract:

In the last decade the principle of Open Access to publicly funded research has been gaining support from policy-makers and funders across Europe, both at the national level and within the European Union context. At the European level, some of the first relevant steps were taken by the European Research Council (ERC) with a statement supporting Open Access (2006), shortly followed by guidelines for researchers funded by the ERC (2007) stating that all peer-reviewed publications from ERC-funded projects should be made openly accessible shortly after their publication. Those guidelines were revised in October 2013, reinforcing the mandatory character of the requirements and expanding them to monographs.

Relevance: 40.00%

Abstract:

The educational process is characterised by multiple outcomes such as the achievement of academic results of various standards and non-academic achievements. This paper shows how data envelopment analysis (DEA) can be used to guide secondary schools to improved performance through role-model identification and target setting in a way which recognises the multi-outcome nature of the education process and reflects the relative desirability of improving individual outcomes. The approach presented in the paper draws from a DEA-based assessment of the schools of a local education authority carried out by the authors. Data from that assessment are used to illustrate the approach presented in the paper. (Key words: Data envelopment analysis, education, target setting.)
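
For readers unfamiliar with the method, the sketch below solves a minimal output-oriented CCR DEA model as a linear program, in the spirit of the assessment described above. It is only an illustration: the paper's actual formulation additionally reflects the relative desirability of improving individual outcomes, which this sketch omits, and the school data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y, o):
    """Output-oriented CCR efficiency for decision-making unit o.

    X: inputs, shape (m, n_units); Y: outputs, shape (s, n_units).
    Returns phi >= 1: phi == 1 means unit o is on the frontier;
    phi > 1 is the factor by which all its outputs could be expanded
    while staying within observed best practice.
    """
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [phi, lambda_1 .. lambda_n]; maximise phi.
    c = np.concatenate(([-1.0], np.zeros(n)))
    # Input constraints: sum_j lambda_j * x_ij <= x_io
    A_in = np.hstack([np.zeros((m, 1)), X])
    b_in = X[:, o]
    # Output constraints: phi * y_ro - sum_j lambda_j * y_rj <= 0
    A_out = np.hstack([Y[:, [o]], -Y])
    b_out = np.zeros(s)
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Hypothetical data: 4 schools, 1 input (teaching hours per pupil),
# 2 outcomes (exam pass rate, attendance rate).
X = np.array([[30.0, 35.0, 28.0, 32.0]])
Y = np.array([[60.0, 80.0, 50.0, 55.0],
              [90.0, 95.0, 85.0, 80.0]])
for school in range(4):
    print(f"school {school}: phi = {dea_output_efficiency(X, Y, school):.3f}")
```

Schools with phi = 1 lie on the efficient frontier and serve as role models; for an inefficient school, the non-zero lambda weights identify its peers and phi scales its observed outcomes into improvement targets.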

Relevance: 40.00%

Abstract:

This dissertation established a software-hardware integrated design for a multisite data repository in pediatric epilepsy. A total of 16 institutions formed a consortium for this web-based application. This innovative, fully operational web application allows users to upload and retrieve information through a unique human-computer graphical interface that is remotely accessible to all users of the consortium. A solution based on a Linux platform with MySQL and PHP scripts was selected. Research was conducted to evaluate mechanisms to electronically transfer diverse datasets from different hospitals and to collect the clinical data in concert with the related functional magnetic resonance imaging (fMRI) data. What is unique in the approach considered is that all pertinent clinical information about patients is synthesized, with input from clinical experts, into 4 different forms: Clinical, fMRI scoring, Image information, and Neuropsychological data entry forms. A first contribution of this dissertation was in proposing an integrated processing platform that is site- and scanner-independent, in order to uniformly process the varied fMRI datasets and to generate comparative brain activation patterns. The data collection from the consortium complied with IRB requirements and provides all the safeguards for security and confidentiality. An fMRI-based software library was used to perform data processing and statistical analysis to obtain the brain activation maps. The Lateralization Index (LI) of healthy control (HC) subjects was evaluated in contrast to that of localization-related epilepsy (LRE) subjects. Over 110 activation maps were generated, and their respective LIs were computed, yielding the following groups: (a) strong right lateralization (HC=0%, LRE=18%), (b) right lateralization (HC=2%, LRE=10%), (c) bilateral (HC=20%, LRE=15%), (d) left lateralization (HC=42%, LRE=26%), and (e) strong left lateralization (HC=36%, LRE=31%). Moreover, nonlinear multidimensional decision functions were used to seek an optimal separation between typical and atypical brain activations on the basis of demographics as well as the extent and intensity of these brain activations. The intent was not to seek the highest output measures given the inherent overlap of the data, but rather to assess which of the many dimensions were critical in the overall assessment of typical and atypical language activations, with the freedom to select any number of dimensions and impose any degree of complexity in the nonlinearity of the decision space.
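
As a concrete illustration of the lateralization measure used above, the sketch below computes a standard Lateralization Index, LI = (L - R) / (L + R), from left- and right-hemisphere activation counts and bins it into the five categories reported. The cut-off values are illustrative assumptions; the dissertation's exact thresholds are not given in the abstract.

```python
def lateralization_index(left_count, right_count):
    """LI = (L - R) / (L + R): +1 is fully left, -1 fully right.

    Counts are suprathreshold activated voxels in homologous
    left/right language regions of interest.
    """
    total = left_count + right_count
    if total == 0:
        raise ValueError("no activated voxels in either hemisphere")
    return (left_count - right_count) / total

def classify_li(li, weak=0.2, strong=0.5):
    # Cut-offs are illustrative assumptions, not the dissertation's.
    if li <= -strong:
        return "strong right lateralization"
    if li <= -weak:
        return "right lateralization"
    if li < weak:
        return "bilateral"
    if li < strong:
        return "left lateralization"
    return "strong left lateralization"

print(classify_li(lateralization_index(830, 410)))  # -> left lateralization
```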

Relevance: 40.00%

Abstract:

The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4,000,000 data points) from 539 papers had been archived. Here we present the developments of this data compilation in the five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans remains relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance of considerably more data archived on calcification and primary production than on other processes has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.

Relevance: 40.00%

Abstract:

A substantial amount of information on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing data production, however, pushes data analytic platforms to their limit. This thesis proposes techniques for more efficient textual big data analysis suitable for the Hadoop analytic platform. This research explores the direct processing of compressed textual data. The focus is on developing novel compression methods with a number of desirable properties to support text-based big data analysis in distributed environments. The novel contributions of this work include the following. Firstly, a Content-aware Partial Compression (CaPC) scheme is developed. CaPC makes a distinction between informational and functional content, in which only the informational content is compressed. Thus, the compressed data is made transparent to existing software libraries, which often rely on functional content to work. Secondly, a context-free bit-oriented compression scheme (Approximated Huffman Compression) based on the Huffman algorithm is developed. This uses a hybrid data structure that allows pattern searching in compressed data in linear time. Thirdly, several modern compression schemes have been extended so that the compressed data can be safely split with respect to logical data records in distributed file systems. Furthermore, an innovative two-layer compression architecture is used, in which each compression layer is appropriate for the corresponding stage of data processing. Peripheral libraries are developed that seamlessly link the proposed compression schemes to existing analytic platforms and computational frameworks, and also make the use of the compressed data transparent to developers. The compression schemes have been evaluated for a number of standard MapReduce analysis tasks using a collection of real-world datasets. In comparison with existing solutions, they have shown substantial improvement in performance and significant reduction in system resource requirements.
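
To make the content-aware idea concrete, here is a toy sketch of the principle behind CaPC: frequent "informational" tokens are recoded with a dictionary, while "functional" characters such as record and field delimiters pass through untouched, so line- and field-oriented tools continue to work directly on the compressed stream. This is a simplified illustration under assumed details, not the thesis's actual byte-oriented codec.

```python
import re

def build_dictionary(text, max_entries=256):
    """Map the most frequent non-whitespace tokens to short 2-byte codes."""
    words = re.findall(r"\S+", text)
    freq = {}
    for w in words:
        freq[w] = freq.get(w, 0) + 1
    frequent = sorted(freq, key=freq.get, reverse=True)[:max_entries]
    # \x01 is assumed not to occur in the input (escape byte assumption).
    return {w: f"\x01{i:02x}" for i, w in enumerate(frequent)}

def capc_like_encode(text, dictionary):
    # Split on whitespace but KEEP the delimiters ("functional" content)
    # in the output, so records and fields stay splittable.
    parts = re.split(r"(\s+)", text)
    return "".join(dictionary.get(p, p) for p in parts)

text = "error\tdisk full\nerror\tdisk full\nok\tdisk healthy\n"
d = build_dictionary(text)
encoded = capc_like_encode(text, d)
# Newlines and tabs survive compression, so splitting into records
# and fields still works on the encoded stream:
assert encoded.count("\n") == text.count("\n")
assert all(len(rec.split("\t")) <= 2 for rec in encoded.splitlines())
```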

Relevance: 40.00%

Abstract:

The MAREDAT atlas covers 11 types of plankton, ranging in size from bacteria to jellyfish. Together, these plankton groups determine the health and productivity of the global ocean and play a vital role in the global carbon cycle. Working within a uniform and consistent spatial and depth grid (map) of the global ocean, the researchers compiled thousands to tens of thousands of data points to identify regions of plankton abundance and scarcity as well as areas of data abundance and scarcity. At many of the grid points, the MAREDAT team accomplished the difficult conversion from abundance (numbers of organisms) to biomass (carbon mass of organisms). The MAREDAT atlas provides an unprecedented global data set for ecological and biogeochemical analysis and modeling, as well as a clear mandate for compiling additional existing data and for focusing future data-gathering efforts on key groups in key areas of the ocean. This is a gridded data product about diazotrophic organisms. There are 6 variables, each gridded on dimensions of 360 (longitude) × 180 (latitude) × 33 (depth) × 12 (month). The first group of 3 variables is: (1) number of biomass observations, (2) biomass, and (3) special nifH-gene-based biomass. The second group of 3 variables is the same as the first except that it grids only non-zero data. We have constructed a database on diazotrophic organisms in the global pelagic upper ocean by compiling more than 11,000 direct field measurements, comprising 3 sub-databases: (1) nitrogen fixation rates, (2) cyanobacterial diazotroph abundances from cell counts and (3) cyanobacterial diazotroph abundances from qPCR assays targeting nifH genes. Biomass conversion factors are estimated based on cell sizes to convert abundance data to diazotrophic biomass. Data are assigned to 3 groups: Trichodesmium, unicellular diazotrophic cyanobacteria (groups A, B and C when applicable) and heterocystous cyanobacteria (Richelia and Calothrix). Total nitrogen fixation rates and diazotrophic biomass are calculated by summing the values from all the groups. Some of the nitrogen fixation rates are whole-seawater measurements and are used as total nitrogen fixation rates. Both volumetric and depth-integrated values were reported; depth-integrated values are also calculated for those vertical profiles with values at 3 or more depths.
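
The sketch below illustrates how a gridded product of the stated dimensions (360 longitude × 180 latitude × 33 depth × 12 month) can be held in memory, and how the depth-integration rule described above (only profiles with values at 3 or more depths) might be applied. The depth axis and the example values are hypothetical; the real product defines its own grid and units.

```python
import numpy as np

n_lon, n_lat, n_depth, n_month = 360, 180, 33, 12
depth_m = np.linspace(0, 500, n_depth)  # hypothetical depth axis (m)

# One gridded variable, e.g. diazotroph biomass (mg C / m^3), NaN where
# no observation exists. float32 keeps the array at roughly 100 MB.
biomass = np.full((n_lon, n_lat, n_depth, n_month), np.nan, dtype=np.float32)

def depth_integrate(profile, depths, min_levels=3):
    """Trapezoidal depth integral (mg C / m^2) for one vertical profile.

    Mirrors the rule in the text: only profiles with values at 3 or
    more depths are integrated; otherwise NaN is returned.
    """
    valid = ~np.isnan(profile)
    if valid.sum() < min_levels:
        return float("nan")
    z, v = depths[valid], profile[valid]
    return float(np.sum(np.diff(z) * (v[1:] + v[:-1]) / 2.0))

# Example: one grid cell with observations at 4 depths in January.
biomass[200, 100, :4, 0] = [2.0, 1.5, 0.8, 0.1]
print(depth_integrate(biomass[200, 100, :, 0], depth_m))
```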

Relevance: 40.00%

Abstract:

The Iowa Department of Transportation began preparation for the acquisition of an electronic document management system in 1996. The first phase was the development of a strategic plan, which provided guidelines for defining the acquisition and implementation of a document management system to automate document handling and distribution. Phase 2 involved developing draft standards (document, indexing and technology) for planning and implementation of a document management system. These standards were to identify existing industry standards and determine which would best support the specific requirements of the Iowa Department of Transportation. During development of these standards, the decision was made to enlarge the scope of this effort from a document management system to a records management system (RMS). Phase 3 identified business processes that were to be further developed as pilot projects of a much larger agency-wide records management system.

Relevance: 40.00%

Abstract:

The use of secondary data in health care research has become a very important issue over the past few years. Data from the treatment context are being used for evaluation of medical data for external quality assurance, as well as to answer medical questions in the form of registers and research databases. Additionally, the establishment of electronic clinical systems like data warehouses provides new opportunities for the secondary use of clinical data. Because health data is among the most sensitive information about an individual, the data must be safeguarded from disclosure.

Relevance: 40.00%

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques. Using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and 2 other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:

• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (Section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (Section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (Section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive super-computers (Section 5.3.5).

To evaluate ZSIM, two types of test circuits were used:

1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open source files.

The experimental results show that with SIMD acceleration and multicore parallelism, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives comparable simulation performance to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was proving that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
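
The sketch below illustrates the general style of simulator described above, though not ZSIM's actual implementation: the netlist is stored in flat parallel arrays, gates are evaluated level by level so that no two gates in a level ever write the same slot (hence no explicit locks), and one test pattern is packed into each bit of a 64-bit word so every bitwise operation simulates all patterns at once.

```python
import numpy as np

AND, OR, XOR, NOT = 0, 1, 2, 3

def simulate(n_inputs, gate_type, src_a, src_b, levels, inputs):
    """Bit-parallel, levelized gate simulation over flat arrays.

    gate_type/src_a/src_b hold one entry per gate; gate g drives
    signal n_inputs + g. levels is a list of index arrays, each
    containing gates whose source signals are already computed.
    inputs: one uint64 per primary input (64 packed test patterns).
    """
    values = np.zeros(n_inputs + len(gate_type), dtype=np.uint64)
    values[:n_inputs] = inputs
    for gates in levels:
        a = values[src_a[gates]]   # gathers over flat index arrays:
        b = values[src_b[gates]]   # the SIMD-friendly access pattern
        t = gate_type[gates]
        out = np.where(t == AND, a & b,
              np.where(t == OR,  a | b,
              np.where(t == XOR, a ^ b, ~a)))
        values[n_inputs + gates] = out  # disjoint slots -> no locks needed
    return values

# Test circuit: a full adder. s = a^b^cin, cout = (a&b) | (cin&(a^b)).
gate_type = np.array([XOR, XOR, AND, AND, OR])
src_a = np.array([0, 3, 0, 2, 5])   # signal indices feeding each gate
src_b = np.array([1, 2, 1, 3, 6])
levels = [np.array([0, 2]), np.array([1, 3]), np.array([4])]
rng = np.random.default_rng(0)
inputs = rng.integers(0, 2**63, size=3, dtype=np.uint64)
v = simulate(3, gate_type, src_a, src_b, levels, inputs)
assert v[4] == inputs[0] ^ inputs[1] ^ inputs[2]  # sum bit, all patterns
```

In a real SIMD implementation the per-level loop body maps onto gather instructions over the src_a/src_b index arrays, which is the property the first experiment above set out to verify.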

Relevance: 40.00%

Abstract:

Background: Understanding transcriptional regulation by genome-wide microarray studies can contribute to unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. The existing software systems for microarray data analysis implement the mentioned standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays, and for the most common synthesized oligo arrays such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach for automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web-services is advantageous in a distributed client-server environment as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches which have much higher computational requirements than microarrays.
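
As a sketch of what SOAP-based data integration in such a distributed client-server setting can look like, the snippet below uses the Python zeep library to call a web-service operation. The WSDL URL and the operation name are hypothetical placeholders invented for illustration; EMMA 2's real service interface is not specified in the abstract.

```python
# Hypothetical sketch of a SOAP client for a microarray analysis server.
# Consult the actual service's WSDL for the real operations.
from zeep import Client

client = Client("https://emma.example.org/services/analysis?wsdl")  # hypothetical
# Hypothetical operation: fetch normalised intensities for one experiment.
result = client.service.getNormalizedData(experimentId="E-0001",
                                          normalization="quantile")
print(result)
```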

Relevance: 30.00%

Abstract:

The ability to accurately predict the lifetime of building components is crucial to optimizing building design, material selection and the scheduling of required maintenance. This paper discusses a number of possible data mining methods that can be applied to the lifetime prediction of metallic components, and how different sources of service life information could be integrated to form the basis of the lifetime prediction model.

Relevance: 30.00%

Abstract:

PURPOSE: To introduce techniques for deriving a map that relates visual field locations to optic nerve head (ONH) sectors, and to use the techniques to derive a map relating Medmont perimetric data to data from the Heidelberg Retinal Tomograph. METHODS: Spearman correlation coefficients were calculated relating each visual field location (Medmont M700) to rim area and volume measures for 10-degree ONH sectors (HRT III software) for 57 participants: 34 with glaucoma, 18 with suspected glaucoma, and 5 with ocular hypertension. Correlations were constrained to be anatomically plausible with a computational model of the axon growth of retinal ganglion cells (Algorithm GROW). GROW generated a map relating field locations to sectors of the ONH. For each location, the sector with the maximum statistically significant (P < 0.05) correlation coefficient within 40 degrees of the angle predicted by GROW was computed. Before correlation, both functional and structural data were normalized by either normative data or the fellow eye in each participant. RESULTS: The model of axon growth produced a 24-2 map that is qualitatively similar to existing maps derived from empiric data. When GROW was used in conjunction with normative data, 31% of field locations exhibited a statistically significant relationship. This proportion increased to 67% (z-test, z = 4.84; P < 0.001) when both field and rim area data were normalized with the fellow eye. CONCLUSIONS: A computational model of axon growth and normalizing data by the fellow eye can assist in constructing an anatomically plausible map connecting visual field data and sectoral ONH data.
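
A minimal sketch of the mapping step described in METHODS is given below: for one visual field location, sensitivities across participants are correlated (Spearman) with rim area in each 10-degree ONH sector, candidate sectors are restricted to within 40 degrees of the angle predicted by the growth model, and the sector with the largest significant correlation is selected. Array shapes, the sector-centre convention and the predicted angle are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def angular_distance(a, b):
    """Smallest absolute angle between two directions, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def map_location_to_sector(field_vals, rim_by_sector, predicted_angle,
                           window=40.0, alpha=0.05):
    """field_vals: (n_subjects,) sensitivities at one field location.
    rim_by_sector: (n_subjects, 36) rim area per 10-degree ONH sector.
    Returns the best sector index, or None if nothing is significant.
    """
    best_sector, best_rho = None, 0.0  # only positive correlations are
    for s in range(rim_by_sector.shape[1]):  # anatomically meaningful here
        sector_angle = s * 10 + 5  # assumed sector-centre convention
        if angular_distance(sector_angle, predicted_angle) > window:
            continue  # anatomically implausible: outside the GROW window
        rho, p = spearmanr(field_vals, rim_by_sector[:, s])
        if p < alpha and rho > best_rho:
            best_sector, best_rho = s, rho
    return best_sector
```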

Relevance: 30.00%

Abstract:

Vitamin D deficiency and insufficiency are now seen as a contemporary health problem in Australia, with possible widespread health effects not limited to bone health [1]. Despite this, the Vitamin D status (measured as serum 25-hydroxyvitamin D (25(OH)D)) of ambulatory adults has been overlooked in this country. Serum 25(OH)D status is especially important in this group, as studies have shown a link between Vitamin D and fall risk in older adults [2]. Limited data also exist on the contributions of sun exposure via ultraviolet radiation and dietary intake to serum 25(OH)D status in this population. The aims of this project were to assess the serum 25(OH)D status of a group of older ambulatory adults in South East Queensland; to assess the association between their serum 25(OH)D status and functional measures as possible indicators of fall risk; to obtain data on the sources of Vitamin D in this population and assess whether this intake was related to serum 25(OH)D status; and to describe sun protection and exposure behaviours in this group and investigate whether a relationship existed between these and serum 25(OH)D status. The collection of these data addresses key gaps identified in the literature with regard to this population group and its Vitamin D status in Australia. A convenience sample of participants (N=47) over 55 years of age was recruited for this cross-sectional, exploratory study, which was undertaken in December 2007 in south-east Queensland (Brisbane and the Sunshine Coast). Participants were required to complete a sun exposure questionnaire in addition to a Calcium and Vitamin D food frequency questionnaire. Timed Up and Go and handgrip dynamometry tests were used to examine functional capacity. Serum 25(OH)D status and blood measures of Calcium, Phosphorus and Albumin were determined through blood tests. The mean and median serum 25(OH)D for all participants in this study were 85.8 nmol/L (standard deviation 29.7 nmol/L) and 81.0 nmol/L (range 22-158 nmol/L), respectively. Analysis at the bivariate level revealed a statistically significant relationship between serum 25(OH)D status and location, with participants living on the Sunshine Coast having a mean serum 25(OH)D status 21.3 nmol/L higher than participants living in Brisbane (p=0.014). While at the descriptive level there was an apparent trend between higher outdoor exposure and increasing levels of serum 25(OH)D, no statistically significant associations between the sun measures of outdoor exposure, sun protection behaviours and phenotypic characteristics and serum 25(OH)D status were observed. Intake of both Calcium and Vitamin D was low in this sample: sixty-eight percent (68%) of participants did not meet the Estimated Average Requirement (EAR) for Calcium (median = 771.0 mg; range = 218.0-2616.0 mg), while eighty-seven percent (87%) did not meet the Adequate Intake (AI) for Vitamin D (median = 4.46 µg; range = 0.13-30.0 µg). This raises the question of how realistic meeting the new Adequate Intake for Vitamin D is when there is such a low level of Vitamin D fortification in this country. However, participants meeting the AI for Vitamin D were observed to have a significantly higher serum 25(OH)D status compared to those not meeting it (p=0.036), showing that meeting the AI for Vitamin D may play a significant role in determining Vitamin D status in this population.
By stratifying the data by categories of outdoor exposure time, a trend was observed towards increased importance of dietary Vitamin D intake as a possible determinant of serum 25(OH)D status in participants with lower outdoor exposure. While a trend towards higher Timed Up and Go scores in participants with lower 25(OH)D status was seen, this was only significant for females (p=0.014). Handgrip strength showed no statistically significant association with serum 25(OH)D status. The high serum 25(OH)D status in our sample almost certainly explains the limited relationship between functional measures and serum 25(OH)D. However, the observation of an association between slower Timed Up and Go speeds and lower serum 25(OH)D levels, even with a small sample size, is significant, as slower Timed Up and Go speeds have been associated with increased fall risk in older adults [3]. Multivariable regression analysis revealed location as the only significant determinant of serum 25(OH)D status (p=0.014), with trends (p < 0.1) towards higher serum 25(OH)D for participants who met the AI for Vitamin D and who rated themselves as having a higher health status. The results of this exploratory study show that 93.6% of participants had adequate 25(OH)D status, possibly because measurements were taken in the summer season and because of the convenience nature of the sample. However, many participants did not meet their dietary Calcium and Vitamin D requirements, which may indicate inadequate intake of these nutrients in older Australians and a higher risk of osteoporosis. The relationship between serum 25(OH)D and functional measures in this population also requires further study, especially in older adults displaying Vitamin D insufficiency or deficiency.