Abstract:
The phylogenetics of Sternbergia (Amaryllidaceae) were studied using DNA sequences of the plastid ndhF and matK genes and the nuclear internal transcribed spacer (ITS) ribosomal region for 38, 37 and 32 ingroup and outgroup accessions, respectively. All members of Sternbergia were represented by at least one accession, except S. minoica and S. schubertii, with additional taxa from Narcissus and Pancratium serving as principal outgroups. In both Bayesian and bootstrap analyses, Sternbergia was resolved and supported as sister to Narcissus and composed of two primary subclades: S. colchiciflora sister to S. vernalis, S. candida and S. clusiana, with this clade in turn sister to S. lutea and its allies. A clear relationship between the two vernal-flowering members of the genus was recovered, supporting the hypothesis of a single origin of vernal flowering in Sternbergia. However, in the S. lutea complex, the DNA markers examined did not offer sufficient resolving power to separate taxa, lending some support to the idea that S. sicula and S. greuteriana are conspecific with S. lutea.
Abstract:
This article reports on a two-year research project investigating attitudes to reading held by teachers and pupils in a sample of English primary schools. The project draws on international and national surveys of reading engagement and the findings of previous research, but seeks to provide more detailed data relating to the attitudes of individual children and the strategies used by individual schools and teachers whose pupils demonstrate positive attitudes to reading. Written questionnaires for teachers and pupils and oral interviews with teachers are used, generating both quantitative and qualitative data. Results are related to previous research literature in this area which shows a link between reading motivation and attainment, and to motivational theory. In conclusion, it is argued that teaching strategies which promote positive attitudes to reading need to be used alongside the teaching of reading skills in any effort to raise attainment.
Abstract:
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with a simple C model and with measured carbon fluxes and states. Participants were provided with the model, with synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data generated from the model with added noise, and with observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and to generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. Parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover or to the temperature sensitivity of heterotrophic respiration, were best constrained and characterised; poorly estimated parameters were those related to the allocation to and turnover of the fine root and wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes in the synthetic experiments within relatively narrow 90% confidence intervals, achieving a >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data, and the ecosystem respiration and GPP estimated through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year, for which no data were available, compared with the previous year, and by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints from data on C pools (wood, soil and fine roots) would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
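Of the algorithm families named in this abstract, the Metropolis sampler is the most compact to illustrate. Below is a minimal random-walk Metropolis sketch in Python for estimating parameters of a toy carbon model against noisy NEE data; the model form, parameter names (alpha, gpp_max, r_base, q10), noise level, and priors are assumptions for the example, not the actual REFLEX model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "carbon model": NEE = respiration - GPP (hypothetical, not REFLEX's model)
def nee_model(params, par, temp):
    alpha, gpp_max, r_base, q10 = params
    gpp = alpha * par * gpp_max / (alpha * par + gpp_max)  # light-response curve
    resp = r_base * q10 ** ((temp - 10.0) / 10.0)          # Q10 respiration
    return resp - gpp

# Synthetic "observations", analogous to REFLEX's model-generated data plus noise
par = rng.uniform(0, 1500, 500)
temp = rng.uniform(5, 25, 500)
true_params = np.array([0.03, 20.0, 2.0, 2.0])
sigma = 1.0
obs = nee_model(true_params, par, temp) + rng.normal(0, sigma, par.size)

def log_likelihood(params):
    resid = obs - nee_model(params, par, temp)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis with a flat positivity prior on all parameters
n_iter = 20000
step = np.array([0.002, 0.5, 0.1, 0.05])
chain = np.empty((n_iter, 4))
current = np.array([0.01, 10.0, 1.0, 1.5])
ll_current = log_likelihood(current)
for i in range(n_iter):
    proposal = current + step * rng.normal(size=4)
    if np.all(proposal > 0):
        ll_prop = log_likelihood(proposal)
        if np.log(rng.uniform()) < ll_prop - ll_current:
            current, ll_current = proposal, ll_prop
    chain[i] = current

# Posterior summaries after discarding burn-in; the 90% intervals correspond
# to the kind of confidence-interval comparison reported across algorithms
burn = chain[n_iter // 2:]
print("posterior means:", burn.mean(axis=0))
print("90% intervals:", np.percentile(burn, [5, 95], axis=0).T)
```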
Abstract:
Cladistic analyses begin with an assessment of variation for a group of organisms and the subsequent representation of that variation as a data matrix. The step of converting observed organismal variation into a data matrix has been described as subjective, contentious, under-investigated, imprecise, unquantifiable, intuitive, a black box, and at the same time as ultimately the most influential phase of any cladistic analysis (Pimentel and Riggins, 1987; Bryant, 1989; Pogue and Mickevich, 1990; de Pinna, 1991; Stevens, 1991; Bateman et al., 1992; Smith, 1994; Pleijel, 1995; Wilkinson, 1995; Patterson and Johnson, 1997). Despite the concerns of these authors, primary homology assessment is often perceived as reproducible. In a recent paper, Hawkins et al. (1997) reiterated two points made by a number of these authors: that different interpretations of characters and codings are possible, and that different workers will perceive and define characters in different ways. One reviewer challenged us: did we really think that two people working on the same group would come up with different data sets? The conflicting views regarding the reproducibility of the cladistic character matrix provoke a number of questions. Do the majority of workers consistently follow the same guidelines? Has the theoretical framework informing primary homology assessment been adequately explored? The objective of this study is to classify approaches to primary homology assessment and to quantify the extent to which different approaches are found in the literature, by examining variation in the way characters are defined and coded in a data matrix.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data, which is used to populate the second component, a data warehouse that contains important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: firstly, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories, an important and challenging aspect of P-found given the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.
Abstract:
This article presents findings from a larger single-country comparative study that set out to better understand primary school teachers’ mathematics education-related beliefs in Thailand. By combining the interview and observation data collected in the initial stage of the study with data gathered from the relevant literature, the 8-belief, 22-item ‘Thai Teachers’ Mathematics Education-related Beliefs’ (TTMEB) Scale was developed. The results of the Mann-Whitney U test showed that Thai teachers in the two examined socio-economic regions espouse statistically different beliefs concerning the source and stability of mathematical knowledge, as well as classroom authority. Further, these three beliefs were found to be significantly and positively correlated.
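As a brief illustration of the statistics named here, the sketch below runs a Mann-Whitney U test on hypothetical Likert-scale scores for one belief item across two regions, and a Spearman rank correlation between two belief scores; all values are fabricated for the example and SciPy is assumed.

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)

# Hypothetical Likert-scale responses (1-5) for one TTMEB belief item
# from teachers in two socio-economic regions; values are made up
region_a = rng.integers(2, 6, size=40)
region_b = rng.integers(1, 5, size=40)

u_stat, p_value = mannwhitneyu(region_a, region_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")

# Rank correlation between two belief scores, analogous to the abstract's
# report of significant positive correlations among beliefs
belief_1 = rng.normal(3.5, 0.5, 40)
belief_2 = belief_1 + rng.normal(0, 0.3, 40)
rho, p = spearmanr(belief_1, belief_2)
print(f"rho = {rho:.2f}, p = {p:.4f}")
```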
Abstract:
The effects of different sky conditions on diffuse PAR fraction (ϕ), air temperature (Ta), vapor pressure deficit (vpd) and GPP in a deciduous forest are investigated using eddy covariance observations of CO2 fluxes and radiometer and ceilometer observations of sky and PAR conditions, on hourly and growing-season timescales. Maximum GPP response occurred under moderate to high PAR and ϕ and low vpd. Light-response models using a rectangular hyperbola showed a positive linear relation between ϕ and effective quantum efficiency (α = 0.023ϕ + 0.012, r² = 0.994). Since PAR and ϕ are negatively correlated, there is a tradeoff between the greater use efficiency of diffuse light (and the lower vpd that accompanies it) and the associated decrease in total PAR available for photosynthesis. To a lesser extent, light response was also modified by vpd and Ta; the net effect of these, and their relation with sky conditions, enhanced light response under sky conditions that produced higher ϕ. Six sky conditions were classified from cloud frequency and ϕ data: optically thick clouds, optically thin clouds, mixed sky (partial clouds within the hour), and high, medium and low optical aerosol. The frequency and light response of each sky condition over the growing season were used to predict the effect of changing sky conditions on annual GPP. The net effect of an increasing frequency of thick clouds is to decrease GPP, while changing the frequency of low-aerosol conditions has a negligible effect; increases in the other sky conditions all lead to gains in GPP. Sky conditions that produce intermediate levels of ϕ, such as thin or scattered clouds or higher aerosol concentrations from volcanic eruptions or anthropogenic emissions, will have a positive effect on annual GPP, while an increase in cloud cover will have a negative impact. Because of the ϕ/PAR tradeoff, and because GPP responses to changes in individual sky conditions differ in sign and magnitude, the net response of ecosystem GPP to future sky conditions is non-linear and tends toward moderation of change.
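The reported light-response relation can be made concrete with a short fit. The sketch below assumes the standard rectangular-hyperbola form GPP = α·PAR·GPPmax / (α·PAR + GPPmax), applies the abstract's empirical relation α = 0.023ϕ + 0.012 at one example value of ϕ, and recovers α by curve fitting; the GPPmax value, ϕ, and noise level are assumptions for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Rectangular-hyperbola light-response model, as described in the abstract
def light_response(par, alpha, gpp_max):
    return alpha * par * gpp_max / (alpha * par + gpp_max)

rng = np.random.default_rng(1)
par = rng.uniform(100, 2000, 300)   # synthetic hourly PAR values
phi = 0.6                           # example diffuse PAR fraction (assumed)

# The abstract's empirical relation between quantum efficiency and phi
alpha_true = 0.023 * phi + 0.012
gpp = light_response(par, alpha_true, 30.0) + rng.normal(0, 1.0, par.size)

(alpha_hat, gpp_max_hat), _ = curve_fit(light_response, par, gpp,
                                        p0=[0.02, 20.0])
print(f"fitted alpha = {alpha_hat:.4f} (expected {alpha_true:.4f})")
print(f"fitted GPPmax = {gpp_max_hat:.1f}")
```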
Abstract:
Large changes in the extent of northern subtropical arid regions during the Holocene are attributed to orbitally forced variations in monsoon strength and have been implicated in the regulation of atmospheric trace gas concentrations on millennial timescales. Models that omit biogeophysical feedback, however, are unable to account for the full magnitude of African monsoon amplification and extension during the early to middle Holocene (~9500–5000 years B.P.). A data set describing land-surface conditions 6000 years B.P. on a 1° × 1° grid across northern Africa and the Arabian Peninsula has been prepared from published maps and other sources of palaeoenvironmental data, with the primary aim of providing a realistic lower boundary condition for atmospheric general circulation model experiments similar to those performed in the Palaeoclimate Modelling Intercomparison Project. The data set includes information on the percentage of each grid cell occupied by specific vegetation types (steppe, savanna, xerophytic woods/scrub, tropical deciduous forest, and tropical montane evergreen forest), open water (lakes), and wetlands, plus information on the flow direction of major drainage channels for use in large-scale palaeohydrological modeling.
Abstract:
Smart healthcare is a complex domain for systems integration because of the human and technical factors and heterogeneous data sources involved. As part of a smart city, it is an area in which clinical functions depend on smart multi-system collaboration for effective communication among departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many integration and interoperability challenges, such as information collision, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to analyse the data, semantic, and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We selected an ongoing project at a local hospital for our case study. The project aims to achieve data sharing and interoperability among Radiology Information Systems (RIS), Electronic Patient Record (EPR), and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods were used. The data sources consisted of documentation, including publications and internal working papers, one year of non-participant observation, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists and a secretary. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarises the empirical findings and proposes recommendations for guiding integration in the radiology context.
Abstract:
Seamless phase II/III clinical trials are conducted in two stages, with treatment selection at the first stage. In the first stage, patients are randomized to a control or to one of k > 1 experimental treatments. At the end of this stage, interim data are analysed and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, data may be available at the interim analysis on an early outcome for a larger number of patients than have yet provided the primary endpoint, and these early endpoint data can be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one or the other method depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches: it uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. This method is shown to perform well compared with the two previously described methods for a wide range of true parameter values; in most cases its performance is similar to, and in some cases better than, that of either of the two previously proposed methods.
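A minimal simulation sketch of the interim selection problem follows; the number of arms, sample sizes, effect sizes, and the early/primary endpoint correlation rho are all assumptions for the example, not values from the paper. Each arm's early endpoint is observed for all stage-1 patients, while the primary endpoint is available only for a subset, and the two simple selection rules that the proposed method chooses between are compared.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative stage-1 setup: k experimental arms, each compared to control.
# rho is the assumed correlation between early and primary endpoints.
k, n_stage1, rho = 3, 60, 0.6
true_effects = np.array([0.0, 0.2, 0.5])  # primary-endpoint effects vs control

def simulate_arm(effect, n):
    # Draw correlated (early, primary) outcomes per patient
    z = rng.multivariate_normal([effect, effect],
                                [[1, rho], [rho, 1]], size=n)
    return z[:, 0], z[:, 1]

early_means, primary_means = [], []
n_primary = n_stage1 // 3  # only a third have reached the primary endpoint
for effect in true_effects:
    early, primary = simulate_arm(effect, n_stage1)
    early_means.append(early.mean())
    primary_means.append(primary[:n_primary].mean())

# Two selection rules from the literature: select on the early endpoint
# (all patients) or on the primary endpoint (smaller sample); the proposed
# method would pick between such rules using interim parameter estimates.
select_early = int(np.argmax(early_means))
select_primary = int(np.argmax(primary_means))
print("arm selected on early endpoint:", select_early)
print("arm selected on primary endpoint:", select_primary)
```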
Abstract:
The school has been identified as a key setting in which to promote physical activity. The purpose of this study was to evaluate the effect of a classroom-based activity break on the in-school step counts of primary school children. Data for 90 children (49 boys, 41 girls, 9.3 ± 1.4 years) from three Irish primary schools are presented. In each school, one class was randomly assigned as the intervention group and another as the control. Children's step counts were measured for five consecutive days during school hours at baseline and at follow-up. Teachers of the intervention classes led a 10 min activity break in the classroom each day (Bizzy Break!). Mean daily in-school steps for the intervention group at baseline and follow-up were 5351 and 5054; corresponding values for the control group were 5469 and 4246. There was a significant difference between groups in the change in daily steps from baseline to follow-up (p < .05). There was no evidence that girls and boys responded differently to the intervention (p > .05). Children participating in a daily 10 min classroom-based activity break undertake more physical activity during school hours than controls.
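For illustration only, the between-group comparison of change scores can be reproduced in outline as below. The per-child step counts are simulated around the reported group means, and the group sizes, standard deviation, and choice of Welch's t-test are assumptions, since the paper's exact analysis is not specified in the abstract.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)

# Simulated per-child daily step counts centred on the reported group means;
# group sizes (45 each) and SD (1200 steps) are assumptions for the sketch
sd = 1200
interv_base = rng.normal(5351, sd, 45)
interv_follow = rng.normal(5054, sd, 45)
control_base = rng.normal(5469, sd, 45)
control_follow = rng.normal(4246, sd, 45)

# Change in daily in-school steps from baseline to follow-up per group
interv_change = interv_follow - interv_base    # about -297 steps on average
control_change = control_follow - control_base # about -1223 steps on average

# Welch's t-test on the change scores (the paper's exact test is not stated)
t_stat, p_value = ttest_ind(interv_change, control_change, equal_var=False)
print(f"mean change (intervention): {interv_change.mean():.0f}")
print(f"mean change (control): {control_change.mean():.0f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```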
Abstract:
Land cover plays a key role in global to regional monitoring and modeling because it both affects and is affected by climate change, and it has thus become one of the essential variables for climate change studies. National and international organizations require timely and accurate land cover information for reporting and management actions. The North American Land Change Monitoring System (NALCMS) is an international cooperation of organizations and entities in Canada, the United States, and Mexico to map land cover change across North America's changing environment. This paper presents the methodology used to derive the land cover map of Mexico for the year 2005, which was integrated into the NALCMS continental map. The complexity of the Mexican landscape required a specific approach to reflect land cover heterogeneity, based on a time series of 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data and an extensive sample database. To estimate the proportion of each land cover class for every pixel, several decision tree classifications were combined to obtain class membership maps, which were finally converted to a discrete map accompanied by a confidence estimate. The map yielded an overall accuracy of 82.5% (Kappa of 0.79) for pixels with at least 50% map confidence (71.3% of the data). An additional assessment with 780 randomly stratified samples, using primary and alternative calls in the reference data to account for ambiguity, indicated 83.4% overall accuracy (Kappa of 0.80). Comparison between the land cover maps of 2005 and 2006 showed high agreement: 83.6% for all pixels and 92.6% for pixels with a map confidence of more than 50%. Further wall-to-wall comparisons with related land cover maps yielded 56.6% agreement with the MODIS land cover product and 49.5% congruence with Globcover.
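The membership-map step can be sketched as follows, with a random forest standing in for the combined decision tree classifications; the feature dimensions, class definitions, and data are placeholders rather than the NALCMS inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Placeholder stand-ins for per-pixel MODIS time-series features; the real
# NALCMS workflow uses 250 m MODIS composites and a large reference sample base
n_pixels, n_features = 2000, 12
X = rng.normal(size=(n_pixels, n_features))

# Labels derived from the features so the toy classifier has signal to learn
score = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, n_pixels)
y = np.digitize(score, [-1.5, -0.5, 0.5, 1.5])  # 5 hypothetical classes

# A random forest stands in for the combined decision tree classifications;
# its per-class vote fractions serve as class membership estimates per pixel
model = RandomForestClassifier(n_estimators=25, max_depth=8,
                               random_state=0).fit(X, y)
membership = model.predict_proba(X)

# Convert the membership maps to a discrete map plus a per-pixel confidence
discrete_map = membership.argmax(axis=1)
confidence = membership.max(axis=1)

# Report accuracy only for pixels above a 50% confidence threshold,
# mirroring the abstract's conditional accuracy assessment
mask = confidence >= 0.5
accuracy = (discrete_map[mask] == y[mask]).mean()
print(f"{mask.mean():.1%} of pixels above threshold, accuracy {accuracy:.1%}")
```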
Abstract:
Highly heterogeneous mountain snow distributions strongly affect soil moisture patterns; local ecology; and, ultimately, the timing, magnitude, and chemistry of stream runoff. Capturing these vital heterogeneities in a physically based distributed snow model requires appropriately scaled model structures. This work looks at how model scale—particularly the resolutions at which the forcing processes are represented—affects simulated snow distributions and melt. The research area is in the Reynolds Creek Experimental Watershed in southwestern Idaho. In this region, where there is a negative correlation between snow accumulation and melt rates, overall scale degradation pushed simulated melt to earlier in the season. The processes mainly responsible for snow distribution heterogeneity in this region—wind speed, wind-affected snow accumulations, thermal radiation, and solar radiation—were also independently rescaled to test process-specific spatiotemporal sensitivities. It was found that in order to accurately simulate snowmelt in this catchment, the snow cover needed to be resolved to 100 m. Wind and wind-affected precipitation—the primary influence on snow distribution—required similar resolution. Thermal radiation scaled with the vegetation structure (~100 m), while solar radiation was adequately modeled with 100–250-m resolution. Spatiotemporal sensitivities to model scale were found that allowed for further reductions in computational costs through the winter months with limited losses in accuracy. It was also shown that these modeling-based scale breaks could be associated with physiographic and vegetation structures to aid a priori modeling decisions.
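The scale-sensitivity experiment can be illustrated with a toy rescaling test: a synthetic forcing field is block-averaged to coarser effective resolutions and passed through a non-linear response, showing how degrading forcing resolution biases the aggregate result. The field, the smoothing used to give it spatial structure, and the melt proxy are assumptions for the sketch, not the Reynolds Creek model setup.

```python
import numpy as np

rng = np.random.default_rng(11)

def block_average(field, factor):
    """Average factor x factor blocks of a square 2-D field."""
    n = field.shape[0]
    return field.reshape(n // factor, factor,
                         n // factor, factor).mean(axis=(1, 3))

# Synthetic wind field on a nominal 10 m grid (250 x 250 cells); the
# smoothing loop imposes spatial correlation so the field has structure
# near the ~100 m scale discussed above
wind = rng.normal(5, 2, (250, 250))
for _ in range(10):
    wind = (wind + np.roll(wind, 1, 0) + np.roll(wind, 1, 1)) / 3

# Toy melt proxy that responds non-linearly to wind speed, so averaging
# the forcing before applying it introduces a bias (Jensen's inequality)
def melt(w):
    return np.clip(w, 0, None) ** 1.5

reference = melt(wind).mean()  # melt computed at full 10 m resolution
for factor in (10, 25):        # 100 m and 250 m effective resolutions
    coarse = melt(block_average(wind, factor)).mean()
    print(f"{factor * 10:>3d} m forcing: melt bias "
          f"{100 * (coarse - reference) / reference:+.1f}%")
```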