180 results for Latter lanthanides and yttrium


Relevance:

30.00%

Publisher:

Abstract:

If the land sector is to make significant contributions to mitigating anthropogenic greenhouse gas (GHG) emissions in coming decades, it must do so while concurrently expanding production of food and fiber. In our view, mathematical modeling will be required to provide scientific guidance to meet this challenge. In order to be useful in GHG mitigation policy measures, models must simultaneously meet scientific, software engineering, and human capacity requirements. They can be used to understand GHG fluxes, to evaluate proposed GHG mitigation actions, and to predict and monitor the effects of specific actions; the latter applications require a change in mindset that has parallels with the shift from research modeling to decision support. We compare and contrast six agro-ecosystem models (FullCAM, DayCent, DNDC, APSIM, WNMM, and AgMod), chosen because they are used in Australian agriculture and forestry. Underlying structural similarities in the representations of carbon flows through plants and soils in these models are complemented by a diverse range of emphases and approaches to the subprocesses within the agro-ecosystem. None of these agro-ecosystem models handles all land sector GHG fluxes, and considerable model-based uncertainty exists for soil C fluxes and enteric methane emissions. The models also show diverse approaches to the initialisation of model simulations, software implementation, distribution, licensing, and software quality assurance; each of these will differentially affect their usefulness for policy-driven GHG mitigation prediction and monitoring. Specific requirements imposed on the use of models by Australian mitigation policy settings are discussed, and areas for further scientific development of agro-ecosystem models for use in GHG mitigation policy are proposed.
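
By way of illustration, the structural similarity noted above can be thought of as first-order bookkeeping of carbon transfers between pools. The following is a minimal sketch of that idea only; the pool names, rate constants, and annual time step are illustrative assumptions and are not taken from any of the six models compared.

```python
# Illustrative first-order carbon-pool transfers (not any specific model).
# Pools are in t C/ha; npp and the rate constants are hypothetical.

def step_pools(plant_c, litter_c, soil_c, npp=5.0, k_fall=0.3, k_decomp=0.2, k_resp=0.6):
    """Advance plant, litter, and soil carbon pools by one annual step."""
    litter_fall = k_fall * plant_c          # plant C transferred to litter
    decomposition = k_decomp * litter_c     # litter C leaving the litter pool
    respiration = k_resp * decomposition    # fraction respired as CO2 during transfer
    plant_c += npp - litter_fall
    litter_c += litter_fall - decomposition
    soil_c += decomposition - respiration   # remainder is humified into soil C
    return plant_c, litter_c, soil_c

pools = (20.0, 5.0, 60.0)
for year in range(5):
    pools = step_pools(*pools)
    print(f"year {year + 1}: plant={pools[0]:.1f}, litter={pools[1]:.1f}, soil={pools[2]:.1f} t C/ha")
```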

Relevance:

30.00%

Publisher:

Abstract:

Population size is crucial when estimating population-normalized drug consumption (PNDC) from wastewater-based drug epidemiology (WBDE). Three conceptually different population estimates can be used: de jure (common census, residence), de facto (all persons within a sewer catchment), and chemical loads (contributors to the sampled wastewater). De facto and chemical loads will be the same where all households contribute to a central sewer system without wastewater loss. This study explored the feasibility of determining a de facto population and its effect on estimating PNDC in an urban community over an extended period. Drugs and other chemicals were analyzed in 311 daily composite wastewater samples. The daily estimated de facto population (using chemical loads) was on average 32% higher than the de jure population. Consequently, using the latter would systematically overestimate PNDC by 22%. However, the relative day-to-day pattern of drug consumption was similar regardless of the type of normalization, as daily illicit drug loads appeared to vary substantially more than the population. Using the chemical loads population, we objectively quantified the total methodological uncertainty of PNDC and reduced it by a factor of 2. Our study illustrated the potential benefits of using a chemical loads population for obtaining more robust PNDC data in WBDE.
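
As a concrete illustration of the normalization step at issue, here is a minimal sketch; the drug load, population figures, and the per-1000-inhabitants scaling convention are hypothetical choices for illustration, not values taken from the study.

```python
# Minimal sketch of population-normalized drug consumption (PNDC);
# all numbers below are hypothetical.

def pndc(daily_load_mg: float, population: int) -> float:
    """Normalize a daily wastewater drug load (mg/day) to mg/day per 1000 inhabitants."""
    return daily_load_mg / population * 1000.0

daily_load_mg = 52_000.0        # hypothetical measured load of one drug residue
de_jure_population = 100_000    # census-based population of the catchment
de_facto_population = 132_000   # e.g. 32% higher, estimated from chemical loads

print(pndc(daily_load_mg, de_jure_population))   # higher, de jure-normalized estimate
print(pndc(daily_load_mg, de_facto_population))  # lower, de facto-normalized estimate
```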

Relevance:

30.00%

Publisher:

Abstract:

Objective: To compare measurements of sleeping metabolic rate (SMR) in infancy with predicted basal metabolic rate (BMR) estimated by the equations of Schofield. Methods: Some 104 serial measurements of SMR by indirect calorimetry were performed in 43 healthy infants at 1.5, 3, 6, 9 and 12 months of age. Predicted BMR was calculated using the weight-only (BMR-wo) and weight-and-height (BMR-wh) equations of Schofield for 0-3-y-olds. Measured SMR values were compared with both predicted values by means of the Bland-Altman statistical test. Results: The mean measured SMR was 1.48 MJ/day. The mean predicted BMR values were 1.66 and 1.47 MJ/day for the weight-only and weight-and-height equations, respectively. The Bland-Altman analysis showed that the BMR-wo equation on average overestimated SMR by 0.18 MJ/day (11%) and the BMR-wh equation underestimated SMR by 0.01 MJ/day (1%). However, the 95% limits of agreement were wide: -0.64 to +0.28 MJ/day (28%) for the former equation and -0.39 to +0.41 MJ/day (27%) for the latter. Moreover, there was a significant correlation between the mean of the measured and predicted metabolic rates and the difference between them. Conclusions: The wide variation in the difference between measured and predicted metabolic rate, and a bias that probably varies with age, indicate a need to measure actual metabolic rate for individual clinical care in this age group.
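
For reference, a Bland-Altman comparison of this kind can be sketched as follows; the SMR and BMR arrays are hypothetical placeholders, not the study's measurements.

```python
# Minimal Bland-Altman sketch for comparing measured SMR with predicted BMR (MJ/day);
# values are hypothetical.
import numpy as np

measured_smr = np.array([1.40, 1.55, 1.32, 1.60, 1.48])   # hypothetical indirect calorimetry
predicted_bmr = np.array([1.58, 1.70, 1.45, 1.75, 1.66])   # hypothetical Schofield predictions

diff = predicted_bmr - measured_smr            # per-infant bias of the prediction
bias = diff.mean()                             # mean over/underestimation
loa_low = bias - 1.96 * diff.std(ddof=1)       # 95% limits of agreement
loa_high = bias + 1.96 * diff.std(ddof=1)

print(f"bias = {bias:.2f} MJ/day, limits of agreement = {loa_low:.2f} to {loa_high:.2f}")
```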

Relevance:

30.00%

Publisher:

Abstract:

This paper is the second in a two-part series that maps continuities and ruptures in conceptions of power and traces their effects in educational discourse on 'the child'. It delineates two post-Newtonian intellectual trajectories through which concepts of 'power' arrived at the theorization of 'the child': the paradoxical bio-physical inscriptions of human-ness that accompanied mechanistic worldviews and the explanations for social motion in political philosophy. The intersection of pedagogical theories with 'the child' and 'power' is further traced from the latter 1800s to the present, where a Foucaultian analytics of power-as-effects is reconsidered in regard to histories of motion. The analysis culminates in an examination of post-Newtonian (dis)continuities in the theorization of power, suggesting some productive paradoxes that inhabit turn-of-the-21st-century conceptualizations of the social.

Relevance:

30.00%

Publisher:

Abstract:

Jean-Jacques Rousseau’s Émile, ou de l’Éducation (Émile, or On Education) has been described by Rousseau scholars in late twentieth-century English-language philosophy as an educational classic. In 1995 Robert Wokler argued that, together with Montesquieu, Hume, Smith, and Kant among his contemporaries, Rousseau had exerted the most profound influence on modern European intellectual history, “perhaps even surpassing anyone else of his day.” For Wokler, Émile is “the most significant work on education after Plato’s Republic.” Earlier, in 1977, Allan Bloom questioned why Émile had not been the subject of analysis in philosophy relative to the rest of Rousseau’s work, for “Émile is truly a great book, one that lays out for the first time and with the greatest clarity and vitality the modern way of posing the problems of psychology.” Bloom also saw Émile as “one of those rare total or synoptic books... a book comparable to Plato’s Republic, which it is meant to rival or supersede” and argued that Rousseau himself was at the source of a new tradition: “Whatever else Rousseau may have accomplished, he presented alternatives available to man more comprehensively and profoundly and articulated them in the form which has dominated discussion since his time.” Even Peter Gay’s earlier commentary on John Locke and education, in 1964, could not escape this central positioning of the text. The significance of Locke’s Some Thoughts Concerning Education is weighed in relation to its impact on Rousseau’s Émile. For Gay, the latter is “probably the most influential revolutionary tract on education that we have.”

Relevance:

30.00%

Publisher:

Abstract:

A new intellectual epoch has generated new enterprises to suit changed beliefs and circumstances. A widespread sentiment in both formal historiography and curriculum studies reduces the “new” to the question of how knowledge is recognized as such, how it is gained, and how it is represented in narrative form. Whether the nature of history and conceptions of knowledge are, or ought to be, central considerations in curriculum studies, reducible to purposes or elevated as present-orientated, requires rethinking. This paper operates as an incitement to discourse that disrupts the protection and isolation of primary categories in the field whose troubling is overdue. In particular, the paper moves through several layers that highlight the lack of settlement regarding the endowment of objects for study with the status of the scientific. It traces how some “invisible” things have been included within the purview of curriculum history as objects of study and not others. The focus is the making of things deemed invisible into scientific objects (or not), and the specific site of analysis is the work of William James (1842-1910). James studied intensely both the child mind and the ghost: the former became scientized and legitimated for further study, the latter abjected. This contrast opens key points for reconsideration regarding conditions of proof, validation criteria, and subject matters, and points to opportunities to challenge some well-rehearsed foreclosures within progressive politics and education.

Relevance:

30.00%

Publisher:

Abstract:

Results of a study designed to investigate the possibility of using the Si(111)-Ge(5×5) surface reconstruction as a template for In cluster growth are described. As with Si(111)-7×7, the In adatoms preferentially adsorb in the faulted half-unit cell, but on Si(111)-Ge(5×5) a richer variety of cluster geometries is found. In addition to the clusters that occupy the faulted half-unit cell, clusters that span two and four half-unit cells are found. The latter have a triangular shape spanning one unfaulted and three nearest-neighbor faulted half-unit cells. Triangular clusters in the opposite orientation were not found. Many of the faulted half-unit cells have a streaked appearance consistent with adatom mobility.

Relevance:

30.00%

Publisher:

Abstract:

There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes it impossible to assess trends or determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized in four steps:

(i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;

(ii) output the predicted flow rates, as in (i), at the concentration sampling times, if the corresponding flow rates were not collected;

(iii) establish a predictive model for the concentration data that incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and

(iv) obtain the sum of the products of the predicted flow and the predicted concentration over the regular time intervals as an estimate of the load.

The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall), and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors, which is the result of intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional averaging and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
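
A minimal sketch of these four steps is given below; the 10-minute flow series and the simple power-law rating curve are illustrative stand-ins for the time series and generalized regression models the paper actually uses.

```python
# Minimal sketch of the four-step load estimation outlined above;
# all series and coefficients are hypothetical.
import numpy as np

dt_seconds = 600                                   # 10-minute regular interval (step i)
flow = np.array([12.0, 35.0, 80.0, 60.0, 25.0])    # hypothetical predicted flow, m^3/s (steps i/ii)

# Step (iii): predict concentration (mg/L) from flow with an illustrative rating curve.
a, b = 5.0, 0.6                                    # hypothetical fitted coefficients
concentration = a * flow ** b

# Step (iv): sum flow x concentration over the regular intervals.
# m^3/s * mg/L * s = g, so divide by 1e6 to report tonnes.
load_tonnes = np.sum(flow * concentration * dt_seconds) / 1e6
print(f"estimated load: {load_tonnes:.3f} t")
```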

Relevance:

30.00%

Publisher:

Abstract:

An investigation to characterize the causes of Pinna nobilis population structure in Moraira Bay (Western Mediterranean) was carried out. Individuals in two areas of the same Posidonia meadow, located at different depths (A1, -13 m and A2, -6 m), were inventoried, tagged, their positions accurately recorded, and monitored from July 1997 to July 2002. In each area, different aspects of population demography were studied (i.e. spatial distribution, size structure, displacement evidence, mortality, growth and shell orientation). A comparison between the two groups of individuals revealed important differences between them. In A1, the individuals were more aggregated and mean and maximum size were higher (A1, 10.3 and A2, 6 individuals/100 m²; A1, x = 47.2 +/- 9.9 cm; A2, x = 29.8 +/- 7.4 cm, P < 0.001). In A2, growth rate and mortality were higher, the latter concentrated on the largest individuals, in contrast to A1, where the smallest individuals had the highest mortality rate [A1, L = 56.03(1 - e^(-0.17t)); A2, L = 37.59(1 - e^(-0.40t)), P < 0.001; mean annual mortality A1: 32 dead individuals out of 135, 23.7%, and A2: 16 dead individuals out of 36, 44.4%; total mortality coefficients (z): z(A1(-30)) = 0.28, z(A1(31-45)) = 0.05, z(A1(46-)) = 0.08; z(A2(-30)) = 0.15, z(A2(31-45)) = 0.25]. A common N-S shell orientation, coincident with the maximum shore exposure, was observed in A2. Spatial distribution in both areas did not show enough evidence to reject a random distribution of individuals, despite the greater aggregation in the deeper area (A1) (A1, chi² = 0.41, df = 3, P > 0.5; A2, chi² = 0.98, df = 2, 0.3 < P < 0.5). The results demonstrate that the depth-related size segregation usually shown by P. nobilis is mainly caused by differences in mortality and growth among individuals located at different depths, rather than by the active displacement of individuals previously reported in the literature. Furthermore, dwarf individuals are observed at shallower depths and, as a consequence, the relationship between size and age is not comparable even among groups of individuals inhabiting the same meadow at different depths. The ultimate causes of the differences in mortality and growth are also discussed.
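
To make the growth contrast concrete, here is a minimal sketch that evaluates the two reported von Bertalanffy curves; it assumes age t is in years and t0 = 0, neither of which is stated in the abstract.

```python
# Minimal sketch of the reported von Bertalanffy growth curves for the two areas,
# assuming age t in years and t0 = 0 (not stated in the abstract).
import numpy as np

def vbgf(t, l_inf, k):
    """Shell length (cm) at age t under L = L_inf * (1 - exp(-k * t))."""
    return l_inf * (1 - np.exp(-k * t))

ages = np.arange(0, 11)
length_a1 = vbgf(ages, l_inf=56.03, k=0.17)   # deeper area: larger asymptotic size
length_a2 = vbgf(ages, l_inf=37.59, k=0.40)   # shallower area: faster early growth

for t, l1, l2 in zip(ages, length_a1, length_a2):
    print(f"age {t:2d} y: A1 = {l1:5.1f} cm, A2 = {l2:5.1f} cm")
```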

Relevance:

30.00%

Publisher:

Abstract:

The possibility of commercially exploiting plant, animal and human genetic resources unlocked by biotechnology has given rise to a wide range of cultural, environmental, ethical and economic conflicts. While supporters describe this activity as bioprospecting, critics refer to it as biopiracy. According to the latter view, international legal agreements and treaties have disregarded opposition and legalized the possibility of appropriating genetic resources and their derivative products through the use of patents. The legal framework that permits the appropriation of natural genetic products in Colombia also criminalizes aspects of traditional ways of life and enables a legally approved but socially harmful land-grabbing process. The article describes these processes and their impact in terms of the inversion of justice and the erosion of environmental sustainability.

Relevance:

30.00%

Publisher:

Abstract:

Public-Private Partnerships (PPP) are established globally as an important mode of procurement, and the features of PPP, not least the transfer of risk, appeal to governments, particularly in the current economic climate. Many other advantages of PPP are claimed to outweigh its costs and to afford Value for Money (VfM) relative to traditionally financed projects, or non-PPP. That said, we lack comparative whole-life empirical studies of VfM in PPP and non-PPP. Whilst we await such studies, the pace and trajectory of PPP seem set to continue, and so in the meantime the virtues of seeking to improve PPP appear incontrovertible. The decision about which projects, or parts of projects, to offer to the market as a PPP, and the decision concerning the allocation or sharing of risks as part of engaging the PPP consortium, are among the most fundamental decisions that determine whether PPP delivers VfM. The focus of this paper is on the latter decision concerning governments’ attitudes towards risk and, more specifically, on the effect of this decision on the nature of the emergent PPP consortium, or PPP model, including its economic behavior and outcomes. The paper presents an exploration of the extent to which the seemingly incompatible alternatives of risk allocation and risk sharing, represented respectively by the orthodox/conventional PPP model and the heterodox/alliance PPP model, can be reconciled, along with suggestions for new research directions to inform this reconciliation. In so doing, an important step is taken towards charting a path by which governments can harness the relative strengths of both kinds of PPP model.

Relevance:

30.00%

Publisher:

Abstract:

This paper studies the problem of selecting users in an online social network for targeted advertising so as to maximize the adoption of a given product. In previous work, two families of models have been considered to address this problem: direct targeting and network-based targeting. The former approach targets users with the highest propensity to adopt the product, while the latter approach targets users with the highest influence potential – that is, users whose adoption is most likely to be followed by subsequent adoptions by peers. This paper proposes a hybrid approach that combines a notion of propensity and a notion of influence into a single utility function. We show that targeting a fixed number of high-utility users results in more adoptions than targeting either highly influential users or users with high propensity.
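
A minimal sketch of this hybrid selection idea follows; the linear weighting, the score values, and the parameter alpha are illustrative assumptions rather than the paper's actual utility function.

```python
# Minimal sketch of hybrid targeting: combine propensity and influence into one
# utility and target the top-k users; the weighting and scores are illustrative.

def top_k_by_utility(users, k, alpha=0.5):
    """Rank users by alpha * propensity + (1 - alpha) * influence and return the top k ids."""
    ranked = sorted(
        users,
        key=lambda u: alpha * u["propensity"] + (1 - alpha) * u["influence"],
        reverse=True,
    )
    return [u["id"] for u in ranked[:k]]

users = [
    {"id": "a", "propensity": 0.9, "influence": 0.10},  # likely adopter, little reach
    {"id": "b", "propensity": 0.2, "influence": 0.95},  # influential, unlikely adopter
    {"id": "c", "propensity": 0.6, "influence": 0.60},  # balanced profile
]
print(top_k_by_utility(users, k=2))
```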

Relevance:

30.00%

Publisher:

Abstract:

The detection and replication of schizophrenia risk loci can require substantial sample sizes, which has prompted various collaborative efforts for combining multiple samples. However, pooled samples may comprise sub-samples with substantial population genetic differences, including allele frequency differences. We investigated the impact of population differences via linkage reanalysis of Molecular Genetics of Schizophrenia 1 (MGS1) affected sibling-pair data, comprising two samples of distinct ancestral origin: European (EA: 263 pedigrees) and African-American (AA: 146 pedigrees). To exploit the linkage information contained within these distinct continental samples, we performed separate analyses of the individual samples, allowing for within-sample locus heterogeneity, and of the pooled sample, allowing for both within-sample and between-sample heterogeneity. Significance levels, corrected for the multiple tests, were determined empirically. For all suggestive peaks, stronger linkage evidence was obtained in either the EA or AA sample than in the combined sample, regardless of how heterogeneity was modeled for the latter. Notably, we report genomewide significant linkage of schizophrenia to 8p23.3 and evidence for a second, independent susceptibility locus, reaching suggestive linkage, 29 cM away on 8p21.3. We also detected suggestive linkage on chromosomes 5p13.3 and 7q36.2. Many regions showed pronounced differences in the extent of linkage between the EA and AA samples. This reanalysis highlights the potential impact of population differences upon linkage evidence in pooled data and demonstrates a useful approach for the analysis of samples drawn from distinct continental groups.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To determine the distribution of peripheral refraction, including astigmatism, in 7- and 14-year-old Chinese children. Methods: Cycloplegic central and horizontal peripheral refraction (15° and 30° in the temporal and nasal visual fields) was measured in 2134 7-year-old and 1780 14-year-old children. Results: The 7- and 14-year-old groups included, respectively, 9 and 594 children with moderate and high myopia (≤−3.0 D), 259 and 831 with low myopia (−2.99 to −0.5 D), 1207 and 305 with emmetropia (−0.49 to +1.0 D), and 659 and 50 with hyperopia (>1.0 D). Myopic children had relative peripheral hyperopia, while hyperopic and emmetropic children had relative peripheral myopia, with greater changes in relative peripheral refraction occurring in the nasal than in the temporal visual field. The older group had greater relative peripheral hyperopia and higher peripheral J180. Both age groups showed positive slopes of J45 across the visual field, with greater slopes in the older group. Conclusions: Myopic children in mainland China have relative peripheral hyperopia while hyperopic and emmetropic children have relative peripheral myopia. Significant differences exist between 7- and 14-year-old children, with the latter showing more relative peripheral hyperopia, a greater rate of change in J45 across the visual field, and higher peripheral J180.
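
For context, J180 and J45 are the standard power-vector components of astigmatism; a minimal sketch of that decomposition is given below, using illustrative sphere/cylinder/axis values rather than study data, and treating J180 as the 0–180° component often written J0.

```python
# Minimal sketch of the standard power-vector decomposition used for astigmatism
# components such as J180 (0-180 deg, often written J0) and J45 (oblique);
# the refraction values below are illustrative only.
import math

def power_vectors(sphere, cylinder, axis_deg):
    """Convert a sphero-cylindrical refraction to (M, J0, J45) in dioptres."""
    axis = math.radians(axis_deg)
    m = sphere + cylinder / 2.0                    # spherical equivalent
    j0 = -(cylinder / 2.0) * math.cos(2 * axis)    # 0-180 deg astigmatic component (J180)
    j45 = -(cylinder / 2.0) * math.sin(2 * axis)   # oblique astigmatic component
    return m, j0, j45

print(power_vectors(sphere=-3.00, cylinder=-1.00, axis_deg=180))
```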

Relevance:

30.00%

Publisher:

Abstract:

Background: Excessive speed is a primary contributing factor to young novice road trauma, including intentional and unintentional travel at speeds above posted limits or too fast for conditions. The objective of this research was to conduct a systematic review of recent investigations into novice drivers’ speed selection, with particular attention to the applications and limitations of theory and methodology. Method: Systematic searches of peer-reviewed and grey literature were conducted during September 2014. Abstract reviews identified 71 references potentially meeting the selection criteria: investigations since the year 2000 into factors that influence (directly or indirectly) the actual speed (i.e., behaviour or performance) of young (age <25 years) and/or novice (recently licensed) drivers. Results: Full-paper reviews resulted in 30 final references: 15 focused on intentional speeding and 15 on broader speed selection. Both sets identified a range of individual (e.g., beliefs, personality) and social (e.g., peer, adult) influences, were predominantly theory-driven, and applied cross-sectional designs. Intentional speeding investigations largely utilised self-reports, while other investigations more often included actual driving (simulated or ‘real world’). The latter also identified cognitive workload and external environment influences, as well as targeted interventions. Discussion and implications: Applications of theory have shifted the novice speed-related literature beyond a simplistic focus on intentional speeding as human error. The potential emerged to develop a ‘grand theory’ of intentional speeding and to fill gaps in understanding broader speed selection influences. This includes the need for future investigations of vehicle-related and physical-environment-related influences, and for methodologies that move beyond cross-sectional designs and rely less on self-reports.