310 results for estimate


Relevance:

10.00%

Publisher:

Abstract:

The depth of focus (DOF) can be defined as the variation in image distance of a lens or an optical system that can be tolerated without incurring an objectionable lack of sharpness of focus. The DOF of the human eye serves as a mechanism of blur tolerance: as long as the target image remains within the depth of focus in image space, the eye will still perceive the image as clear. A large DOF is especially important for patients with partial or complete loss of accommodation (presbyopia), since it helps them obtain an acceptable retinal image when viewing a target moving through a range of near to intermediate distances. The aim of this research was to investigate the DOF of the human eye and its association with the natural wavefront aberrations, and how higher order aberrations (HOAs) can be used to expand the DOF, in particular by inducing spherical aberrations (Z_4^0 and Z_6^0).

The depth of focus of the human eye can be measured using a variety of subjective and objective methods. Subjective measurements based on a Badal optical system, through which the retinal image size can be kept constant, have been widely adopted; in such measurements, the subject's tested eye is normally cyclopleged. Objective methods without the need for cycloplegia are also used, in which the eye's accommodative response is continuously monitored. Generally, DOF values measured by subjective methods are slightly larger than those measured objectively. In recent years, methods have also been developed to estimate the DOF from retinal image quality metrics (IQMs) derived from the ocular wavefront aberrations. In such methods, the DOF is defined as the range of defocus error that degrades the retinal image quality, calculated from the IQMs, to a certain fraction of its maximum value.

In this study, the effect of different amounts of HOAs on the DOF was theoretically evaluated by modelling and comparing the DOF of subjects from four clinical groups: young emmetropes (20 subjects), young myopes (19 subjects), presbyopes (32 subjects) and keratoconics (35 subjects). A novel IQM-based through-focus algorithm was developed to theoretically predict the DOF of subjects with their natural HOAs. Additional primary spherical aberration (Z_4^0) was also induced in the wavefronts of myopes and presbyopes to simulate the effect of myopic refractive correction (e.g. LASIK) and presbyopic correction (e.g. progressive power IOLs) on the subject's DOF. Larger amounts of HOAs were found to lead to greater values of predicted DOF. The introduction of primary spherical aberration was found to provide a moderate increase in DOF while slightly degrading image quality. The predicted DOF was also affected by the choice of IQM and the threshold level adopted.

We then investigated the influence of the chosen IQM threshold level on the predicted DOF, and how it relates to the subjectively measured DOF. The subjective DOF was measured in a group of 17 normal subjects, and the through-focus visual Strehl ratio based on the optical transfer function (VSOTF), derived from their wavefront aberrations, was used as the IQM to estimate the DOF. The results allowed the subjective DOF to be compared with the estimated DOF and a threshold level for DOF estimation to be determined. A significant correlation was found between each subject's estimated threshold level and HOA RMS (Pearson's r = 0.88, p < 0.001).
This linear correlation can be used to estimate the threshold level for each individual subject, leading to a method for estimating an individual's DOF from a single measurement of their wavefront aberrations. A subsequent study investigated the DOF of keratoconic subjects. Significant increases in the level of HOAs, including spherical aberration, coma and trefoil, can be observed in keratoconic eyes, so this population provides an opportunity to study the influence of these HOAs on DOF. It was also expected that the asymmetric aberrations (coma and trefoil) in the keratoconic eye could interact with defocus to cause regional blur of the target. A dual-Badal-channel optical system with a star-pattern target was used to measure the subjective DOF in 10 keratoconic eyes, and the results were compared with those from a group of 10 normal subjects. The DOF measured in keratoconic eyes was significantly larger than that in normal eyes; however, there was no strong correlation between the large amount of HOA RMS and the DOF in keratoconic eyes. Among all HOA terms, spherical aberration was the only one found to significantly increase the DOF in the studied keratoconic subjects.

Through the first three studies, a comprehensive understanding of the DOF and its association with the HOAs of the human eye was achieved. An adaptive optics (AO) system was then designed and constructed, capable of measuring and altering the wavefront aberrations in the subject's eye and measuring the resulting DOF under different combinations of HOAs. Using the AO system, we investigated the concept of extending the DOF through optimized combinations of Z_4^0 and Z_6^0. Systematic introduction of targeted amounts of both Z_4^0 and Z_6^0 was found to significantly improve the DOF of healthy subjects, and wavefront combinations of Z_4^0 and Z_6^0 with opposite signs expanded the DOF further than Z_4^0 or Z_6^0 alone. The optimal wavefront combinations for expanding the DOF were estimated using the ratio of the increase in DOF to the loss of retinal image quality defined by the VSOTF. In the experiment, the optimal combinations of Z_4^0 and Z_6^0 were found to provide a better balance between DOF expansion and relatively small decreases in visual acuity. The optimal combinations of Z_4^0 and Z_6^0 therefore provide a more efficient method of expanding the DOF than Z_4^0 or Z_6^0 alone.

This PhD research has shown that there is a positive correlation between the DOF and the eye's wavefront aberrations: more aberrated eyes generally have a larger DOF. The association between the DOF and the natural HOAs in normal subjects can be quantified, which allows the DOF to be estimated directly from the ocular wavefront aberrations. Among the Zernike HOA terms, the spherical aberrations (Z_4^0 and Z_6^0) were found to improve the DOF. Certain combinations of Z_4^0 and Z_6^0 provide a more effective method of expanding the DOF than Z_4^0 or Z_6^0 alone, and this could be useful in the optimal design of presbyopic optical corrections such as multifocal contact lenses, intraocular lenses and laser corneal surgeries.
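
The threshold-based DOF definition used throughout this work (the defocus range over which an image quality metric such as the VSOTF stays above a chosen fraction of its peak) can be illustrated with a short sketch. It assumes a precomputed through-focus IQM curve; the function and the synthetic curve below are illustrative, not taken from the thesis.

```python
import numpy as np

def dof_from_through_focus(defocus_d, iqm, threshold=0.5):
    """Estimate depth of focus (DOF) as the contiguous defocus range over which
    an image-quality metric (e.g. VSOTF) stays above `threshold` times its peak.

    defocus_d : array of defocus values in dioptres (ascending)
    iqm       : array of image-quality values at each defocus (e.g. VSOTF)
    threshold : fraction of the peak IQM value defining acceptable quality
    """
    iqm = np.asarray(iqm, dtype=float)
    cutoff = threshold * iqm.max()
    peak = int(np.argmax(iqm))

    # Walk outwards from the peak until the metric drops below the cutoff,
    # so only the contiguous in-focus region counts towards the DOF.
    lo = peak
    while lo > 0 and iqm[lo - 1] >= cutoff:
        lo -= 1
    hi = peak
    while hi < len(iqm) - 1 and iqm[hi + 1] >= cutoff:
        hi += 1
    return defocus_d[hi] - defocus_d[lo]

# Toy example: a synthetic through-focus curve (not real VSOTF data).
defocus = np.linspace(-2.0, 2.0, 401)
curve = np.exp(-(defocus / 0.6) ** 2)          # stand-in for a VSOTF profile
print(dof_from_through_focus(defocus, curve))   # ~1.0 D for this synthetic curve
```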

Relevance:

10.00%

Publisher:

Abstract:

The construction industry should be a priority for all governments because it impacts economically and socially on all citizens. Sector turnover in industrialised economies typically averages 8-12% of GDP. Further, construction is critical to economic growth: recent Australian studies estimate that a 10% gain in efficiency in construction translates to a 2.5% increase in GDP.

Inefficiencies in the Australian construction industry have been identified by a number of recent studies modelling the building process. They have identified potential savings in time of between 25% and 40% by reducing non-value-added steps in the process. A culture of reform is now emerging in the industry, one in which alternative forms of project delivery are being trialled.

Government and industry have identified alliance contracting as a means to increase efficiency in the construction industry as part of a new, innovative procurement environment. Alliance contracting requires parties to form relationships and work cooperatively to provide a more complete service. This is a significant cultural change for the construction industry, with its well-known adversarial record in traditional contracting. Alliance contracts offer enormous potential benefits, but the Australian construction industry needs to develop new skills to participate effectively in the new relationship environment.

This paper describes a collaborative project identifying the skill needs of clients and construction professionals for more effective participation in an increasingly sophisticated international procurement environment. The aim of identifying these skill needs is to assist industry, government, and skill developers to prepare the Australian construction workforce for the future. The collaborating Australian team has been fortunate to secure the Australian National Museum in Canberra as its live case study: the Acton Peninsula Development is the first major building development in the world awarded on the basis of a joint alliance contract.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a new approach is proposed for interpreting regional frequencies in multi-machine power systems. The method uses generator aggregation and system reduction based on the coherent generators in each area. The structure of the reduced system can be identified, and a Kalman estimator is designed for the reduced system to estimate the inter-area modes from synchronized phasor measurement data. The proposed method is tested on a six-machine, three-area test system, and the results show that the inter-area oscillations in the system are estimated with high accuracy.
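
A minimal sketch of the kind of estimator described, here a discrete Kalman filter tracking a single inter-area oscillation mode from noisy synchronized (PMU-style) measurements. The mode frequency, damping, discretisation and noise covariances are illustrative assumptions, not values from the paper.

```python
import numpy as np

dt = 0.02                      # 50 Hz sampling
f_mode, zeta = 0.7, 0.05       # assumed inter-area mode: 0.7 Hz, 5% damping
wn = 2 * np.pi * f_mode

# Damped oscillator discretised with a simple Euler step (illustrative model).
A = np.eye(2) + dt * np.array([[0.0, 1.0],
                               [-wn**2, -2 * zeta * wn]])
H = np.array([[1.0, 0.0]])     # only the oscillation amplitude is measured
Q = 1e-5 * np.eye(2)           # process noise covariance (illustrative)
R = np.array([[1e-3]])         # measurement noise covariance (illustrative)

rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.0])
x_hat, P = np.zeros(2), np.eye(2)

for k in range(500):
    # Simulate the true mode and a noisy synchronized measurement.
    x_true = A @ x_true
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]))

    # Kalman predict step.
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q

    # Kalman update step.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("final estimate vs truth:", x_hat[0], x_true[0])
```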

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this study was to explore the road safety implications of illegal street racing and associated risky driving behaviours. This issue was considered in two ways: Phase 1 examined the descriptions of 848 illegal street racing and associated risky driving offences that occurred in Queensland, Australia, in order to estimate the risk associated with these behaviours; Phase 2 examined the traffic and crash histories of the 802 male offenders involved in these offences and compared them with those of an age-matched comparison group, in order to examine the risk associated with the driver. Phase 1 found that only 3.7% of these offences resulted in a crash (none of which were fatal), and that these crashes tended to be single-vehicle crashes in which the driver lost control of the vehicle and collided with a fixed object. Phase 2 found that the offender sample had significantly more traffic infringements, licence sanctions and crashes in the previous three years than the comparison group. It was concluded that while only a small proportion of racing and associated offences result in a crash, these offenders appear to be generally risky drivers who warrant special attention.

Relevance:

10.00%

Publisher:

Abstract:

Background: Bone healing is sensitive to the initial mechanical conditions, with tissue differentiation being determined within days of trauma. Whilst axial compression is regarded as stimulatory, the role of interfragmentary shear is controversial. The purpose of this study was to determine how the initial mechanical conditions produced by interfragmentary shear and torsion differ from those produced by axial compressive movements.

Methods: The finite element method was used to estimate the strain, pressure and fluid flow in the early callus tissue produced by the different modes of interfragmentary movement found in vivo. Additionally, tissue formation was predicted according to three principally different mechanobiological theories.

Findings: Large interfragmentary shear movements produced comparable strains and less fluid flow and pressure than moderate axial interfragmentary movements. Additionally, combined axial and shear movements did not result in overall increases in the strains, and the strain magnitudes were similar to those produced by axial movements alone. Only when axial movements were applied did the non-distortional component of the pressure–deformation theory influence the initial tissue predictions.

Interpretation: This study found that the mechanical stimuli generated by interfragmentary shear and torsion differed from those produced by axial interfragmentary movements. The initial tissue formation as predicted by the mechanobiological theories was dominated by the deformation stimulus.
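
For readers unfamiliar with mechano-regulation rules of the kind compared here, a Prendergast-style biophysical stimulus (octahedral shear strain plus interstitial fluid velocity) is a common example. The sketch below uses commonly cited constants purely for illustration and is not necessarily one of the three theories evaluated in this study.

```python
def tissue_phenotype(shear_strain, fluid_velocity_um_s, a=0.0375, b=3.0):
    """Classify predicted tissue type from a Prendergast-style biophysical stimulus
    S = gamma/a + v/b, where gamma is octahedral shear strain and v is interstitial
    fluid velocity (um/s). Constants and thresholds are commonly cited illustrative
    values, not parameters taken from this particular study.
    """
    s = shear_strain / a + fluid_velocity_um_s / b
    if s > 3.0:
        return "fibrous tissue"
    if s > 1.0:
        return "cartilage"
    if s > 0.01:
        return "bone"
    return "resorption"

# Example: element-level outputs from a callus finite element model (made-up numbers).
print(tissue_phenotype(shear_strain=0.05, fluid_velocity_um_s=2.0))  # moderate stimulus -> cartilage
```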

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the trade-off between energy consumption and localization performance in a mobile sensor network application. The focus is on augmenting GPS location with more energy-efficient location sensors to bound the position estimate uncertainty and so prolong node lifetime. We use empirical GPS and radio contact data from a large-scale animal tracking deployment to model node mobility and GPS and radio performance. These models are used to explore duty-cycling strategies for maintaining position uncertainty within specified bounds. We then explore the benefits of using short-range radio contact logging alongside GPS as an energy-inexpensive means of lowering uncertainty while the GPS is off, and we propose a versatile contact logging strategy that relies on RSSI ranging and GPS lock back-offs to reduce node energy consumption relative to GPS duty cycling. Results show that our strategy can cut node energy consumption by half while meeting application-specific positioning criteria.
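
The duty-cycling idea can be sketched as follows: keep the GPS off while the predicted position uncertainty stays within the application bound, and let short-range radio contacts cap the uncertainty in between. All growth rates, energy costs and contact probabilities below are made-up illustrative values, not figures from the deployment.

```python
import random

MAX_UNCERTAINTY_M = 50.0     # application-specific positioning bound (illustrative)
DRIFT_M_PER_S = 0.5          # assumed uncertainty growth from the mobility model
GPS_FIX_COST_J = 1.0         # assumed energy per GPS fix
RADIO_LOG_COST_J = 0.01      # assumed energy per contact log
CONTACT_RANGE_M = 30.0       # assumed short-range radio range

def simulate(seconds, contact_prob=0.1, use_radio=True, seed=0):
    rng = random.Random(seed)
    uncertainty, energy = 0.0, 0.0
    for _ in range(seconds):
        uncertainty += DRIFT_M_PER_S
        if use_radio and rng.random() < contact_prob:
            # A contact with a neighbour that has a recent fix bounds our
            # uncertainty to roughly the radio range.
            energy += RADIO_LOG_COST_J
            uncertainty = min(uncertainty, CONTACT_RANGE_M)
        if uncertainty >= MAX_UNCERTAINTY_M:
            energy += GPS_FIX_COST_J     # take a GPS fix only when forced to
            uncertainty = 0.0
    return energy

print("GPS only:       %.1f J" % simulate(3600, use_radio=False))
print("GPS + contacts: %.1f J" % simulate(3600, use_radio=True))
```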

Relevance:

10.00%

Publisher:

Abstract:

Estimating and predicting the degradation processes of engineering assets is crucial for reducing costs and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a good health state. A practical difficulty in condition-based maintenance (CBM) is that degradation indicators extracted from CM data can, in most situations, only partially reveal asset health states. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using such indicators that only partially reveal health states. However, existing state space models of asset degradation processes largely depend on assumptions of discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets.

This research proposes a Gamma-based state space model, free of the discrete-time, discrete-state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research develops a continuous-state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model under various maintenance strategies; optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies in MATLAB were performed, and case studies were conducted using data from an accelerated life test of a gearbox and from the liquefied natural gas industry. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. They also show that the proposed Gamma-based state space model fits better than linear and Gaussian state space models when used to process the monotonically increasing degradation data from the accelerated life test of the gearbox. Furthermore, both the simulation studies and the case studies show that the prediction algorithm based on the Gamma-based state space model can identify the mean value and confidence interval of asset remaining useful lives accurately. In addition, the simulation studies show that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory, and that it can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method by optimising the next maintenance activity and the waiting time until that activity simultaneously.
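
A minimal sketch of Monte Carlo remaining-useful-life estimation for a monotonic, gamma-increment degradation process of the kind modelled here; the parameters, failure threshold and the omission of partial observability are simplifying assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
shape_per_step, scale = 0.4, 0.25     # assumed gamma-increment parameters
failure_threshold = 10.0              # assumed failure level of the degradation state

def rul_samples(current_level, n_paths=5000, max_steps=500):
    """Simulate forward paths from the current degradation level and return
    the number of steps until each path crosses the failure threshold."""
    ruls = np.full(n_paths, max_steps)
    for i in range(n_paths):
        level = current_level
        for t in range(1, max_steps + 1):
            level += rng.gamma(shape_per_step, scale)
            if level >= failure_threshold:
                ruls[i] = t
                break
    return ruls

samples = rul_samples(current_level=7.0)
print("mean RUL: %.1f steps" % samples.mean())
print("90%% interval: %.0f-%.0f steps" % tuple(np.percentile(samples, [5, 95])))
```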

Relevance:

10.00%

Publisher:

Abstract:

Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use a structured questionnaire in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify harmful pests and correctly identify those that are not harmful. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values for different scoring systems from these data. The interest of our approach is illustrated in a case study in which several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of the scoring systems and to assess the accuracy of the decisions about positive and negative introduction. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
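
The simulation idea can be sketched as follows: generate virtual pests whose latent risk drives noisy ordinal item scores, combine the scores by sum or by product, and compare the resulting sensitivity and specificity. The score-generation model and decision thresholds below are illustrative, not the stochastic model used in the article.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pests, n_items, n_points = 2000, 5, 5

true_risk = rng.random(n_pests)                     # latent introduction risk
harmful = rng.random(n_pests) < true_risk           # "truth": introduction happens or not

# Ordinal 1..n_points scores: higher latent risk shifts the score distribution upward.
noise = rng.normal(0.0, 0.15, size=(n_pests, n_items))
scores = np.clip(np.round(true_risk[:, None] * (n_points - 1) + 1 + noise * n_points),
                 1, n_points).astype(int)

def sens_spec(decision):
    """Sensitivity and specificity of a boolean 'flag as harmful' decision."""
    sens = decision[harmful].mean()
    spec = (~decision[~harmful]).mean()
    return sens, spec

sum_score = scores.sum(axis=1)
prod_score = scores.prod(axis=1)

print("sum-based:     sens=%.2f spec=%.2f" % sens_spec(sum_score >= np.median(sum_score)))
print("product-based: sens=%.2f spec=%.2f" % sens_spec(prod_score >= np.median(prod_score)))
```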

Relevance:

10.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, which is caused by abnormalities in the properties of the tear film, is one of the most commonly reported eye health problems. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no “gold standard” test is currently available to assess the tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and is the main motivation for the work described in this thesis.

In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected from the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern also becomes irregular due to light scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts a maximized area of analysis from each frame of the video recording, and a TFSQ metric is calculated within this area. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to show a clear difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ.

Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while the LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT).
The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural blinking and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-ring pattern into an image of quasi-straight lines from which a block statistic is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves.

Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase gave some insight into the dynamics of this initial phase.

Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to the considerations for selecting an appropriate model order to ensure that the true derivative of the signal is accurately represented.

The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
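
The polar-transform/block-statistics metric described above can be sketched roughly as follows: remap the Placido ring image from Cartesian to polar coordinates so the rings become quasi-straight lines, then summarise local pattern regularity with a per-block statistic. The block size and the particular statistic (standard deviation of the angular gradient) are assumptions for illustration, not the thesis implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_transform(img, centre, n_r=128, n_theta=360, r_max=None):
    """Remap a ring image to polar coordinates (rows = radius, cols = angle)."""
    cy, cx = centre
    if r_max is None:
        r_max = min(cy, cx, img.shape[0] - 1 - cy, img.shape[1] - 1 - cx)
    r = np.linspace(0, r_max, n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    coords = np.stack([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(img, coords, order=1)

def block_metric(polar_img, block=16):
    """Mean per-block standard deviation of the angular gradient: low values mean
    straight, regular ring lines (good TFSQ); high values mean a disturbed pattern."""
    grad = np.diff(polar_img, axis=1)
    h, w = grad.shape
    blocks = grad[: h - h % block, : w - w % block].reshape(h // block, block,
                                                            w // block, block)
    return blocks.std(axis=(1, 3)).mean()

# Synthetic example: ideal concentric rings vs. a locally perturbed ring pattern.
y, x = np.mgrid[0:256, 0:256]
radius = np.hypot(y - 128, x - 128)
rings = np.sin(radius / 4.0)
noisy = rings + 0.3 * np.random.default_rng(0).normal(size=rings.shape)
for name, im in [("smooth", rings), ("disturbed", noisy)]:
    print(name, round(block_metric(polar_transform(im, (128, 128))), 3))
```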

Relevance:

10.00%

Publisher:

Abstract:

In vector space based approaches to natural language processing, similarity is commonly measured by taking the angle between two vectors representing words or documents in a semantic space. This is natural from a mathematical point of view, as the angle between unit vectors is, up to constant scaling, the only unitarily invariant metric on the unit sphere. However, similarity judgement tasks reveal that human subjects fail to produce data which satisfy the symmetry and triangle inequality requirements of a metric space. A possible conclusion, reached in particular by Tversky et al., is that some of the most basic assumptions of geometric models are unwarranted in the case of psychological similarity, a result which would impose strong limits on the validity and applicability of vector space based (and hence also quantum inspired) approaches to the modelling of cognitive processes. This paper proposes a resolution to this fundamental criticism of the applicability of vector space models of cognition. We argue that a pair of words implies a context, which in turn induces a point of view, allowing a subject to estimate semantic similarity. Context is introduced here as a point of view vector (POVV), and the expected similarity is derived as a measure over the POVVs. Different pairs of words will invoke different contexts and different POVVs; hence the triangle inequality ceases to be a valid constraint on the angles. We test the proposal on a few triples of words and outline further research.
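
A toy numerical illustration of the point-of-view idea: the context re-weights the dimensions of the word vectors before the angle is taken, so different word pairs, invoking different contexts, yield similarities that need not jointly satisfy the triangle inequality. The vectors, dimensions and re-weighting scheme below are invented for illustration and are not the paper's model.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pov_similarity(w1, w2, povv):
    """Cosine similarity after element-wise re-weighting by the POVV (context)."""
    return cosine(w1 * povv, w2 * povv)

# Three toy word vectors over dimensions (animal-ness, machine-ness, metaphor-ness).
horse  = np.array([1.0, 0.1, 0.2])
mule   = np.array([0.9, 0.2, 0.3])
engine = np.array([0.1, 1.0, 0.2])

pov_biology  = np.array([1.0, 0.1, 0.1])   # context a pair like (horse, mule) might invoke
pov_workload = np.array([0.2, 1.0, 1.0])   # context a pair like (mule, engine) might invoke

print(pov_similarity(horse, mule, pov_biology))     # high: similar as animals
print(pov_similarity(mule, engine, pov_workload))   # raised by the workload context
print(cosine(horse, engine))                        # plain cosine without a POVV
```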

Relevance:

10.00%

Publisher:

Abstract:

INTRODUCTION. Following anterior thoracoscopic instrumentation and fusion for the treatment of thoracic AIS, implant-related complications have been reported in up to 20.8% of cases. Currently, the magnitudes of the forces applied to the spine during anterior scoliosis surgery are unknown. The aim of this study was to measure the segmental compressive forces applied during anterior single rod instrumentation in a series of adolescent idiopathic scoliosis patients. METHODS. A force transducer was designed, constructed and retrofitted to a surgical cable compression tool routinely used to apply segmental compression during anterior scoliosis correction. The transducer output was continuously logged during the compression of each spinal joint, and the output at completion was converted to an applied compression force using calibration data. The angle between adjacent vertebral body screws was also measured on intra-operative frontal-plane fluoroscope images taken before and after each joint compression; the difference in angle between the two images was taken as an estimate of the correction achieved at each spinal joint. RESULTS. Force measurements were obtained for 15 scoliosis patients (aged 11-19 years) with single thoracic curves (Cobb angles 47°-67°). In total, 95 spinal joints were instrumented. The average force applied at a single joint was 540 N (±229 N), ranging between 88 N and 1018 N. The experimental error in the force measurement, determined from the transducer calibration, was ±43 N. A trend towards higher forces at joints close to the apex of the scoliosis was observed. The average joint correction angle measured from the fluoroscope images was 4.8° (±2.6°, range 0°-12.6°). CONCLUSION. This study has quantified, in vivo, the intra-operative correction forces applied by the surgeon during anterior single rod instrumentation. These data provide a useful contribution towards an improved understanding of the biomechanics of scoliosis correction; in particular, they will be used as input for developing patient-specific finite element simulations of scoliosis correction surgery.
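
The two measurement steps described (calibrated force from the logged transducer output, and joint correction from the change in inter-screw angle) can be sketched as follows; the calibration coefficients and example readings are hypothetical, not values from the study.

```python
# Hypothetical linear calibration from raw transducer output to force (N).
CAL_SLOPE_N_PER_COUNT = 0.52
CAL_OFFSET_N = -4.0

def applied_force_newtons(transducer_counts):
    """Convert a logged transducer reading to compression force via a linear calibration."""
    return CAL_SLOPE_N_PER_COUNT * transducer_counts + CAL_OFFSET_N

def joint_correction_degrees(angle_before_deg, angle_after_deg):
    """Correction achieved at a spinal joint: change in the angle between the adjacent
    vertebral body screws measured on frontal-plane fluoroscope images."""
    return abs(angle_after_deg - angle_before_deg)

print(applied_force_newtons(1050))              # e.g. ~542 N at completion of compression
print(joint_correction_degrees(12.3, 7.6))      # e.g. 4.7 degrees of correction
```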

Relevance:

10.00%

Publisher:

Abstract:

Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad or the crack depth on a gear), which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data), which can only partially reveal a failure mechanism. While direct indicators enable a more precise assessment of asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions of discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated using numerical simulations in MATLAB, and the results show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model shows a better fit than a state space model with linear and Gaussian assumptions.
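
A minimal sketch of the particle-filter idea behind such Monte Carlo algorithms: a hidden direct indicator (e.g. crack depth) grows monotonically with gamma-distributed increments, a noisy indirect indicator is observed, and particles are propagated, weighted by the observation likelihood and resampled. All model parameters and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, n_particles = 60, 2000
shape, scale, obs_sigma = 0.5, 0.1, 0.3   # illustrative increment and noise parameters

# Simulate a hidden degradation path and the indirect observations of it.
true_state = np.cumsum(rng.gamma(shape, scale, n_steps))
obs = true_state + rng.normal(0.0, obs_sigma, n_steps)

particles = np.zeros(n_particles)
estimates = []
for z in obs:
    # Propagate: each particle grows by a gamma-distributed increment (monotone).
    particles = particles + rng.gamma(shape, scale, n_particles)
    # Weight by the likelihood of the indirect observation given the particle.
    w = np.exp(-0.5 * ((z - particles) / obs_sigma) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # Resample to avoid weight degeneracy.
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

print("final true state: %.2f, estimate: %.2f" % (true_state[-1], estimates[-1]))
```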

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of how to efficiently and safely design dose finding studies. Both current and novel utility functions are explored using Bayesian adaptive design methodology for the estimation of a maximum tolerated dose (MTD). In particular, we explore widely adopted approaches such as the continual reassessment method and minimizing the variance of the estimate of an MTD. New utility functions are constructed in the Bayesian framework and are evaluated against current approaches. To reduce computing time, importance sampling is implemented to re-weight posterior samples, thus avoiding the need to draw samples using Markov chain Monte Carlo techniques. Further, as such studies are generally first-in-man, the safety of patients is paramount. We therefore explore methods for incorporating safety considerations into utility functions to ensure that only safe and well-predicted doses are administered. The amalgamation of Bayesian methodology, adaptive design and compound utility functions is termed adaptive Bayesian compound design (ABCD). The performance of this methodology is investigated via the simulation of dose finding studies. The paper concludes with a discussion of results and extensions that could be included in our approach.
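
The importance-sampling step can be sketched as follows: existing posterior samples of a dose-toxicity parameter are re-weighted by the likelihood of a hypothetical new patient outcome, so a utility such as the posterior variance of the MTD can be evaluated for each candidate dose without re-running MCMC. The one-parameter toxicity model, prior and doses below are illustrative, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(42)
doses = np.array([1.0, 2.0, 4.0, 8.0])
target_tox = 0.3                                   # target toxicity defining the MTD

def p_tox(dose, beta):
    return 1.0 - np.exp(-beta * dose)              # toy one-parameter dose-toxicity curve

beta_samples = rng.gamma(20.0, 0.005, 10000)       # stand-in for current posterior samples

def weighted_mtd_variance(dose, outcome, beta):
    """Variance of the MTD estimate after re-weighting samples by the likelihood
    of observing `outcome` (1 = toxicity) at `dose`."""
    lik = p_tox(dose, beta) if outcome == 1 else 1.0 - p_tox(dose, beta)
    w = lik / lik.sum()
    mtd = -np.log(1.0 - target_tox) / beta         # dose giving the target toxicity per sample
    mean = np.sum(w * mtd)
    return np.sum(w * (mtd - mean) ** 2)

# Expected posterior variance of the MTD if the next patient received dose d,
# averaging over the predicted toxic / non-toxic outcomes (a simple utility).
for d in doses:
    p = np.mean(p_tox(d, beta_samples))
    exp_var = p * weighted_mtd_variance(d, 1, beta_samples) + \
              (1 - p) * weighted_mtd_variance(d, 0, beta_samples)
    print("dose %.1f -> expected MTD variance %.3f" % (d, exp_var))
```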

Relevance:

10.00%

Publisher:

Abstract:

The proportion of functional sequence in the human genome is currently a subject of debate. The most widely accepted figure is that approximately 5% is under purifying selection. In Drosophila, estimates are an order of magnitude higher, though this corresponds to a similar quantity of sequence. These estimates depend on the difference between the distribution of genome-wide evolutionary rates and that observed in a subset of sequences presumed to be neutrally evolving. Motivated by the widening gap between these estimates and experimental evidence of genome function, especially in mammals, we developed a sensitive technique for evaluating such distributions and found that they are much more complex than previously apparent. We found strong evidence for at least nine well-resolved evolutionary rate classes in an alignment of four Drosophila species and at least seven classes in an alignment of four mammals, including human. We also identified at least three rate classes in human ancestral repeats. By positing that the largest of these ancestral repeat classes is neutrally evolving, we estimate that the proportion of non-neutrally evolving sequence is 30% of human ancestral repeats and 45% of the aligned portion of the genome. However, we also question whether any of the classes represent neutrally evolving sequences and argue that a plausible alternative is that they reflect variable structure-function constraints operating throughout the genomes of complex organisms.
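
As a generic illustration of resolving rate classes from a rate distribution (not the paper's own, more sensitive technique), one can fit a mixture model to per-segment rates, take the fastest class as the presumed neutral one, and report the weight of the slower classes as the non-neutral fraction. The synthetic rates and the Gaussian mixture below are purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic per-segment relative evolutionary rates drawn from three made-up classes.
rates = np.concatenate([
    rng.normal(1.00, 0.08, 6000),   # presumed neutral class
    rng.normal(0.70, 0.08, 3000),   # moderately constrained class
    rng.normal(0.35, 0.06, 1000),   # strongly constrained class
]).reshape(-1, 1)

# Pick the number of rate classes by BIC, then read off class means and weights.
best = min((GaussianMixture(k, random_state=0).fit(rates) for k in range(1, 7)),
           key=lambda m: m.bic(rates))
neutral_mean = best.means_.ravel().max()
# Classes clearly slower than the presumed neutral class count as non-neutral
# (the 0.1 margin is an arbitrary separation for this toy example).
non_neutral = best.weights_[best.means_.ravel() < neutral_mean - 0.1].sum()
print("classes: %d, estimated non-neutral fraction: %.2f" % (best.n_components, non_neutral))
```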

Relevance:

10.00%

Publisher:

Abstract:

As regional and continental carbon balances of terrestrial ecosystems become available, it is becoming clear that soils are the largest source of uncertainty. Repeated inventories of soil organic carbon (SOC) organised in soil monitoring networks (SMNs) are being implemented in a number of countries. This paper reviews the concepts and design of SMNs in ten countries and discusses the contribution of such networks to reducing the uncertainty of soil carbon balances. Some SMNs are designed to estimate country-specific land use or management effects on SOC stocks, while others collect soil carbon and ancillary data to provide a nationally consistent assessment of soil carbon condition across the major land-use/soil type combinations. The former use a single sampling campaign of paired sites, while the latter use both systematic (usually grid-based) and stratified repeated sampling campaigns (5–10 year interval) with densities of one site per 10–1,040 km². For paired sites, multiple samples are taken at each site to allow statistical analysis, while for single sites, composite samples are taken. In both cases, fixed depth increments together with samples for bulk density and stone content are recommended. Samples should be archived to allow re-measurement using updated techniques. Information on land management and, where possible, land use history should be systematically recorded for each site. A case study of the agricultural frontier in Brazil is presented, in which land use effect factors are calculated in order to quantify the CO2 fluxes from national land use/management conversion matrices. Process-based SOC models can be run for the individual points of an SMN, provided detailed land management records are available; such studies are still rare, as most SMNs have been implemented recently or are in progress. Examples from the USA and Belgium show that uncertainties in SOC change range from 1.6–6.5 Mg C ha⁻¹ for the prediction of SOC stock changes at individual sites to 11.72 Mg C ha⁻¹, or 34% of the median SOC change, for soil/land use/climate units. For national SOC monitoring, stratified sampling appears to be the most straightforward approach for attributing SOC values to units with similar soil/land use/climate conditions (i.e. a spatially implicit upscaling approach).

Keywords: Soil monitoring networks; Soil organic carbon; Modeling; Sampling design
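
The repeated-inventory approach rests on computing SOC stocks per fixed depth increment from carbon concentration, bulk density, depth and stone content (the quantities the review recommends sampling). A minimal sketch of the standard stock equation, with made-up example values:

```python
def soc_stock_mg_per_ha(carbon_fraction, bulk_density_mg_m3, depth_m, stone_fraction=0.0):
    """Soil organic carbon stock (Mg C ha^-1) for one fixed-depth increment:
    C fraction of the fine earth x bulk density x depth x (1 - stone fraction),
    scaled to one hectare (10,000 m^2). Standard stock equation; the example
    values below are made up.
    """
    fine_soil_mass = bulk_density_mg_m3 * depth_m * 10_000 * (1.0 - stone_fraction)
    return carbon_fraction * fine_soil_mass

# Example: a 0-30 cm increment with 2% organic C, bulk density 1.3 Mg m^-3, 5% stones.
print(round(soc_stock_mg_per_ha(0.02, 1.3, 0.30, 0.05), 1))   # ~74.1 Mg C ha^-1

# SOC change between two inventories of the same site (re-measured increment).
print(round(soc_stock_mg_per_ha(0.021, 1.28, 0.30) -
            soc_stock_mg_per_ha(0.020, 1.30, 0.30), 1))        # ~2.6 Mg C ha^-1
```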