929 results for Direct Strength Method and Experiments
Abstract:
An improved understanding of soil organic carbon (Corg) dynamics in interaction with the mechanisms of soil structure formation is important for sustainable agriculture and for reducing the environmental costs of agricultural ecosystems. However, information on the physical and chemical processes influencing the formation and stabilization of water-stable aggregates in association with Corg sequestration is scarce. Long-term soil experiments are important for evaluating open questions about management-induced effects on soil Corg dynamics in interaction with soil structure formation. The objectives of the present thesis were: (i) to determine the long-term impacts of different tillage treatments on the interaction between macro-aggregation (>250 µm) and light fraction (LF) distribution and on C sequestration in plots differing in soil texture and climatic conditions; (ii) to determine the impact of different tillage treatments on temporal changes in the size distribution of water-stable aggregates and on macro-aggregate turnover; (iii) to evaluate macro-aggregate rebuilding in soils with varying initial Corg contents, organic matter (OM) amendments and clay contents in a short-term incubation experiment. Soil samples were taken at 0-5 cm, 5-25 cm and 25-40 cm depth from up to four commercially used fields located in arable loess regions of eastern and southern Germany after 18-25 years of different tillage treatments with almost identical experimental setups per site. At each site, one large field with spatially homogeneous soil properties was divided into three plots. One of the following three tillage treatments was carried out in each plot: (i) conventional tillage (CT) with annual mouldboard ploughing to 25-30 cm depth, (ii) mulch tillage (MT) with a cultivator or disc harrow to 10-15 cm depth, and (iii) no tillage (NT) with direct drilling. The crop rotation at each site consisted of sugar beet (Beta vulgaris L.) - winter wheat (Triticum aestivum L.) - winter wheat. Crop residues were left on the field and crop management followed the regional standards of agricultural practice. To investigate the above-mentioned research objectives, three experiments were conducted: experiment (i) was performed with soils sampled from four sites in April 2010 (wheat stand); experiment (ii) was conducted with soils sampled from three sites in April 2010, September 2011 (after harvest or sugar beet stand), November 2011 (after tillage) and April 2012 (bare soil or wheat stand); an incubation study (experiment (iii)) was performed with soil sampled from one site in April 2010. Based on the aforementioned research objectives and experiments, the main findings were: (i) Consistent results were found across the four long-term tillage fields, which varied in texture and climatic conditions. Correlation analysis of the yields of macro-aggregates against the yields of free LF (≤1.8 g cm−3) and occluded LF, respectively, suggested that, under CT and MT, the effective litter translocation into greater soil depths and the higher litter input compensated in the long term for the greater physical impact of the tillage equipment compared with NT. The Corg stocks (kg Corg m−2) in 522 kg soil, based on the equivalent soil mass approach (CT: 0-40 cm, MT: 0-38 cm, NT: 0-36 cm), increased in the order CT (5.2) = NT (5.2) < MT (5.7).
The significantly (p ≤ 0.05) highest Corg stocks under MT were probably a result of high crop yields in combination with reduced physical tillage impact and effective litter incorporation, resulting in a Corg sequestration rate of 31 g C m−2 yr−1. (ii) Significantly higher yields of macro-aggregates (g kg−1 soil) under NT (732-777) and MT (680-726) than under CT (542-631) were generally restricted to the 0-5 cm sampling depth for all sampling dates. Temporal changes in aggregate size distribution were only small and no tillage-induced net effect was detectable. Thus, we assume that the physical impact of the tillage equipment was only small, or that the impact was compensated by greater soil mixing and effective litter translocation into greater soil depths under CT, which probably resulted in high re-aggregation. (iii) The short-term incubation study showed that macro-aggregate yields (g kg−1 soil) were higher after 28 days in soils receiving OM (121.4-363.0) than in the control soils (22.0-52.0), accompanied by higher contents of microbial biomass carbon and ergosterol. The highest soil respiration rates after OM amendment occurred within the first three days of incubation, indicating that macro-aggregate formation is a fast process. Most of the rebuilt macro-aggregates (42-75%) were formed within the first seven days of incubation. Nevertheless, formation was ongoing throughout the entire 28 days of incubation, as indicated by higher soil respiration rates at the end of the incubation period in OM-amended soils than in the control soils. At the same time, decreasing carbon contents within macro-aggregates over time indicated that newly occluded OM within the rebuilt macro-aggregates served as a Corg source for the microbial biomass. The different clay contents played only a minor role in macro-aggregate formation under the particular conditions of the incubation study. Overall, no net changes in macro-aggregation were identified in the short term. Furthermore, no indications of effective long-term Corg sequestration under NT in comparison to CT were found. The interaction of soil disturbance, litter distribution and fast re-aggregation suggested that a distinct steady state per tillage treatment in terms of soil aggregation had been established. However, continuous application of MT, with its combination of reduced physical tillage impact and effective litter incorporation, may offer some potential for improving soil structure, and may therefore prevent incorporated LF from rapid decomposition and result in higher C sequestration in the long term.
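As context for the "equivalent soil mass approach" used for the Corg stocks above, here is a minimal Python sketch of the generic calculation: cumulative carbon is interpolated at a fixed reference soil mass rather than a fixed depth. The layer values and the function are illustrative placeholders, not the thesis' data.

```python
# Minimal sketch of the equivalent-soil-mass approach to Corg stocks.
# All layer values below are illustrative placeholders, not thesis data.
import numpy as np

def corg_stock_esm(thickness_cm, bulk_density, corg_conc, ref_mass):
    """Corg stock (kg C m-2) contained in a fixed reference soil mass.

    thickness_cm : layer thicknesses from the surface downward (cm)
    bulk_density : layer bulk densities (g cm-3)
    corg_conc    : layer Corg concentrations (g C kg-1 soil)
    ref_mass     : reference soil mass per unit area (kg m-2), e.g. 522
    """
    thickness_cm = np.asarray(thickness_cm, float)
    bulk_density = np.asarray(bulk_density, float)
    corg_conc = np.asarray(corg_conc, float)
    layer_mass = bulk_density * thickness_cm * 10.0   # kg soil m-2 per layer
    layer_c = layer_mass * corg_conc / 1000.0         # kg C m-2 per layer
    cum_mass = np.concatenate([[0.0], np.cumsum(layer_mass)])
    cum_c = np.concatenate([[0.0], np.cumsum(layer_c)])
    # Interpolate cumulative carbon at the reference soil mass.
    return float(np.interp(ref_mass, cum_mass, cum_c))

# Hypothetical profile sampled at 0-5, 5-25 and 25-40 cm:
print(corg_stock_esm([5, 20, 15], [1.30, 1.45, 1.50], [14, 10, 6], 522.0))
```

Comparing stocks within the same soil mass avoids the bias that tillage-driven differences in bulk density introduce into fixed-depth comparisons, which is why the sampling depths above differ slightly between treatments.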
Abstract:
We study the preconditioning of symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method that we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the preconditioned matrix and hence the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments have been performed on LP test problems from the NETLIB suite to demonstrate the potential of the preconditioning method discussed.
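The abstract does not spell out the preconditioner, so the following SciPy sketch is only a generic illustration of the idea: the symmetric indefinite augmented (KKT) matrix K = [[-D, A^T], [A, 0]] from an LP interior-point iterate is preconditioned with a block-diagonal operator built from D and the Schur complement A D^{-1} A^T, and solved with MINRES. The specific block choice, sizes and data are all assumptions, not the authors' method.

```python
# Generic sketch: block-diagonal preconditioning of the augmented (KKT)
# system K = [[-D, A^T], [A, 0]] at an LP interior-point iterate, with
# D = X^{-1}S diagonal and positive. The block choice
# P = blkdiag(D, A D^{-1} A^T) and all data below are assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
m, n = 30, 80                                   # toy constraint/variable counts
A = sp.random(m, n, density=0.2, random_state=0, format="csr")
d = rng.uniform(0.1, 10.0, n)                   # diagonal of D at some iterate
K = sp.bmat([[-sp.diags(d), A.T], [A, None]], format="csr")
rhs = rng.standard_normal(n + m)

# Factor the (regularised) Schur complement once; reuse per MINRES iteration.
S = (A @ sp.diags(1.0 / d) @ A.T).toarray() + 1e-12 * np.eye(m)
chol = np.linalg.cholesky(S)

def apply_prec(v):
    """Apply P^{-1}: divide by D, then solve with the Schur complement."""
    y = np.linalg.solve(chol.T, np.linalg.solve(chol, v[n:]))
    return np.concatenate([v[:n] / d, y])

P = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = spla.minres(K, rhs, M=P)
print("converged:", info == 0, " residual:", np.linalg.norm(K @ x - rhs))
```

In idealized analyses, block-diagonal choices of this kind leave the preconditioned matrix with only a few distinct eigenvalues, which is the sort of spectral improvement the abstract refers to.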
Abstract:
Background: Isometric grip strength, evaluated with a handgrip dynamometer, is a marker of current nutritional status and cardiometabolic risk and of future morbidity and mortality. We present reference values for handgrip strength in healthy young Colombian adults (aged 18 to 29 years). Methods: The sample comprised 5,647 (2,330 men and 3,317 women) apparently healthy young university students (mean age, 20.6±2.7 years) attending public and private institutions in the cities of Bogotá and Cali (Colombia). Handgrip strength was measured twice with a TKK analogue dynamometer in both hands, and the highest value was used in the analysis. Sex- and age-specific normative values for handgrip strength were calculated using the LMS method and expressed as tabulated percentiles from 3 to 97 and as smoothed centile curves (P3, P10, P25, P50, P75, P90 and P97). Results: Mean values for right and left handgrip strength were 38.1±8.9 and 35.9±8.6 kg for men, and 25.1±8.7 and 23.3±8.2 kg for women, respectively. Handgrip strength increased with age in both sexes and was significantly higher in men in all age categories. The results were generally more homogeneous amongst men than women. Conclusions: Sex- and age-specific handgrip strength normative values for healthy young Colombian adults are defined. This information may be helpful in future studies of secular trends in handgrip strength and in identifying clinically relevant cut points for poor nutritional status and elevated cardiometabolic risk in a Latin American population. Evidence of decline in handgrip strength before the end of the third decade is of concern and warrants further investigation.
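The LMS method mentioned in the Methods maps each centile to a z-score through age-specific L (skewness), M (median) and S (coefficient-of-variation) parameters. A small Python sketch of the standard LMS formulas follows; the example parameter values are hypothetical, not the study's fitted values.

```python
# Standard LMS formulas (Cole's method) for centiles and z-scores.
# The example L, M, S values are hypothetical, not the study's fits.
import math
from scipy.stats import norm

def lms_centile(L, M, S, p):
    """Value of the p-th smoothed centile for one age/sex group."""
    z = norm.ppf(p / 100.0)
    if abs(L) < 1e-9:                  # limiting lognormal case, L -> 0
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

def lms_zscore(x, L, M, S):
    """Z-score of a measurement x under the same LMS parameters."""
    if abs(L) < 1e-9:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

L, M, S = -0.35, 38.1, 0.23            # hypothetical parameters (men, ~20 y)
print([round(lms_centile(L, M, S, p), 1) for p in (3, 10, 25, 50, 75, 90, 97)])
```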
Abstract:
The service independence and flexibility of ATM networks make the control of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for connection admission control (CAC) in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the retention of the network performance guarantees. Several experiments have been carried out and analysed to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to delimit the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay is a limit that cannot be dismissed for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method for bandwidth allocation. This method gives sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations. To overcome these drawbacks, a new method of evaluation is analysed: the Enhanced Convolution Approach (ECA). In ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant cost factor for the formula-based convolution, whereas evaluation cost is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj); an expression for evaluating CLRj is also presented. We can conclude that, by combining the ECA method with cut-off mechanisms, utilisation of ECA in real-time CAC environments as a single-level scheme is always possible.
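To make the convolution approach concrete, the following Python sketch (an illustration only, not the thesis' ECA implementation) convolves independent on/off sources into the distribution of the aggregate cell rate and estimates the bufferless CLR as the expected excess rate over capacity divided by the expected offered rate. All source and link parameters are made up.

```python
# Illustrative convolution-based CAC sketch for a bufferless link.
import numpy as np

def aggregate_rate_pmf(sources, rate_unit=1):
    """Convolve on/off sources; sources = [(peak_rate, prob_on), ...].
    Peak rates are treated as integer multiples of rate_unit."""
    pmf = np.array([1.0])                       # P(aggregate rate = 0) = 1
    for peak, p_on in sources:
        k = int(round(peak / rate_unit))
        src = np.zeros(k + 1)
        src[0], src[k] = 1.0 - p_on, p_on       # off / on states
        pmf = np.convolve(pmf, src)
    return pmf

def cell_loss_ratio(pmf, capacity, rate_unit=1):
    """CLR ~ E[(R - C)+] / E[R] for aggregate rate R and capacity C."""
    rates = np.arange(len(pmf)) * rate_unit
    excess = np.maximum(rates - capacity, 0.0)
    return (pmf * excess).sum() / (pmf * rates).sum()

# 40 homogeneous sources, peak 10 Mb/s, activity 0.25, link 155 Mb/s:
pmf = aggregate_rate_pmf([(10, 0.25)] * 40, rate_unit=10)
print(f"estimated CLR = {cell_loss_ratio(pmf, 155, rate_unit=10):.2e}")
```

Grouping identical sources into classes, as ECA does, replaces this per-source convolution with one multinomial term per class, which is where the computational savings come from.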
Abstract:
Similarities between the anatomies of living organisms are often used to draw conclusions regarding the ecology and behaviour of extinct animals. Several pterosaur taxa are postulated to have been skim-feeders based largely on supposed convergences of their jaw anatomy with that of the modern skimming bird, Rynchops spp. Using physical and mathematical models of Rynchops bills and pterosaur jaws, we show that skimming is considerably more energetically costly than previously thought for Rynchops and that pterosaurs weighing more than one kilogram would not have been able to skim at all. Furthermore, anatomical comparisons between the highly specialised skull of Rynchops and those of postulated skimming pterosaurs suggest that even smaller forms were poorly adapted for skim-feeding. Our results refute the hypothesis that some pterosaurs commonly used skimming as a foraging method and illustrate the pitfalls involved in extrapolating from limited morphological convergence.
Abstract:
Details of the parameters of kinetic systems are crucial for progress in both medical and industrial research, including drug development, clinical diagnosis and biotechnology applications. Such details must be collected through a series of kinetic experiments and investigations. Correct experimental design is essential for collecting data suitable for analysis, modelling and deriving the correct information. We have developed a systematic and iterative Bayesian method and sets of rules for the design of enzyme kinetic experiments. Our method selects the optimum design to collect data suitable for accurate modelling and analysis, and minimises the error in the estimated parameters. The rules select features of the design such as the substrate range and the number of measurements. We show here that this method can be directly applied to the study of other important kinetic systems, including drug transport, receptor binding, microbial culture and cell transport kinetics. It is possible to reduce the errors in the estimated parameters and, most importantly, to increase efficiency and cost-effectiveness by reducing the number of experiments and data points measured. (C) 2003 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
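The authors' Bayesian design rules are not reproduced in the abstract, but the flavour of optimal experimental design for enzyme kinetics can be sketched with a generic D-optimality criterion for the Michaelis-Menten model: choose substrate levels that maximise the determinant of the Fisher information at a prior parameter guess. The candidate grid and prior values below are hypothetical.

```python
# Generic D-optimality sketch for Michaelis-Menten kinetics (an
# illustration of the kind of computation involved; not the authors'
# Bayesian rules): pick substrate levels maximising det of the Fisher
# information for theta = (Vmax, Km) in v = Vmax*S/(Km + S) + noise.
import numpy as np
from itertools import combinations

def fisher_information(S_levels, Vmax, Km, sigma=1.0):
    S = np.asarray(S_levels, float)
    J = np.column_stack([S / (Km + S),                 # dv/dVmax
                         -Vmax * S / (Km + S) ** 2])   # dv/dKm
    return J.T @ J / sigma**2

candidates = [0.5, 1, 2, 5, 10, 20, 50, 100]   # hypothetical substrate grid
Vmax0, Km0 = 10.0, 5.0                         # prior guess at parameters

best = max(combinations(candidates, 4),
           key=lambda design: np.linalg.det(
               fisher_information(design, Vmax0, Km0)))
print("D-optimal 4-point design from the grid:", best)
```

A Bayesian variant would average such a criterion over a prior on (Vmax, Km) and iterate as data accumulate, which matches the "systematic and iterative" description above.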
Abstract:
1. Wildlife managers often require estimates of abundance. Direct methods of estimation are often impractical, especially in closed-forest environments, so indirect methods such as dung or nest surveys are increasingly popular. 2. Dung and nest surveys typically have three elements: surveys to estimate abundance of the dung or nests; experiments to estimate the production (defecation or nest construction) rate; and experiments to estimate the decay or disappearance rate. The last of these is usually the most problematic, and was the subject of this study. 3. The design of experiments to allow robust estimation of mean time to decay was addressed. In most studies to date, dung or nests have been monitored until they disappear. Instead, we advocate that fresh dung or nests are located, with a single follow-up visit to establish whether the dung or nest is still present or has decayed. 4. Logistic regression was used to estimate probability of decay as a function of time, and possibly of other covariates. Mean time to decay was estimated from this function. 5. Synthesis and applications. Effective management of mammal populations usually requires reliable abundance estimates. The difficulty in estimating abundance of mammals in forest environments has increasingly led to the use of indirect survey methods, in which abundance of sign, usually dung (e.g. deer, antelope and elephants) or nests (e.g. apes), is estimated. Given estimated rates of sign production and decay, sign abundance estimates can be converted to estimates of animal abundance. Decay rates typically vary according to season, weather, habitat, diet and many other factors, making reliable estimation of mean time to decay of signs present at the time of the survey problematic. We emphasize the need for retrospective rather than prospective rates, propose a strategy for survey design, and provide analysis methods for estimating retrospective rates.
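Point 4 of the summary can be illustrated with a short Python sketch (an assumed implementation, not the authors' code): logistic regression of "sign still present" on time since deposition, with mean time to decay obtained by integrating the fitted survival curve.

```python
# Sketch of the advocated single-revisit design: logistic regression of
# "sign still present" on age, then mean time to decay as the integral
# of the fitted survival curve. The follow-up data below are made up.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.integrate import quad

t = np.array([5, 10, 15, 20, 30, 40, 60, 80, 100, 120], float)  # age (days)
y = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0], float)             # present?

def nll(params):
    a, b = params
    p = np.clip(expit(a - b * t), 1e-12, 1 - 1e-12)  # P(present at age t)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

a, b = minimize(nll, x0=[3.0, 0.05]).x
mean_decay_time, _ = quad(lambda u: expit(a - b * u), 0, np.inf)
print(f"estimated mean time to decay ~ {mean_decay_time:.1f} days")
```

Covariates such as season or habitat would enter the linear predictor in the usual logistic-regression way.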
Abstract:
This paper compares and contrasts, for the first time, one- and two-component gelation systems that are direct structural analogues, and draws conclusions about the molecular recognition pathways that underpin fibrillar self-assembly. The new one-component systems comprise L-lysine-based dendritic headgroups covalently connected to an aliphatic diamine spacer chain via an amide bond. One-component gelators with different generations of headgroup (from first to third generation) and different-length spacer chains are reported. The self-assembly of these dendrimers in toluene was elucidated using thermal measurements, circular dichroism (CD) and NMR spectroscopies, scanning electron microscopy (SEM), and small-angle X-ray scattering (SAXS). The observations are compared with previous results for the analogous two-component gelation system in which the dendritic headgroups are bound to the aliphatic spacer chain noncovalently via acid-amine interactions. The one-component system is inherently a more effective gelator, partly as a consequence of the additional covalent amide groups that provide a new hydrogen bonding molecular recognition pathway, whereas the two-component analogue relies solely on intermolecular hydrogen bond interactions between the chiral dendritic headgroups. Furthermore, because these amide groups are important in the assembly process for the one-component system, the chiral information preset in the dendritic headgroups is not always transcribed into the nanoscale assembly, whereas for the two-component system, fiber formation is always accompanied by chiral ordering because the molecular recognition pathway is completely dependent on hydrogen bond interactions between well-organized chiral dendritic headgroups.
Abstract:
Why it is easier to cut with even the sharpest knife when 'pressing down and sliding' than when merely 'pressing down alone' is explained. A variety of cases of cutting where the blade and workpiece have different relative motions is analysed, and it is shown that the greater the 'slice/push ratio' ξ, given by (blade speed parallel to the cutting edge)/(blade speed perpendicular to the cutting edge), the lower the cutting forces. However, friction limits the reductions attainable at the highest ξ. The analysis is applied to the geometry of a wheel cutting device (delicatessen slicer), and experiments with a cheddar cheese and a salami using such an instrumented device confirm the general predictions. (C) 2004 Kluwer Academic Publishers.
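In symbols (notation assumed, since the abstract spells the ratio out in words):

```latex
% Slice/push ratio as defined in the abstract:
% v_parallel = blade speed parallel to the cutting edge,
% v_perp     = blade speed perpendicular to the cutting edge.
\xi \;=\; \frac{v_{\parallel}}{v_{\perp}}
```

Larger ξ means more 'slicing' for the same 'pressing'; the abstract's claim is that cutting forces fall as ξ increases, until friction limits the gain.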
Abstract:
The phase separation behaviour in aqueous mixtures of poly(methyl vinyl ether) and hydroxypropylcellulose has been studied by the cloud point method and by viscometric measurements. The miscibility of these blends in the solid state has been assessed by infrared spectroscopy, methanol vapour sorption experiments and scanning electron microscopy. The values of the Gibbs energy of mixing of the polymers and their blends with methanol, as well as with each other, were calculated. It was found that in the solid state the polymers interact with methanol very well, but the polymer-polymer interactions are unfavourable. Although in aqueous solutions the polymers exhibit some intermolecular interactions, their solid blends are not completely miscible. (C) 2005 Elsevier Ltd. All rights reserved.
Abstract:
Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness-of-fit for the contour and thus falls into the first class. The point of departure from previous models consists in marginalizing this contrast measure over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least-squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived: one of these alternatives is the empirical-information method. Tracking experiments demonstrate the robustness of pose estimates given by the MLR model, and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method. (c) 2005 Elsevier B.V. All rights reserved.
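The E/M structure described above can be caricatured with a one-dimensional toy example (a sketch of marginalized estimation in general, not the paper's MLR model): measurements along contour normals are either Gaussian inliers around the true pose offset or uniform clutter; the E-step forms posterior inlier responsibilities (the Bayes least-squares flavour of the estimate), and the M-step is a weighted least-squares pose update.

```python
# One-dimensional caricature of the E/M structure (generic sketch only).
import numpy as np

rng = np.random.default_rng(1)
n, true_shift, span = 200, 3.0, 20.0
inlier = rng.random(n) < 0.7
obs = np.where(inlier,
               true_shift + 0.5 * rng.standard_normal(n),  # inlier offsets
               rng.uniform(-span / 2, span / 2, n))         # uniform clutter

shift, sigma, w = 0.0, 2.0, 0.5                 # initial guesses
for _ in range(50):
    # E-step: posterior probability that each measurement is an inlier
    # (this is where the marginalization over clutter happens).
    g = (np.exp(-0.5 * ((obs - shift) / sigma) ** 2)
         / (sigma * np.sqrt(2 * np.pi)))
    r = w * g / (w * g + (1 - w) / span)
    # M-step: weighted least-squares update of the pose shift.
    shift = np.sum(r * obs) / np.sum(r)
    sigma = np.sqrt(np.sum(r * (obs - shift) ** 2) / np.sum(r)) + 1e-9
    w = r.mean()
print(f"estimated shift = {shift:.2f} (true value {true_shift})")
```

The contrast with 'strongest feature' methods is that no single measurement is ever selected; every candidate contributes according to its responsibility.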
Abstract:
Various methods of assessment have been applied to the One Dimensional Time to Explosion (ODTX) apparatus and experiments with the aim of allowing an estimate of the comparative violence of the explosion event to be made. The non-mechanical methods used were simple visual inspection, measuring the increase in the void volume of the anvils following an explosion, and measuring the velocity of the sound produced by the explosion over 1 metre. The mechanical methods used included monitoring piezoelectric devices inserted in the frame of the machine and measuring the rotational velocity of a rotating bar placed on top of the anvils after it had been displaced by the shock wave. This last method, which resembles the original Hopkinson bar experiments, seemed the easiest to apply and analyse, giving relative rankings of violence and the possibility of calculating a “detonation” pressure.
Abstract:
A method and an oligonucleotide compound for inhibiting replication of a nidovirus in virus-infected animal cells are disclosed. The compound (i) has a nuclease-resistant backbone, (ii) is capable of uptake by the infected cells, (iii) contains between 8 and 25 nucleotide bases, and (iv) has a sequence capable of disrupting base pairing between the transcriptional regulatory sequences in the 5′ leader region of the positive-strand viral genome and the negative-strand 3′ subgenomic region. In practicing the method, infected cells are exposed to the compound in an amount effective to inhibit viral replication.
Abstract:
This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations that deserves a dedicated future study. Furthermore, although the OA / OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurements analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison against OC (OA) urban data of all models at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data. 
The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations when compared with the low MNB and lower correlation at remote sites suggests that knowledge about the processes that govern aerosol processing, transport and removal, on top of their sources, is important at the remote stations. There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, the complexity is needed in models in order to distinguish between anthropogenic and natural OA as needed for climate mitigation, and to calculate the impact of OA on climate accurately.
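For reference, the mean normalized bias quoted above is presumably the standard definition (an assumption; M_i and O_i denote modeled and observed concentrations at station/time i):

```latex
% Standard mean-normalized-bias definition (assumed form):
\mathrm{MNB} \;=\; \frac{1}{N}\sum_{i=1}^{N}\frac{M_i - O_i}{O_i}
```

Under this definition, an MNB of −0.62 means the models sit, on average, 62% below the urban OC observations.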