176 results for Simple methods
Abstract:
Dispersal, the displacement between an individual's birthplace and that of its offspring, is of great importance in population biology, behavioural ecology and conservation; however, obtaining direct estimates from field data on natural populations can be problematic. The prickly forest skink, Gnypetoscincus queenslandiae, is a skink endemic to the rainforest of the wet tropics of Australia. Because of its log-dwelling habits and lack of definite nesting sites, a demographic estimate of dispersal distance is difficult to obtain. Neighbourhood size, defined as 4πDσ² (where D is the population density and σ² the mean axial squared parent-offspring dispersal rate), dispersal and density were estimated for this species both directly, using mark-recapture data, and indirectly, using microsatellite data, on lizards captured at a local geographical scale of 3 ha. Mark-recapture data gave a dispersal rate of 843 m²/generation (assuming a generation time of 6.5 years), a time-scaled density of 13 635 individuals·generation/km² and, hence, a neighbourhood size of 144 individuals. A genetic method based on the multilocus (10 loci) microsatellite genotypes of individuals and their geographical locations indicated a significant isolation-by-distance pattern and gave a neighbourhood size of 69 individuals, with a 95% confidence interval of 48-184. This translates into a dispersal rate of 404 m²/generation when using the mark-recapture density estimate, or a time-scaled population density of 6520 individuals·generation/km² when using the mark-recapture dispersal rate estimate. The relationship between the two categories of neighbourhood size, dispersal and density estimates, and the reasons for any disparities, are discussed.
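As a quick consistency check, the relation Nb = 4πDσ² ties the three quantities in this abstract together; the short sketch below verifies the quoted figures (the unit conversion from km² to m² is ours):

```python
import math

def neighbourhood_size(density_per_km2, sigma2_m2):
    """Nb = 4 * pi * D * sigma^2, converting D to individuals per m^2."""
    d_per_m2 = density_per_km2 / 1e6   # 1 km^2 = 1e6 m^2
    return 4 * math.pi * d_per_m2 * sigma2_m2

# Mark-recapture estimates quoted in the abstract:
nb_demographic = neighbourhood_size(13635, 843)      # about 144 individuals

# Genetic Nb = 69 combined with the mark-recapture density recovers sigma^2:
sigma2_genetic = 69 / (4 * math.pi * 13635 / 1e6)    # roughly 400 m^2/generation
```

The small discrepancy between the recovered σ² and the quoted 404 m²/generation is rounding in the published figures.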
Abstract:
This paper presents a comparison of surface diffusivities of hydrocarbons in activated carbon. The surface diffusivities are obtained from the analysis of kinetic data collected using three different kinetic methods: the constant molar flow, the differential adsorption bed and the differential permeation methods. In general, the values of surface diffusivity obtained by these methods agree with each other, and the surface diffusivity is found to increase very rapidly with loading. Such a fast increase cannot be accounted for by the thermodynamic Darken factor, and surface heterogeneity only partially accounts for the fast rise of surface diffusivity with loading. Surface diffusivities of methane, ethane, propane, n-butane, n-hexane, benzene and ethanol on activated carbon are reported.
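For context, the thermodynamic Darken factor is the logarithmic slope of the isotherm, d ln P / d ln C. The sketch below uses the textbook Langmuir case, where it reduces to 1/(1 − θ); this standard form is shown for illustration only and is not the authors' heterogeneous-surface model:

```python
def darken_factor_langmuir(theta):
    """Thermodynamic Darken factor d(ln P)/d(ln C) for a Langmuir isotherm,
    theta = b*P / (1 + b*P), which reduces to 1 / (1 - theta)."""
    return 1.0 / (1.0 - theta)

# Even near saturation the Langmuir Darken factor grows only modestly,
# e.g. a factor of 10 at theta = 0.9 -- too weak, on its own, to explain
# a very fast rise of surface diffusivity with loading.
factors = [darken_factor_langmuir(t) for t in (0.1, 0.5, 0.9)]
```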
Abstract:
Background: Adrenaline is localized to specific regions of the central nervous system (CNS), but its role therein is unclear because of a lack of suitable pharmacologic agents. Ideally, a chemical is required that crosses the blood-brain barrier, potently inhibits the adrenaline-synthesizing enzyme PNMT, and does not affect other catecholamine processes. Currently available PNMT inhibitors do not meet these criteria. We aim to produce potent, selective, and CNS-active PNMT inhibitors by structure-based design methods. The first step is the structure determination of PNMT. Results: We have solved the crystal structure of human PNMT complexed with a cofactor product and a submicromolar inhibitor at a resolution of 2.4 Å. The structure reveals a highly decorated methyltransferase fold, with an active site protected from solvent by an extensive cover formed from several discrete structural motifs. The structure of PNMT shows that the inhibitor interacts with the enzyme in a different mode from the (modeled) substrate noradrenaline. Specifically, the positions and orientations of the amines are not equivalent. Conclusions: An unexpected finding is that the structure of PNMT provides independent evidence of both backward evolution and fold recruitment in the evolution of a complex enzyme from a simple fold. The proposed evolutionary pathway implies that adrenaline, the product of PNMT catalysis, is a relative newcomer in the catecholamine family. The PNMT structure reported here enables the design of potent and selective inhibitors with which to characterize the role of adrenaline in the CNS. Such chemical probes could potentially be useful as novel therapeutics.
Abstract:
This article describes a new test method for assessing the severity of environmental stress cracking of biomedical polyurethanes in a manner that minimizes the degree of subjectivity involved. The effect of applied strain and acetone pre-treatment on the degradation of Pellethane 2363 80A and Pellethane 2363 55D polyurethanes under in vitro and in vivo conditions is studied. The results are presented using a magnification-weighted image rating system that allows semi-quantitative rating of degradation based on the distribution and severity of surface damage. Devices for applying controlled strain to both flat-sheet and tubing samples are described. The new rating system consistently discriminated between the effects of acetone pre-treatment, strain and exposure time in both in vitro and in vivo experiments. As expected, P80A underwent considerable stress cracking compared with P55D. P80A produced similar stress-crack ratings in both in vivo and in vitro experiments; however, P55D performed worse under in vitro conditions than in vivo. This result indicates that care must be taken when interpreting in vitro results in the absence of in vivo data. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
A series of metal-matrix composites was formed by extrusion freeform fabrication of a sinterable aluminum alloy in combination with silicon carbide particles and whiskers, carbon fibers, alumina particles, and hollow flyash cenospheres. Silicon carbide particles were the most successful, in that the composites retained high density with up to 20 vol% reinforcement and the strength approximately doubled relative to that of the metal matrix alone. Comparison with simple models suggests that this unexpectedly high degree of reinforcement can be attributed to the concentration of small silicon carbide particles around the larger metal-powder particles. This fabrication method also allows composites to be formed with hollow spheres that cannot be formed by other powder or melt methods.
Abstract:
In situ gelatin zymography is a simple technique providing valuable information about the cellular and tissue localization of gelatinases. Until recently, the use of this technique has been confined to soft, relatively homogeneous tissue. In this report, in situ zymography has been utilized to assess the sub-lamellar location of gelatinases in the hard, semi-keratinized epidermal layer and the adjacent soft connective-tissue matrix of the dermis of the equine hoof. We show that alterations in the orientation at which the tissue is dipped into and withdrawn from the emulsion cause profound alterations in emulsion thickness. Microscopic variations in the surface topography of frozen tissue sections also influence emulsion thickness, making interpretation of the results difficult. Given these results, researchers must be aware that zymographic analysis may be influenced by physical tissue parameters in addition to the suspected gelatinase activity. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Qualitative data analysis (QDA) is often a time-consuming and laborious process usually involving the management of large quantities of textual data. Recently developed computer programs offer great advances in the efficiency of the processes of QDA. In this paper we report on an innovative use of a combination of extant computer software technologies to further enhance and simplify QDA. Used in appropriate circumstances, we believe that this innovation greatly enhances the speed with which theoretical and descriptive ideas can be abstracted from rich, complex, and chaotic qualitative data. © 2001 Human Sciences Press, Inc.
Abstract:
Observations of accelerating seismic activity prior to large earthquakes in natural fault systems have raised hopes for intermediate-term earthquake forecasting. If this phenomenon does exist, then what causes it? Recent theoretical work suggests that the accelerating seismic release sequence is a symptom of increasing long-wavelength stress correlation in the fault region. A more traditional explanation, based on Reid's elastic rebound theory, argues that an accelerating sequence of seismic energy release could be a consequence of increasing stress in a fault system whose stress moment release is dominated by large events. Both of these theories are examined using two discrete models of seismicity: a Burridge-Knopoff block-slider model and an elastic continuum-based model. Both models display an accelerating release of seismic energy prior to large simulated earthquakes. In both models there is a correlation between the rate of seismic energy release and both the total root-mean-squared stress and the level of long-wavelength stress correlation. Furthermore, both models exhibit a systematic increase in the number of large events at high stress and high long-wavelength stress correlation levels. These results suggest that either explanation is plausible for the accelerating moment release in the models examined. A statistical model based on the Burridge-Knopoff block-slider is constructed which indicates that stress alone is sufficient to produce an accelerating release of seismic energy prior to a large earthquake.
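The block-slider picture can be caricatured in a few lines of code. The following is a dissipative stick-slip toy of our own devising (uniform loading, nearest-neighbour stress transfer), not the models actually used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                  # blocks along the fault
stress = rng.uniform(0.0, 1.0, n)        # initial stress on each block
threshold, alpha = 1.0, 0.2              # failure stress; neighbour coupling

events = []                              # (time step, event size)
for t in range(5000):
    stress += 0.001                      # slow uniform tectonic loading
    size = 0
    while np.any(stress >= threshold):   # cascade: a slipping block loads
        for i in np.where(stress >= threshold)[0]:   # its neighbours
            drop, stress[i] = stress[i], 0.0
            if i > 0:
                stress[i - 1] += alpha * drop
            if i < n - 1:
                stress[i + 1] += alpha * drop
            size += 1
    if size:
        events.append((t, size))
```

Because 2α < 1 the redistribution is dissipative, so every cascade terminates; event sizes span small slips to system-wide ruptures.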
Abstract:
Reaching out to grasp an object (prehension) is a deceptively elegant and skilled behavior. The movement prior to object contact can be described as having two components [1]: the movement of the hand to an appropriate location for gripping the object, the transport component, and the opening and closing of the aperture between the fingers as they prepare to grip the target, the grasp component. The grasp component is sensitive to the size of the object, so that a larger grasp aperture is formed for wider objects [1]; the maximum grasp aperture (MGA) is a little wider than the width of the target object and occurs later in the movement for larger objects [1, 2]. We present a simple model that can account for the temporal relationship between the transport and grasp components. We report the results of an experiment providing empirical support for our rule of thumb. The model provides a simple, but plausible, account of a neural control strategy that has been the center of debate over the last two decades.
Abstract:
Low-micromolar concentrations of sulfite, thiosulfate and sulfide, present in synthetic wastewater or anaerobic digester effluent, were quantified by means of derivatization with monobromobimane, followed by HPLC separation with fluorescence detection. The concentration of elemental sulfur was determined, after its extraction with chloroform from the derivatized sample, by HPLC with UV detection. Recoveries of sulfide (both matrices), and of thiosulfate and sulfite (synthetic wastewater) were between 98 and 103%. The in-run RSDs on separate derivatizations were 13 and 19% for sulfite (two tests), between 1.5 and 6.6% for thiosulfate (two tests) and between 4.1 and 7.7% for sulfide (three tests). Response factors for derivatives of sulfide and thiosulfate, but not sulfite, were steady over a 13-month period during which 730 samples were analysed. Dithionate and tetrathionate did not seem to be detectable with this method. The distinctness of the elemental sulfur and the derivatizing-agent peaks was improved considerably by detecting elution at 297 instead of 263 nm. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Application of novel analytical and investigative methods such as fluorescence in situ hybridization, confocal laser scanning microscopy (CLSM), microelectrodes and advanced numerical simulation has led to new insights into micro- and macroscopic processes in bioreactors. However, the question is still open whether these new findings and the subsequent gain of knowledge are of significant practical relevance and, if so, where and how. To find suitable answers, it is necessary for engineers to know what can be expected by applying these modern analytical tools. Similarly, scientists could benefit significantly from an intensive dialogue with engineers in order to find out about practical problems and conditions existing in wastewater treatment systems. In this paper, an attempt is made to help bridge the gap between science and engineering in biological wastewater treatment. We provide an overview of recently developed methods in microbiology and in mathematical modeling and numerical simulation. A questionnaire is presented which may help generate a platform from which further technical and scientific developments can be accomplished. Both the paper and the questionnaire are aimed at encouraging scientists and engineers to enter into an intensive, mutually beneficial dialogue. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
This note gives a theory of state transition matrices for linear systems of fuzzy differential equations. This is used to give a fuzzy version of the classical variation-of-constants formula. A simple example of a time-independent control system is used to illustrate the methods. While problems similar to the crisp case arise for time-dependent systems, in the time-independent case the calculations reduce to elementary eigenvalue-eigenvector problems. In particular, for nonnegative or nonpositive matrices, the problems at each level set can easily be solved in MATLAB to give the level sets of the fuzzy solution. (C) 2002 Elsevier Science B.V. All rights reserved.
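A minimal numerical sketch of the time-independent case is given below (numpy standing in for the paper's MATLAB; the nonnegative matrix and the level-set endpoints are invented for illustration):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # nonnegative, symmetric example matrix

w, V = np.linalg.eigh(A)          # elementary eigenvalue-eigenvector step

def transition(t):
    """State transition matrix exp(A t) built from the eigendecomposition."""
    return V @ np.diag(np.exp(w * t)) @ V.T

# For a nonnegative A, exp(A t) is entrywise nonnegative, so the endpoint
# vectors of an alpha-level set propagate through the crisp flow and stay
# correctly ordered:
lo0 = np.array([0.9, -0.1])       # lower endpoints at some alpha level
hi0 = np.array([1.1, 0.1])        # upper endpoints
Phi = transition(1.0)
lo_t, hi_t = Phi @ lo0, Phi @ hi0
```

For this particular A, exp(A t) = [[cosh t, sinh t], [sinh t, cosh t]], which the eigendecomposition reproduces.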
Abstract:
The problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem. The multiobjective function minimises a combination of boundary length, area and failed representation of the biological attributes we are trying to conserve. The task is to reserve a subset of sites that best meet this objective. We use data on the distribution of habitats in the Northern Territory, Australia, to show how simulated annealing and a greedy heuristic algorithm can be used to generate good solutions to such large reserve design problems, and to compare the effectiveness of these methods.
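The simulated-annealing side of such a comparison can be sketched on a toy grid. The objective weights, habitat labels and cooling schedule below are all invented for illustration and are not the Northern Territory data or the authors' parameterization:

```python
import math
import random

random.seed(1)

N = 10                                    # N x N grid of candidate sites
sites = [(i, j) for i in range(N) for j in range(N)]
habitat = {s: random.randrange(3) for s in sites}
TARGETS = {0, 1, 2}                       # every habitat must be represented

def objective(sel, w_boundary=1.0, w_area=0.1, w_miss=50.0):
    """Boundary length + area + heavy penalty for unrepresented habitats."""
    boundary = sum(1 for (i, j) in sel
                   for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
                   if nb not in sel)
    missing = len(TARGETS - {habitat[s] for s in sel})
    return w_boundary * boundary + w_area * len(sel) + w_miss * missing

sel = set(random.sample(sites, 20))       # random initial reserve
cost, T = objective(sel), 10.0
for _ in range(20000):
    trial = sel ^ {random.choice(sites)}  # toggle one site in or out
    c = objective(trial)
    if c < cost or random.random() < math.exp(-(c - cost) / T):
        sel, cost = trial, c              # accept better (or, rarely, worse)
    T *= 0.9997                           # geometric cooling
```

The boundary-length term drives the selected sites to clump into spatially cohesive blocks, which is the behaviour the reserve-design objective is meant to reward.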