999 results for Corresponding aldehydes
Abstract:
This paper presents observations of summertime anti-winds monitored under ideal conditions in the Lake Tekapo hydro-catchment situated in the central Southern Alps, New Zealand. Onset and cessation of anti-winds were observed to coincide with the change in phase of the surface limbs of thermally generated valley and mountain winds under settled anti-cyclonic conditions. Anti-winds were best developed in the early morning, before surface heating and the associated convective mixing of the valley atmosphere began to mask the boundaries between the surface-based limb of the mountain-valley wind and the corresponding anti-wind. By midday, the anti-valley wind exceeded the height of the surrounding ridgeline and became embedded in the topographically channeled gradient wind. The observations presented here have both theoretical and applied implications with regard to the development of thermally generated wind systems in deep alpine valleys, and their role in the dispersion of air pollution.
Abstract:
We conducted two psychophysical experiments to investigate the relationship between the processing mechanisms for exocentric distance and direction. In the first experiment, the task was to discriminate exocentric distances; in the second, the task was to discriminate exocentric directions. The individual effects of distance and direction on each task were dissociated by analyzing their corresponding psychophysical functions. Under stereoscopic viewing conditions, distance judgments of exocentric intervals were not affected by exocentric direction. However, direction judgments were influenced by the distance between the pair of stimuli. Therefore, the mechanism processing exocentric direction is dependent on exocentric distance, but the mechanism processing exocentric distance does not require exocentric direction measures. As a result, we suggest that exocentric distance and direction are hierarchically processed, with distance preceding direction. Alternatively, and more probably, a necessary condition for processing the exocentric direction between two stimuli may be to know the location of each of them.
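Illustrative only (the exact fitting procedure behind "analyzing their corresponding psychophysical functions" is not given in the abstract): a common way to summarize such discrimination data is to fit a cumulative-Gaussian psychometric function and read off the point of subjective equality and the discrimination threshold. The stimulus levels, response proportions, and function names below are assumptions for the sketch.

```python
# A generic psychometric-function fit, not the paper's analysis; data are synthetic.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """P("comparison judged larger") modeled as a cumulative Gaussian of the comparison level."""
    return norm.cdf(x, loc=mu, scale=sigma)

comparison = np.array([-6, -4, -2, 0, 2, 4, 6], dtype=float)   # e.g. cm of exocentric distance
p_larger   = np.array([0.05, 0.12, 0.30, 0.52, 0.74, 0.90, 0.97])

(mu, sigma), _ = curve_fit(psychometric, comparison, p_larger, p0=[0.0, 2.0])
print(f"PSE = {mu:.2f}, threshold (sigma) = {sigma:.2f}")
# Comparing fitted thresholds across conditions (e.g. different exocentric directions)
# is one way the effects of distance and direction could be dissociated.
```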
Abstract:
We consider a kinetic Ising model which represents a generic agent-based model for various types of socio-economic systems. We study the case of a finite (and not necessarily large) number of agents N, as well as the asymptotic case when the number of agents tends to infinity. The main ingredients are individual decision thresholds which are either fixed over time (corresponding to quenched disorder in the Ising model, leading to nonlinear deterministic dynamics which are generically non-ergodic) or which may change randomly over time (corresponding to annealed disorder, leading to ergodic dynamics). We address the question of how increasing the strength of annealed disorder relative to quenched disorder drives the system from non-ergodic behavior to ergodicity. Mathematically rigorous analysis provides an explicit and detailed picture for arbitrary realizations of the quenched initial thresholds, revealing an intriguing "jumpy" transition from non-ergodicity with many absorbing sets to ergodicity. For large N we find a critical strength of annealed randomness, above which the system becomes asymptotically ergodic. Our theoretical results suggest how to drive a system from an undesired socio-economic equilibrium (e.g., a high level of corruption) to a desirable one (a low level of corruption).
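The abstract describes the model only verbally, so the following is a minimal toy sketch, not the authors' exact dynamics: binary-opinion agents with individual thresholds, where the thresholds are either frozen (quenched) or occasionally redrawn (annealed), controlled by an assumed `annealed_strength` parameter.

```python
import numpy as np

def simulate(N=100, steps=2000, annealed_strength=0.0, seed=0):
    """Toy threshold dynamics loosely inspired by the abstract (not the authors' exact model).

    Agents hold opinions s_i in {-1, +1} and individual thresholds theta_i.
    An agent adopts +1 when the population average exceeds its threshold, -1 otherwise.
    Quenched disorder: thresholds stay fixed. Annealed disorder: with probability
    `annealed_strength`, an agent redraws its threshold before updating.
    """
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=N)
    theta = rng.uniform(-1, 1, size=N)           # quenched initial thresholds
    magnetization = []
    for _ in range(steps):
        i = rng.integers(N)                      # random sequential update
        if rng.random() < annealed_strength:     # annealed component of the disorder
            theta[i] = rng.uniform(-1, 1)
        s[i] = 1 if s.mean() > theta[i] else -1
        magnetization.append(s.mean())
    return np.array(magnetization)

# Purely quenched thresholds tend to freeze into an absorbing configuration,
# while a sufficiently strong annealed component keeps the dynamics mixing.
print(simulate(annealed_strength=0.0)[-5:])
print(simulate(annealed_strength=0.5)[-5:])
```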
Abstract:
We conduct a theoretical analysis to investigate the double diffusion-driven convective instability of three-dimensional fluid-saturated geological fault zones when they are heated uniformly from below. The fault zone is assumed to be more permeable than its surrounding rocks. In particular, we have derived exact analytical solutions for the total critical Rayleigh numbers of the double diffusion-driven convective flow. Using the corresponding total critical Rayleigh numbers, the double diffusion-driven convective instability of a fluid-saturated three-dimensional geological fault zone system has been investigated. The related theoretical analysis demonstrates that: (1) A relatively higher concentration of the chemical species at the top of the three-dimensional geological fault zone system can destabilize the convective flow of the system, while a relatively lower concentration of the chemical species at the top of the system can stabilize the convective flow. (2) The double diffusion-driven convective flow modes of the three-dimensional geological fault zone system are very close to each other; therefore, the system has a similar chance of picking up different double diffusion-driven convective flow modes, especially as the fault thickness-to-height ratio approaches 0. (3) The significant influence of chemical species diffusion on the convective instability of the three-dimensional geological fault zone system implies that seawater intrusion at the surface of the Earth is a potential mechanism for triggering convective flow in shallow three-dimensional geological fault zone systems.
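The paper's exact critical Rayleigh numbers for the 3-D fault-zone geometry are not reproduced in the abstract; the sketch below only shows the standard definitions of the porous-medium thermal and solutal Rayleigh numbers and a simple onset check against the classical Horton-Rogers-Lapwood value of 4π² for an infinite horizontal layer. All parameter values are placeholders.

```python
# Illustrative onset check only; the fault-zone critical value derived in the paper
# differs with geometry, and the "total" Rayleigh number below is one common convention.
import math

def thermal_rayleigh(rho0, g, beta_T, dT, K, H, mu, kappa):
    """Ra_T = rho0 * g * beta_T * dT * K * H / (mu * kappa) for a fluid-saturated porous layer."""
    return rho0 * g * beta_T * dT * K * H / (mu * kappa)

def solutal_rayleigh(rho0, g, beta_C, dC, K, H, mu, D):
    """Ra_C = rho0 * g * beta_C * dC * K * H / (mu * D), same form with the solute diffusivity."""
    return rho0 * g * beta_C * dC * K * H / (mu * D)

Ra_T = thermal_rayleigh(rho0=1000.0, g=9.81, beta_T=2.1e-4, dT=30.0,
                        K=1e-12, H=1000.0, mu=1e-3, kappa=1e-6)
Ra_C = solutal_rayleigh(rho0=1000.0, g=9.81, beta_C=7.0e-4, dC=0.02,
                        K=1e-12, H=1000.0, mu=1e-3, D=1e-9)

Ra_total = Ra_T + Ra_C          # simple additive convention for a "total" Rayleigh number
Ra_crit = 4 * math.pi ** 2      # ~39.5, classical value for an infinite horizontal porous layer
print(f"Ra_T={Ra_T:.1f}, Ra_C={Ra_C:.1f}, total={Ra_total:.1f}, convective={Ra_total > Ra_crit}")
```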
Abstract:
The herbivory activity of the bordered patch larvae (Chlosyne lacinia, Lepidoptera) on leaves of a Brazilian population of Tithonia diversifolia and the antifeedant potential of its leaf rinse extract were investigated. The caterpillars fed only on the adaxial face, where the density of glandular trichomes is very low, and avoided the abaxial face, which contains high levels of trichomes. Deterrent activity against the larvae was observed in leaf discs treated with leaf rinse extract at concentrations of 1-5% of fresh leaf weight. High-performance liquid chromatography (HPLC) analysis indicated that sesquiterpene lactones are the main constituents of the glandular trichomes. Dichloromethane rinse extracts of the leaves and inflorescences were chemically investigated, and 16 compounds were isolated and identified: 14 sesquiterpene lactones, a flavonoid and a diterpenoid. In this study, five sesquiterpene lactones are described for the first time in the genus, including two lactones, one of which has an unusual seco-guaianolide skeleton. Our findings indicate that the caterpillars avoid the sesquiterpene-lactone-rich glandular trichomes, and provide evidence for the antifeedant activity of the dichloromethane leaf rinse extract. In addition, a study of the seasonal variation of the main constituents of the leaf surface throughout a year demonstrated very low qualitative but very high quantitative variation. The highest level of the main metabolite, tagitinin C, was observed between September and October, and the lowest from March to June, the latter corresponding to the period of highest infestation by the larvae.
Abstract:
This paper examines the syntax of indirect objects (IO) in Brazilian Portuguese (BP). Adopting a comparative perspective, we propose that BP differs from European Portuguese (EP) in the grammatical encoding of IO. In EP ditransitive contexts, IO is found in two configurations - one projected by a (low) applicative head and another involving a lexical/true preposition. We propose that the former property is contingent upon the presence of dative Case marking: namely, the morpheme `a` that introduces IO (a-DP), whose corresponding clitic pronoun is `lhe/lhes`. In contrast, important changes in the pronominal system, coupled with the increase in the use of the preposition `para`, are taken as evidence for the loss of the low applicative construction in BP. Thus only the configuration with the lexical/true preposition is found in (Standard) BP. We argue that the innovative properties of IO in BP are due to the loss of the (3rd person) dative clitic and the preposition `a` as dative Case markers. Under this view, we further account for the realization of IO as a DP/weak pronoun, found in dialects of the central region of Brazil, which points to a similarity with the English Double Object Construction. Finally, we show that the connection between the morphological expression of the dative Case and the expression of parameters supports a view of syntactic change according to which parametric variation is determined in the lexicon, in terms of the formal features of functional heads.
Abstract:
Background: A survey of pathology reporting of breast cancer in Western Australia in 1989 highlighted the need for improvement. The current study documents (1) changes in pathology reporting from 1989 to 1999 and (2) changes in patterns of histopathological prognostic indicators for breast cancer following the introduction of mammographic screening in 1989. Methods: Data concerning all breast cancer cases reported in Western Australia in 1989, 1994 and 1999 were retrieved using the State Cancer Registry, the Hospital Morbidity data system, and pathology laboratory records. Results: Pathology reports improved in quality during the decade surveyed. For invasive carcinoma, tumour size was not recorded in 1.2% of pathology reports in 1999 compared with 16.1% in 1989 (P<0.001). Corresponding figures for other prognostic factors were: tumour grade 3.3% and 51.6% (P<0.001), tumour type 0.2% and 4.1% (P<0.001), vascular invasion 3.7% and 70.9% (P<0.001), and lymph node status 1.9% and 4.5% (P=0.023). In 1999, 5.9% of reports were not in a synoptic/checklist format, whereas all reports were descriptive in 1989 (P<0.001). For the population as a whole, the proportion of invasive carcinomas <1 cm was 20.9% in 1999 compared with 14.5% in 1989 (P<0.001); for tumours <2 cm the corresponding figures were 65.4% and 59.7% (P=0.013). In 1999, 30.5% of tumours were histologically well-differentiated compared with 10.6% in 1989 (P<0.001), and 61.7% were lymph node negative in 1999 compared with 57.1% in 1989 (P=0.006). Pure ductal carcinoma in situ (DCIS) constituted 10.9% and 7.9% of total cases of breast carcinoma in 1999 and 1989, respectively (P=0.01). Conclusions: The quality of pathology reporting improved markedly over the period, in parallel with the adoption of standardised synoptic pathology reports. By 1999, recording of important prognostic information was almost complete. The frequency of favourable prognostic factors generally increased over time, reflecting the expected effects of mammographic screening.
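The abstract quotes P values for the 1989 versus 1999 comparisons without stating the test or the denominators; purely to illustrate how such comparisons of proportions are typically computed, here is a generic two-proportion z-test with invented sample sizes.

```python
# Generic two-proportion z-test; the sample sizes below are made up and the paper's
# actual test statistic is not specified in the abstract.
from math import sqrt
from scipy.stats import norm

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test comparing two independent proportions (pooled variance)."""
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# e.g. tumour grade missing in 51.6% of reports in 1989 vs 3.3% in 1999
z, p = two_proportion_z(0.516, 700, 0.033, 900)   # 700 and 900 are hypothetical denominators
print(f"z = {z:.1f}, P = {p:.2e}")
```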
Abstract:
The main problem with current approaches to quantum computing is the difficulty of establishing and maintaining entanglement. A Topological Quantum Computer (TQC) aims to overcome this by using different physical processes that are topological in nature and which are less susceptible to disturbance by the environment. In a (2+1)-dimensional system, pseudoparticles called anyons have statistics that fall somewhere between bosons and fermions. The exchange of two anyons, an effect called braiding from knot theory, can occur in two different ways. The quantum states corresponding to the two elementary braids constitute a two-state system, allowing the definition of a computational basis. Quantum gates can be built up from patterns of braids, and for quantum computing it is essential that the operator describing the braiding, the R-matrix, be unitary. The physics of anyonic systems is governed by quantum groups, in particular the quasi-triangular Hopf algebras obtained from finite groups by the application of the Drinfeld quantum double construction. Their representation theory has been described in detail by Gould and Tsohantjis, and in this review article we relate the work of Gould to TQC schemes, particularly that of Kauffman.
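As a toy illustration of the braiding idea (not the Gould/Kauffman quantum-double construction reviewed in the paper), one can take the widely quoted Fibonacci-anyon R- and F-matrices, check that the two elementary braid generators are unitary, and compose them into a candidate gate.

```python
# Numerical check on the two-dimensional fusion space of three Fibonacci anyons;
# the matrices follow a common convention and are not taken from the paper itself.
import numpy as np

phi = (1 + np.sqrt(5)) / 2                                        # golden ratio
R = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])    # R-matrix (one common convention)
F = np.array([[1 / phi, 1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi]])                      # F-matrix; note F @ F = I

sigma1 = R                 # braid of anyons 1 and 2
sigma2 = F @ R @ F         # braid of anyons 2 and 3, written in the same basis

def is_unitary(U):
    return np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

print("sigma1 unitary:", is_unitary(sigma1))
print("sigma2 unitary:", is_unitary(sigma2))
print("braid relation s1 s2 s1 == s2 s1 s2:",
      np.allclose(sigma1 @ sigma2 @ sigma1, sigma2 @ sigma1 @ sigma2))

# A pattern (word) of elementary braids composes into a unitary gate:
gate = sigma1 @ sigma2 @ sigma1 @ sigma1 @ sigma2
print("composed braid word is unitary:", is_unitary(gate))
```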
Abstract:
Numerical methods are used to simulate the double-diffusion driven convective pore-fluid flow and rock alteration in three-dimensional fluid-saturated geological fault zones. The double diffusion is caused by the combination of a positive upward temperature gradient and a positive downward salinity concentration gradient within a three-dimensional fluid-saturated geological fault zone, which is assumed to be more permeable than its surrounding rocks. To ensure that the numerical solutions obtained are physically meaningful, the numerical method used in this study is validated against a benchmark problem for which the analytical solution for the critical Rayleigh number of the system is available. The theoretical value of the critical Rayleigh number of a three-dimensional fluid-saturated geological fault zone system can be used to judge whether or not the double-diffusion driven convective pore-fluid flow can take place within the system. After the possibility of triggering the double-diffusion driven convective pore-fluid flow is theoretically confirmed for the numerical model of a three-dimensional fluid-saturated geological fault zone system, the corresponding numerical solutions for the convective flow and temperature are directly coupled with a geochemical system. Through the numerical simulation of this coupled system of convective fluid flow, heat transfer, mass transport and chemical reactions, we have investigated the effect of the double-diffusion driven convective pore-fluid flow on rock alteration within the three-dimensional fluid-saturated geological fault zone system, where alteration is the direct consequence of mineral redistribution through dissolution, transport and precipitation.
Abstract:
Background and Purpose. There has been considerable debate about the use of predicted oxygen consumption to calculate pulmonary vascular resistance using the Fick principle. We therefore comparatively analyzed predicted oxygen consumption in infants and children in specific age groups, using different methods (formulas), as an attempt to better understand the usefulness and limitations of predictions. Methods and Results. Four models (LaFarge & Miettinen, Bergstra et al., Lindahl, and Lundell et al.) were used to predict oxygen consumption in 200 acyanotic patients with congenital cardiac defects aged 0-2.0, > 2.0-4.0, > 4.0-6.0, and > 6.0-8.75 years (median 2.04 years). Significant differences were observed between the age groups (P < .001) and between the methods (P < .001), and were not related to diagnoses. Differences between methods were more pronounced in the first age group (P < .01). In patients aged 0-2.0 years, the lowest values of oxygen consumption (corresponding to the highest estimates of pulmonary vascular resistance) were obtained with the method of Lindahl; above this age, they were obtained with any method except that of Lundell et al. Conclusions. Although measuring oxygen consumption is always preferable, a rational use of predictions, using different methods, may be of help in situations where measurements are definitely not possible.
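None of the four prediction formulas is reproduced in the abstract, so the sketch below only shows the downstream arithmetic: how a predicted VO2 enters the Fick calculation of pulmonary blood flow and hence pulmonary vascular resistance. All patient values are illustrative placeholders.

```python
# Fick-principle arithmetic only; the predicted VO2 would come from whichever of the
# four formulas is chosen, none of which is reproduced here.

def o2_content(hb_g_dl, sat_fraction):
    """Oxygen content in mL O2 per litre of blood (dissolved O2 neglected)."""
    return 1.36 * hb_g_dl * sat_fraction * 10.0

def pvr_wood_units(vo2_ml_min, hb_g_dl, sat_pv, sat_pa, mpap_mmhg, mlap_mmhg):
    """Fick principle: Qp = VO2 / (CpvO2 - CpaO2); PVR = (mPAP - mLAP) / Qp."""
    avdo2 = o2_content(hb_g_dl, sat_pv) - o2_content(hb_g_dl, sat_pa)
    qp_l_min = vo2_ml_min / avdo2
    return (mpap_mmhg - mlap_mmhg) / qp_l_min, qp_l_min

pvr, qp = pvr_wood_units(vo2_ml_min=130.0,   # placeholder predicted VO2
                         hb_g_dl=12.0, sat_pv=0.98, sat_pa=0.80,
                         mpap_mmhg=28.0, mlap_mmhg=8.0)
print(f"Qp = {qp:.2f} L/min, PVR = {pvr:.2f} Wood units")
# A lower predicted VO2 gives a lower Qp and hence a higher PVR estimate, which is why
# the choice of prediction formula matters most in the youngest patients.
```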
Abstract:
Objectives: Lung hyperinflation may be assessed by computed tomography (CT). As shown for patients with emphysema, however, CT image reconstruction affects quantification of hyperinflation. We studied the impact of reconstruction parameters on hyperinflation measurements in mechanically ventilated (MV) patients. Design: Observational analysis. Setting: A university hospital-affiliated research unit. Patients: The patients were MV patients with injured (n = 5) or normal lungs (n = 6), and spontaneously breathing patients (n = 5). Interventions: None. Measurements and results: Eight image series involving 3, 5, 7, and 10 mm slices and standard and sharp filters were reconstructed from identical CT raw data. Hyperinflated (V-hyper), normally aerated (V-normal), poorly aerated (V-poor), and nonaerated (V-non) volumes were calculated by densitometry as percentages of total lung volume (V-total). V-hyper obtained with the sharp filter systematically exceeded that with the standard filter, showing a median (interquartile range) increment of 138 (62-272) ml, corresponding to approximately 4% of V-total. In contrast, sharp filtering minimally affected the other subvolumes (V-normal, V-poor, V-non, and V-total). Decreasing slice thickness also increased V-hyper significantly. When changing from 10 to 3 mm thickness, V-hyper increased by a median value of 107 (49-252) ml, in parallel with a small and inconsistent increment in V-non of 12 (7-16) ml. Conclusions: Reconstruction parameters significantly affect quantitative CT assessment of V-hyper in MV patients. Our observations suggest that sharp filters are inappropriate for this purpose. Thin slices combined with standard filters and more appropriate thresholds (e.g., -950 HU in normal lungs) might improve the detection of V-hyper. Different studies on V-hyper can only be compared if identical reconstruction parameters are used.
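A minimal densitometry sketch, assuming the commonly used Hounsfield-unit cut-offs (hyperinflated below -900 HU, normally aerated -900 to -500 HU, poorly aerated -500 to -100 HU, nonaerated above -100 HU); the abstract itself suggests a stricter -950 HU cut-off may be more appropriate in normal lungs. The voxel data are synthetic.

```python
import numpy as np

def aeration_fractions(hu, hyper_cutoff=-900):
    """Share of voxels in each aeration compartment, from Hounsfield-unit values."""
    hu = np.asarray(hu)
    return {
        "V_hyper":  float(np.mean(hu < hyper_cutoff)),
        "V_normal": float(np.mean((hu >= hyper_cutoff) & (hu < -500))),
        "V_poor":   float(np.mean((hu >= -500) & (hu < -100))),
        "V_non":    float(np.mean(hu >= -100)),
    }

rng = np.random.default_rng(1)
voxels = rng.normal(loc=-650, scale=200, size=100_000)   # synthetic "lung" voxels
print(aeration_fractions(voxels))                        # default -900 HU cut-off
print(aeration_fractions(voxels, hyper_cutoff=-950))     # stricter cut-off gives a smaller V_hyper
```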
Abstract:
OBJECTIVE. Coronary MDCT angiography has been shown to be an accurate noninvasive tool for the diagnosis of obstructive coronary artery disease (CAD). Its sensitivity and negative predictive value for diagnosing percentage of stenosis are unsurpassed compared with those of other noninvasive testing methods. However, in its current form, it provides no information regarding the physiologic impact of CAD and is a poor predictor of myocardial ischemia. CORE320 is a multicenter, multinational diagnostic study with the primary objective of evaluating the diagnostic accuracy of 320-MDCT for detecting coronary artery luminal stenosis and corresponding myocardial perfusion deficits in patients with suspected CAD, compared with the reference standard of conventional coronary angiography and SPECT myocardial perfusion imaging. CONCLUSION. We aim to describe the CT acquisition, reconstruction, and analysis methods of the CORE320 study.
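Not CORE320 data: a small helper showing how per-patient accuracy figures such as the sensitivity and negative predictive value mentioned above fall out of a 2x2 table against the reference standard; the counts are invented.

```python
# Standard diagnostic-accuracy metrics from a 2x2 table; all counts are hypothetical.
def diagnostic_accuracy(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# hypothetical counts: combined CTA + CT perfusion result vs invasive angiography + SPECT
print(diagnostic_accuracy(tp=88, fp=14, fn=7, tn=91))
```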
Abstract:
Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements made on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. Using this method, a sequence of linear programming problems, which allows constraints to be imposed, is solved. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique for monitoring lung aeration, including the possibility of imaging a pneumothorax.
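A deliberately simplified sketch of the "sequence of linear programming problems" idea on a toy one-dimensional forward model, not the paper's 3-D finite-element topology-optimization formulation: each iteration linearizes the forward model and solves an L1-residual linear program with bound constraints on the conductivity update.

```python
# Toy sequential-linear-programming reconstruction; the forward model (a chain of
# resistive segments with potentials measured along it) stands in for the FEM solve.
import numpy as np
from scipy.optimize import linprog

def forward(sigma, current=1.0):
    """Node potentials of a series chain: V_k = I * sum_{j<=k} 1/sigma_j."""
    return current * np.cumsum(1.0 / sigma)

def jacobian(sigma, current=1.0):
    """dV_k/dsigma_j = -I / sigma_j**2 for j <= k, else 0 (lower-triangular)."""
    n = sigma.size
    return np.tril(np.ones((n, n))) * (-current / sigma**2)

def slp_reconstruct(v_meas, sigma0, iters=15, step=0.2, sigma_min=1e-3):
    sigma = sigma0.copy()
    n = sigma.size
    for _ in range(iters):
        r = v_meas - forward(sigma)
        J = jacobian(sigma)
        # variables: [d_1..d_n, t_1..t_n]; minimize sum(t) subject to |J d - r| <= t
        c = np.concatenate([np.zeros(n), np.ones(n)])
        A_ub = np.block([[J, -np.eye(n)], [-J, -np.eye(n)]])
        b_ub = np.concatenate([r, -r])
        bounds = [(max(-step, sigma_min - s), step) for s in sigma] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        sigma = sigma + res.x[:n]
    return sigma

true_sigma = np.array([1.0, 1.0, 3.0, 3.0, 1.0])   # a "conductive inclusion" in the middle
v_meas = forward(true_sigma)
# the iterates should approach the true profile [1, 1, 3, 3, 1]
print(np.round(slp_reconstruct(v_meas, sigma0=np.ones(5)), 3))
```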
Abstract:
We describe the mechanism of ribonuclease inhibition by ribonuclease inhibitor, a protein built of leucine-rich repeats, based on the crystal structure of the complex between the inhibitor and ribonuclease A. The structure was determined by molecular replacement and refined to an R_cryst of 19.4% at 2.5 Å resolution. Ribonuclease A binds to the concave region of the inhibitor protein comprising its parallel beta-sheet and loops. The inhibitor covers the ribonuclease active site and directly contacts several active-site residues. The inhibitor only partially mimics the RNase-nucleotide interaction and does not utilize the P1 phosphate-binding pocket of ribonuclease A, where a sulfate ion remains bound. The 2550 Å² of accessible surface area buried upon complex formation may be one of the major contributors to the extremely tight association (K_i = 5.9 × 10⁻¹⁴ M). The interaction is predominantly electrostatic; there is high chemical complementarity, with 18 putative hydrogen bonds and salt links, but the shape complementarity is lower than in most other protein-protein complexes. Ribonuclease inhibitor changes its conformation upon complex formation; the conformational change is unusual in that it is a plastic reorganization of the entire structure without any obvious hinge, and it reflects the conformational flexibility of the inhibitor's structure. There is good agreement between the crystal structure and other biochemical studies of the interaction. The structure suggests that the conformational flexibility of RI and an unusually large contact area that compensates for a lower degree of complementarity may be the principal reasons for the ability of RI to potently inhibit diverse ribonucleases. However, the inhibition is lost with amphibian ribonucleases, which have substituted most residues corresponding to inhibitor-binding residues in RNase A, and with bovine seminal ribonuclease, which prevents inhibitor binding by forming a dimer.
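A quick back-of-envelope conversion of the reported K_i into a standard binding free energy at 25 °C, via ΔG° = RT ln K_d, puts "extremely tight association" in energetic terms.

```python
# Convert the abstract's dissociation constant to a standard binding free energy.
import math

R = 8.314          # J / (mol K)
T = 298.15         # K (25 degrees C)
Ki = 5.9e-14       # M, from the abstract

dG_kJ = R * T * math.log(Ki) / 1000.0
print(f"dG = {dG_kJ:.1f} kJ/mol (~{dG_kJ / 4.184:.1f} kcal/mol)")
# Roughly -75 kJ/mol (about -18 kcal/mol): a femtomolar affinity, among the tightest
# protein-protein interactions reported.
```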
Abstract:
Aldehyde dehydrogenases (ALDHs) catabolize toxic aldehydes and process the vitamin A-derived retinaldehyde into retinoic acid (RA), a small diffusible molecule and a pivotal chordate morphogen. In this study, we combine phylogenetic, structural, genomic, and developmental gene expression analyses to examine the evolutionary origins of ALDH substrate preference. Structural modeling reveals that processing of small aldehydes, such as acetaldehyde, by ALDH2, versus large aldehydes, including retinaldehyde, by ALDH1A is associated with small versus large substrate entry channels (SECs), respectively. Moreover, we show that metazoan ALDH1s and ALDH2s are members of a single ALDH1/2 clade and that during evolution, eukaryote ALDH1/2s often switched between large and small SECs after gene duplication, transforming constricted channels into wide-open ones and vice versa. Ancestral sequence reconstructions suggest that during the evolutionary emergence of RA signaling, the ancestral, narrow-channeled metazoan ALDH1/2 gave rise to large ALDH1 channels capable of accommodating bulky aldehydes, such as retinaldehyde, supporting the view that retinoid-dependent signaling arose from ancestral cellular detoxification mechanisms. Our analyses also indicate that, on a more restricted evolutionary scale, ALDH1 duplicates from invertebrate chordates (amphioxus and ascidian tunicates) underwent switches to smaller and narrower SECs. When combined with alterations in gene expression, these switches led to neofunctionalization from ALDH1-like roles in embryonic patterning to systemic, ALDH2-like roles, suggesting functional shifts from signaling to detoxification.