957 results for first-order paraconsistent logic
Abstract:
Artificial diagenesis of the intra-crystalline proteins isolated from Patella vulgata was induced by isothermal heating at 140 °C, 110 °C and 80 °C. Protein breakdown was quantified for multiple amino acids, measuring the extent of peptide bond hydrolysis, amino acid racemisation and decomposition. The patterns of diagenesis are complex; therefore the kinetic parameters of the main reactions were estimated by two different methods: 1) a well-established approach based on fitting mathematical expressions to the experimental data, e.g. first-order rate equations for hydrolysis and power-transformed first-order rate equations for racemisation; and 2) an alternative model-free approach, developed by estimating a “scaling” factor for the independent variable (time) that produces the best alignment of the experimental data. This method allows the calculation of relative reaction rates for the different temperatures of isothermal heating. High-temperature data were compared with the extent of degradation detected in sub-fossil Patella specimens of known age, and we evaluated the ability of kinetic experiments to mimic diagenesis at burial temperature. The results highlighted a difference between patterns of degradation at low and high temperature; we therefore recommend caution when extrapolating protein breakdown rates to low burial temperatures for geochronological purposes when relying solely on kinetic data.
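The first of the two approaches described above, fitting a first-order rate equation, can be sketched by log-linear least squares. The data below are synthetic (the rate constant and time points are illustrative placeholders, not values from the study):

```python
import math

def fit_first_order_rate(times, intact_fraction):
    """Fit ln(X) = -k*t by least squares through the origin, where X is
    the fraction of unhydrolysed peptide bonds remaining at time t."""
    num = sum(t * math.log(x) for t, x in zip(times, intact_fraction))
    den = sum(t * t for t in times)
    return -num / den  # estimated rate constant k

# synthetic isothermal-heating data generated with k = 0.05 per hour
k_true = 0.05
times = [0.0, 10.0, 20.0, 40.0, 80.0]
fractions = [math.exp(-k_true * t) for t in times]

k_est = fit_first_order_rate(times, fractions)
```

With noisy experimental data the same slope estimate would be obtained from a standard linear regression of ln(X) on t.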
Abstract:
Natural mineral aerosol (dust) is an active component of the climate system and plays multiple roles in mediating physical and biogeochemical exchanges between the atmosphere, land surface and ocean. Changes in the amount of dust in the atmosphere are caused both by changes in climate (precipitation, wind strength, regional moisture balance) and changes in the extent of dust sources caused by either anthropogenic or climatically induced changes in vegetation cover. Models of the global dust cycle take into account the physical controls on dust deflation from prescribed source areas (based largely on soil wetness and vegetation cover thresholds), dust transport within the atmospheric column, and dust deposition through sedimentation and scavenging by precipitation. These models successfully reproduce the first-order spatial and temporal patterns in atmospheric dust loading under modern conditions. Atmospheric dust loading was as much as an order-of-magnitude larger than today during the last glacial maximum (LGM). While the observed increase in emissions from northern Africa can be explained solely in terms of climate changes (colder, drier and windier glacial climates), increased emissions from other regions appear to have been largely a response to climatically induced changes in vegetation cover and hence in the extent of dust source areas. Model experiments suggest that the increased dust loading in tropical regions had an effect on radiative forcing comparable to that of low glacial CO2 levels. Changes in land-use are already increasing the dust loading of the atmosphere. However, simulations show that anthropogenically forced climate changes substantially reduce the extent and productivity of natural dust sources. 
Positive feedbacks initiated by a reduction of dust emissions from natural source areas on both radiative forcing and atmospheric CO2 could substantially mitigate the impacts of land-use changes, and need to be considered in climate change assessments.
Abstract:
Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. 
The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.
Abstract:
Many communication signal processing applications involve modelling and inverting complex-valued (CV) Hammerstein systems. We develop a new CV B-spline neural network approach for efficient identification of the CV Hammerstein system and effective inversion of the estimated CV Hammerstein model. Specifically, the CV nonlinear static function in the Hammerstein system is represented using the tensor product of two univariate B-spline neural networks. An efficient alternating least squares estimation method is adopted to identify the CV linear dynamic model’s coefficients and the CV B-spline neural network’s weights; this yields closed-form solutions for both, and the estimation process is guaranteed to converge very fast to a unique minimum solution. Furthermore, an accurate inversion of the CV Hammerstein system can readily be obtained from the estimated model. In particular, the inversion of the CV nonlinear static function in the Hammerstein system can be calculated effectively using a Gauss–Newton algorithm, which naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. The effectiveness of our approach is demonstrated in an application to the equalisation of Hammerstein channels.
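The B-spline curve and first-order derivative recursions mentioned above can be illustrated for a real-valued univariate basis (the CV tensor-product construction composes two such bases); this is a minimal Cox–de Boor sketch, not the authors' implementation:

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d = knots[i + p] - knots[i]
    if d > 0.0:
        left = (t - knots[i]) / d * bspline_basis(i, p - 1, t, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0.0:
        right = (knots[i + p + 1] - t) / d * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

def bspline_basis_deriv(i, p, t, knots):
    """First-derivative recursion:
    B'_{i,p} = p * (B_{i,p-1}/(t_{i+p}-t_i) - B_{i+1,p-1}/(t_{i+p+1}-t_{i+1}))."""
    a = b = 0.0
    d = knots[i + p] - knots[i]
    if d > 0.0:
        a = bspline_basis(i, p - 1, t, knots) / d
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0.0:
        b = bspline_basis(i + 1, p - 1, t, knots) / d
    return p * (a - b)

# clamped quadratic spline: 5 basis functions on [0, 3]
knots = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0]
p = 2
n = len(knots) - p - 1
t = 1.5
vals = [bspline_basis(i, p, t, knots) for i in range(n)]
derivs = [bspline_basis_deriv(i, p, t, knots) for i in range(n)]
```

Partition of unity (the basis values summing to one, their derivatives to zero) is a convenient sanity check on both recursions.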
Abstract:
The work investigates a type of wireless power system whose analysis yields the construction of a prototype modelled as a singular technological artifact. Exploration of this artifact forms the intellectual basis not only for its prototypical forms, but also for variant forms not yet discovered. Through this process, the role of the artifact is greatly clarified, along with its most suitable application given the constraints of the delivery problem and optimization strategies for improving it. In order to improve maturity and contribute to a body of knowledge, this document proposes research on efficient inductive transfer in the mid-field region for the purpose of removing wired connections and electrical contacts. While this description suffices to state the purpose of the work, it does not convey the compromises of having to redraw the lines of demarcation between the near and far field in the traditional treatment of broadcasting. Two striking scenarios are addressed in this thesis: first, the mathematical explanation of wireless power follows from J.C. Maxwell's original equations; second, the behaviour of wireless power in the circuit follows from Joseph Larmor's fundamental works on the dynamics of the field concept. A model of propagation is presented which matches observations in experiments. A modified model of the dipole is presented to address the phenomena observed in theory and experiment. Two distinct sets of experiments test the concept of single and two coupled modes. In the more esoteric context of the zero- and first-order magnetic field, the suggestion of a third coupled mode is presented. Through this remaking of wireless power, the author intends to show the reader that ideas lost to history, bound to a path of complete obscurity, can once again be innovative and useful.
Abstract:
The time discretization in weather and climate models introduces truncation errors that limit the accuracy of the simulations. Recent work has yielded a method for reducing the amplitude errors in leapfrog integrations from first-order to fifth-order. This improvement is achieved by replacing the Robert–Asselin filter with the Robert–Asselin–Williams (RAW) filter and using a linear combination of the unfiltered and filtered states to compute the tendency term. The purpose of the present paper is to apply the composite-tendency RAW-filtered leapfrog scheme to semi-implicit integrations. A theoretical analysis shows that the stability and accuracy are unaffected by the introduction of the implicitly treated mode. The scheme is tested in semi-implicit numerical integrations in both a simple nonlinear stiff system and a medium-complexity atmospheric general circulation model, and yields substantial improvements in both cases. We conclude that the composite-tendency RAW-filtered leapfrog scheme is suitable for use in semi-implicit integrations.
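The RAW filter itself can be sketched on a scalar test problem dx/dt = -x. This minimal version shows only the plain RAW-filtered leapfrog (the composite-tendency and semi-implicit extensions discussed in the abstract are omitted, and the filter parameters are typical illustrative values):

```python
import math

def raw_leapfrog(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog integration with the Robert-Asselin-Williams (RAW) filter.
    nu is the filter strength; alpha = 1 recovers the classic
    Robert-Asselin filter, while alpha near 0.5 improves amplitude accuracy."""
    x_prev = x0
    x_curr = x0 + dt * f(x0)            # first step: forward Euler
    for _ in range(nsteps - 1):
        x_new = x_prev + 2.0 * dt * f(x_curr)
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_new)
        x_prev = x_curr + alpha * d      # filtered middle state
        x_curr = x_new + (alpha - 1.0) * d
    return x_curr

# decaying test problem dx/dt = -x; exact solution is exp(-t)
x_end = raw_leapfrog(lambda x: -x, 1.0, 0.01, 100)
```

The filter suppresses the spurious computational mode of the leapfrog scheme while, for alpha near 0.5, largely avoiding the amplitude damping of the physical mode introduced by the original Robert–Asselin filter.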
Abstract:
The transfer of hillslope water to and through the riparian zone forms a research area of importance in hydrological investigations. Numerical modelling schemes offer a way to visualise and quantify first-order controls on catchment runoff response and mixing. We use a two-dimensional Finite Element model to assess the link between model setup decisions (e.g. zero-flux boundary definitions, soil algorithm choice) and the consequential hydrological process behaviour. A detailed understanding of the consequences of model configuration is required in order to produce reliable estimates of state variables. We demonstrate that model configuration decisions can effectively determine the presence or absence of particular hillslope flow processes, and the magnitude and direction of flux at the hillslope–riparian interface. If these consequences are not fully explored for any given scheme and application, the resulting process inference may well be misleading.
Abstract:
Data are presented from the EISCAT (European Incoherent Scatter (Facility)) CP-3-E experiment which show large increases in the auroral zone convection velocities (>2 km s−1) over a wide range of latitudes. These are larger than the estimated neutral thermal speed and allow a study of the plasma in a nonthermal state over a range of observing angles. Spectra are presented which show a well-defined central peak, consistent with an ion velocity distribution function which significantly departs from a Maxwellian form. As the aspect angle decreases, the central peak becomes less obvious. Simulated spectra, derived using theoretical expressions for the O+ ion velocity distribution function based on the generalized relaxation collision model, are compared with the observations and show good first-order, qualitative agreement. It is shown that ion temperatures derived from the observations, with the assumption of a Maxwellian distribution function, are an overestimate of the true ion temperature at large aspect angles and an underestimate at low aspect angles. The theoretical distribution functions have been included in the “standard” incoherent scatter radar analysis procedure, and attempts have been made to derive realistic ionospheric parameters from nonthermal plasma observations. If the expressions for the distribution function are extended to include mixed ion composition, a significant improvement is found in fitting some of the observed spectra, and estimates of the ion composition can be made. The non-Maxwellian analysis of the data revealed that the spectral shape distortion parameter, D*, was significantly higher in this case for molecular ions than for atomic ions in a thin height slab roughly 40 km thick. This would seem unlikely if the main molecular ions present were NO+. We therefore suggest that N2+ formed a significant proportion of the molecular ions present during these observations.
Abstract:
Cerrado savannas have the greatest fire activity of all major global land-cover types and play a significant role in the global carbon cycle. During the 21st century, temperatures are projected to increase by ~3 °C coupled with a precipitation decrease of ~20 %. Although these conditions could potentially intensify drought stress, it is unknown how that might alter vegetation composition and fire regimes. To assess how Neotropical savannas responded to past climate changes, a 14 500-year, high-resolution, sedimentary record from Huanchaca Mesetta, a palm swamp located in the cerrado savanna in northeastern Bolivia, was analyzed with phytoliths, stable isotopes, and charcoal. A nonanalogue, cold-adapted vegetation community dominated the Lateglacial–early Holocene period (14 500–9000 cal yr BP), which included trees and C3 Pooideae and C4 Panicoideae grasses. The Lateglacial vegetation was fire-sensitive and fire activity during this period was low, likely responding to fuel availability and limitation. Although similar vegetation characterized the early Holocene, the warming conditions associated with the onset of the Holocene led to an initial increase in fire activity. Huanchaca Mesetta became increasingly fire-dependent during the middle Holocene with the expansion of C4 fire-adapted grasses. However, as warm, dry conditions, characterized by increased length and severity of the dry season, continued, fuel availability decreased. The establishment of the modern palm swamp vegetation occurred at 5000 cal yr BP. Edaphic factors are the first-order control on vegetation on the rocky quartzite mesetta. Where soils are sufficiently thick, climate is the second-order control of vegetation on the mesetta. The presence of the modern palm swamp is attributed to two factors: (1) increased precipitation that increased water table levels and (2) decreased frequency and duration of surazos (cold wind incursions from Patagonia) leading to increased temperature minima.
Natural (soil, climate, fire) drivers rather than anthropogenic drivers control the vegetation and fire activity at Huanchaca Mesetta. Thus the cerrado savanna ecosystem of the Huanchaca Plateau has exhibited ecosystem resilience to major climatic changes in both temperature and precipitation since the Lateglacial period.
Abstract:
A Hale cycle, one complete magnetic cycle of the Sun, spans two complete Schwabe cycles (also referred to as sunspot and, more generally, solar cycles). The approximately 22-year Hale cycle is seen in magnetic polarities of both sunspots and polar fields, as well as in the intensity of galactic cosmic rays reaching Earth, with odd- and even-numbered solar cycles displaying qualitatively different waveforms. Correct numbering of solar cycles also underpins empirical cycle-to-cycle relations which are used as first-order tests of stellar dynamo models. There has been much debate about whether the unusually long solar cycle 4 (SC4), spanning 1784–1799, was actually two shorter solar cycles combined as a result of poor data coverage in the original Wolf sunspot number record. Indeed, the group sunspot number does show a small increase around 1794–1799 and there is evidence of an increase in the mean latitude of sunspots at this time, suggesting the existence of a cycle "4b". In this study, we use cosmogenic radionuclide data and associated reconstructions of the heliospheric magnetic field (HMF) to show that the Hale cycle has persisted over the last 300 years and that data prior to 1800 are more consistent with cycle 4 being a single long cycle (the "no SC4b" scenario). We also investigate the effect of cycle 4b on the HMF using an open solar flux (OSF) continuity model, in which the OSF source term is related to sunspot number and the OSF loss term is determined by the heliospheric current sheet tilt, assumed to be a simple function of solar cycle phase. The results are surprising: without SC4b, the HMF shows two distinct peaks in the 1784–1799 interval, while the addition of SC4b removes the secondary peak, as the OSF loss term acts in opposition to the later rise in sunspot number. The timing and magnitude of the main SC4 HMF peak is also significantly changed by the addition of SC4b.
These results are compared with the cosmogenic isotope reconstructions of HMF and historical aurora records. These data marginally favour the existence of SC4b (the "SC4b" scenario), though this result is less certain than that based on the persistence of the Hale cycle, which points the other way. Thus, while the current uncertainties in the observations preclude any definitive conclusions, on balance the data favour the "no SC4b" scenario. Future improvements to cosmogenic isotope reconstructions of the HMF, through either improved modelling or additional ice cores from well-separated geographic locations, may enable questions of the existence of SC4b and the phase of the Hale cycle prior to the Maunder minimum to be settled conclusively.
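The OSF continuity model described above can be caricatured in a few lines. The source and loss parameterisations below are illustrative placeholders (a source proportional to sunspot number, a phase-modulated fractional loss standing in for the current sheet tilt), not the study's calibrated functions:

```python
import math

def osf_series(ssn, cycle_len=11.0, dt=1.0, osf0=4.0,
               k_source=0.01, loss_base=0.1, loss_amp=0.05):
    """Toy open-solar-flux continuity model:
    d(OSF)/dt = S - chi * OSF, with source S proportional to sunspot
    number and fractional loss rate chi varying with solar cycle phase
    (a stand-in for the heliospheric current sheet tilt)."""
    osf = osf0
    out = []
    for i, s in enumerate(ssn):
        phase = 2.0 * math.pi * ((i * dt) % cycle_len) / cycle_len
        chi = loss_base + loss_amp * (1.0 - math.cos(phase))
        osf += dt * (k_source * s - chi * osf)   # explicit Euler step
        out.append(osf)
    return out

# synthetic 3-cycle sunspot record, one sample per year
ssn = [80.0 * max(0.0, math.sin(math.pi * (t % 11.0) / 11.0))
       for t in range(33)]
series = osf_series(ssn)
```

Because the loss term depends on cycle phase rather than on sunspot number directly, inserting an extra short cycle shifts the phase of chi and can reshape the modelled HMF peaks, which is the mechanism the abstract exploits.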
Abstract:
Although estimation of turbulent transport parameters using inverse methods is not new, there is little evaluation of the method in the literature. Here, it is shown that extended observation of the broad scale hydrography by Argo provides a path to improved estimates of regional turbulent transport rates. Results from a 20 year ocean state estimate produced with the ECCO v4 non-linear inverse modeling framework provide supporting evidence. Turbulent transport parameter maps are estimated under the constraints of fitting the extensive collection of Argo profiles collected through 2011. The adjusted parameters dramatically reduce misfits to in situ profiles as compared with earlier ECCO solutions. They also yield a clear reduction in the model drift away from observations over multi-century long simulations, both for assimilated variables (temperature and salinity) and independent variables (bio-geochemical tracers). Despite the minimal constraints imposed specifically on the estimated parameters, their geography is physically plausible and exhibits close connections with the upper-ocean stratification as observed by Argo. The estimated parameter adjustments furthermore have first-order impacts on upper-ocean stratification and mixed layer depths over 20 years. These results identify the constraint of fitting Argo profiles as an effective observational basis for regional turbulent transport rates. Uncertainties and further improvements of the method are discussed.
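The essence of estimating a turbulent transport parameter under the constraint of fitting observed profiles can be shown with a toy 1-D diffusion inversion on synthetic data (a brute-force search, nothing like the adjoint machinery of ECCO v4):

```python
def diffuse(profile, kappa, dt, nsteps):
    """Explicit 1-D diffusion of a profile with end values held fixed
    (grid spacing dz = 1; stable for kappa*dt <= 0.5)."""
    u = list(profile)
    for _ in range(nsteps):
        u_new = u[:]
        for i in range(1, len(u) - 1):
            u_new[i] = u[i] + kappa * dt * (u[i - 1] - 2.0 * u[i] + u[i + 1])
        u = u_new
    return u

# synthetic "observed" profile produced with a true diffusivity of 0.2
initial = [0.0] * 10 + [1.0] * 10
observed = diffuse(initial, 0.2, 0.5, 40)

# inverse estimate: pick the candidate kappa minimising the misfit
candidates = [k / 100.0 for k in range(5, 50, 5)]   # 0.05 ... 0.45

def misfit(kappa):
    model = diffuse(initial, kappa, 0.5, 40)
    return sum((m - o) ** 2 for m, o in zip(model, observed))

kappa_est = min(candidates, key=misfit)
```

Real inversions replace the grid search with gradient-based optimisation over parameter maps, and the misfit with a weighted cost over many thousands of Argo profiles, but the structure (forward model, misfit, parameter adjustment) is the same.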
Abstract:
In order to move the nodes in a moving mesh method a time-stepping scheme is required which is ideally explicit and non-tangling (non-overtaking in one dimension (1-D)). Such a scheme is discussed in this paper, together with its drawbacks, and illustrated in 1-D in the context of a velocity-based Lagrangian conservation method applied to first-order and second-order examples which exhibit a regime change after node compression. An implementation in multidimensions is also described in some detail.
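In 1-D, a non-tangling explicit step amounts to limiting the time step so that no node overtakes its neighbour. A minimal sketch of that constraint (not the scheme analysed in the paper):

```python
def safe_step(x, v, dt, safety=0.9):
    """Return a step no larger than dt for which the update
    x_i -> x_i + h*v_i preserves the 1-D node ordering (no tangling)."""
    h = dt
    for i in range(len(x) - 1):
        closing = v[i] - v[i + 1]          # rate at which the gap shrinks
        if closing > 0.0:                  # nodes are converging
            h = min(h, safety * (x[i + 1] - x[i]) / closing)
    return h

def move_nodes(x, v, dt):
    h = safe_step(x, v, dt)
    return [xi + h * vi for xi, vi in zip(x, v)], h

x = [0.0, 1.0, 2.0, 3.0]
v = [1.0, 0.5, -0.5, 0.0]   # nodes 1 and 2 are converging
x_new, h = move_nodes(x, v, dt=2.0)
```

The `safety` factor keeps a small residual gap rather than letting converging nodes meet exactly; the cost, as with any such limiter, is that the global step shrinks wherever nodes compress.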
Abstract:
We study spectral properties of the Laplace-Beltrami operator on two relevant almost-Riemannian manifolds, namely the Grushin structures on the cylinder and on the sphere. This operator contains first-order diverging terms caused by the divergence of the volume. We get explicit descriptions of the spectrum and the eigenfunctions. In particular in both cases we get a Weyl's law with leading term E log E. We then study the drastic effect of Aharonov-Bohm magnetic potentials on the spectral properties. Other generalised Riemannian structures including conic and anti-conic type manifolds are also studied. In this case, the Aharonov-Bohm magnetic potential may affect the self-adjointness of the Laplace-Beltrami operator.
Abstract:
Background: Access to, and the use of, information and communication technology (ICT) is increasingly becoming a vital component of mainstream life. First-order factors (e.g. time and money) and second-order factors (e.g. beliefs of staff members) affect the use of ICT in different contexts. It is timely to investigate what these factors may be in the context of service provision for adults with intellectual disabilities, given the role ICT could play in facilitating communication and access to information and opportunities as suggested in Valuing People.
Method: Taking a qualitative approach, nine day service sites within one organization were visited over a period of 6 months to observe ICT-related practice and seek the views of staff members working with adults with intellectual disabilities. All day services were equipped with modern ICT equipment including computers, digital cameras, Internet connections and related peripherals.
Results: Staff members reported time, training and budget as significant first-order factors. Organizational culture and beliefs about the suitability of technology for older or less able service users were the striking second-order factors mentioned. Despite similar levels of equipment, support and training, ICT use had developed in very different ways across sites.
Conclusion: The provision of ICT equipment and training is not sufficient to ensure their use; the beliefs of staff members and the organizational culture of sites play a substantial role in how ICT is used with and by service users. Activity theory provides a useful framework for considering how first- and second-order factors are related. Staff members need to be given clear information about the broader purpose of activities in day services, especially in relation to the lifelong learning agenda, in order to see the relevance and usefulness of ICT resources for all service users.