58 results for Computerized Derivation
Abstract:
In this paper, the available potential energy (APE) framework of Winters et al. (J. Fluid Mech., vol. 289, 1995, p. 115) is extended to the fully compressible Navier–Stokes equations, with the aims of clarifying (i) the nature of the energy conversions taking place in turbulent thermally stratified fluids; and (ii) the role of surface buoyancy fluxes in the Munk & Wunsch (Deep-Sea Res., vol. 45, 1998, p. 1977) constraint on the mechanical energy sources of stirring required to maintain diapycnal mixing in the oceans. The new framework reveals that the observed turbulent rate of increase in the background gravitational potential energy GPE_r, commonly thought to occur at the expense of the diffusively dissipated APE, actually occurs at the expense of internal energy, as in the laminar case. The APE dissipated by molecular diffusion, on the other hand, is found to be converted into internal energy (IE), similar to the viscously dissipated kinetic energy KE. Turbulent stirring, therefore, does not introduce a new APE/GPE_r mechanical-to-mechanical energy conversion, but simply enhances the existing IE/GPE_r conversion rate, in addition to enhancing the viscous dissipation and the entropy production rates. This, in turn, implies that molecular diffusion contributes to the dissipation of the available mechanical energy ME = APE + KE, along with viscous dissipation. This result has important implications for the interpretation of the concepts of mixing efficiency γ_mixing and flux Richardson number R_f, for which new physically based definitions are proposed and contrasted with previous definitions. The new framework allows for a more rigorous and general re-derivation from first principles of the constraint of Munk & Wunsch (1998, hereafter MW98), also valid for a non-Boussinesq ocean:

\[
  G(KE) \approx \frac{1 - \xi R_f}{\xi R_f}\, W_{r,\mathrm{forcing}}
             = \frac{1 + (1 - \xi)\,\gamma_{\mathrm{mixing}}}{\xi\,\gamma_{\mathrm{mixing}}}\, W_{r,\mathrm{forcing}},
\]

where G(KE) is the work rate done by the mechanical forcing, W_{r,forcing} is the rate of loss of GPE_r due to high-latitude cooling, and ξ is a nonlinearity parameter such that ξ = 1 for a linear equation of state (as considered by MW98), but ξ < 1 otherwise. The most important result is that G(APE), the work rate done by the surface buoyancy fluxes, must be numerically as large as W_{r,forcing} and, therefore, as important as the mechanical forcing in stirring and driving the oceans. As a consequence, the overall mixing efficiency of the oceans is likely to be larger than the value γ_mixing = 0.2 presently used, thereby possibly eliminating the apparent shortfall in mechanical stirring energy that results from using γ_mixing = 0.2 in the above formula.
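A quick numerical check of the constraint (a sketch using the abstract's own γ_mixing = 0.2 and the linear-equation-of-state case ξ = 1): equating the two forms of the prefactor at ξ = 1 forces γ_mixing = R_f/(1 − R_f), so γ_mixing = 0.2 corresponds to R_f = 1/6, and both prefactors evaluate to the same factor of 5.

```latex
% With a linear equation of state (xi = 1), the two prefactors agree
% when gamma_mixing = R_f / (1 - R_f); gamma_mixing = 0.2 gives R_f = 1/6.
\[
  \frac{1 - \xi R_f}{\xi R_f}\bigg|_{\xi = 1,\; R_f = 1/6} = \frac{5/6}{1/6} = 5,
  \qquad
  \frac{1 + (1 - \xi)\,\gamma_{\mathrm{mixing}}}{\xi\,\gamma_{\mathrm{mixing}}}\bigg|_{\xi = 1,\; \gamma_{\mathrm{mixing}} = 0.2} = \frac{1}{0.2} = 5,
\]
\[
  \text{so } G(KE) \approx 5\, W_{r,\mathrm{forcing}}\text{:}
  \text{ roughly five units of mechanical work per unit of } GPE_r
  \text{ lost to high-latitude cooling.}
\]
```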
Abstract:
This paper brings together some of the recent research on trace metals in dredged sediments, and in particular freshwater canal sediments. Following a description of the general UK background, geochemical processes that affect metal release and retention in dredged canal sediments are considered, particularly the role of redox conditions and sulphur in metal associations, and the use of sequential extraction for the derivation of metal associations in sediments. The review outlines the importance of oxidation for metal mobility and shows that many studies have demonstrated increased metal leachability from sediments during oxidation. Suggestions are given for sediment-testing requirements, which should include examination of both anoxic and oxidised sediment, as well as ecotoxicology, in order to account for changes in metal speciation after disposal to land.
Abstract:
The resolution of remotely sensed data is becoming increasingly fine, and there are now many sources of data with a pixel size of 1 m × 1 m. This produces huge amounts of data that have to be stored, processed and transmitted. For environmental applications this resolution possibly provides far more data than are needed: data overload. This poses the question: how much is too much? We have explored two resolutions of data: 20 m pixel SPOT data and 1 m pixel Computerized Airborne Multispectral Imaging System (CAMIS) data from Fort A. P. Hill (Virginia, USA), using the variogram of geostatistics. For both we used the normalized difference vegetation index (NDVI). Three scales of spatial variation were identified in both the SPOT and 1 m data: there was some overlap at the intermediate spatial scales of about 150 m and of 500–600 m. We subsampled the 1 m data, and scales of variation of about 30 m and of 300 m were identified consistently until the separation between pixel centroids was 15 m (or 1 in 225 pixels). At this stage, spatial scales of about 100 m and 600 m were described, which suggested that only now was there a real difference in the amount of spatial information available from an environmental perspective. These latter were similar spatial scales to those identified from the SPOT image. We have also analysed 1 m CAMIS data from Fort Story (Virginia, USA) for comparison, and the outcome is similar. From these analyses it seems that a pixel size of 20 m is adequate for many environmental applications, and that if more detail is required the higher resolution data could be sub-sampled to a 10 m separation between pixel centroids without any serious loss of information. This reduces significantly the amount of data that needs to be stored, transmitted and analysed and has important implications for data compression.
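To make the geostatistical step concrete, here is a minimal sketch of the empirical semivariogram computation that underlies this kind of scale analysis (a generic 1-D transect version in Python; the function name, the toy NDVI series and all parameter values are illustrative assumptions, not the authors' code):

```python
import numpy as np

def empirical_variogram(values, pixel_size, max_lag):
    """Empirical semivariogram of a 1-D transect of NDVI values:
        gamma(h) = 1/(2*N(h)) * sum_{pairs at lag h} (z_i - z_j)**2.
    Scales of spatial variation show up as lags at which gamma(h)
    levels off (sills)."""
    z = np.asarray(values, dtype=float)
    n_lags = int(max_lag // pixel_size)
    lags, gammas = [], []
    for k in range(1, n_lags + 1):
        diffs = z[k:] - z[:-k]            # all pairs at lag k * pixel_size
        lags.append(k * pixel_size)
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(lags), np.array(gammas)

# Toy usage: NDVI transect on a 1 m grid, variogram out to a 600 m lag.
rng = np.random.default_rng(0)
x = np.arange(1000.0)                      # 1 m pixel centroids
ndvi = 0.5 + 0.1 * np.sin(2 * np.pi * x / 300) + 0.02 * rng.standard_normal(x.size)
lags, gammas = empirical_variogram(ndvi, pixel_size=1.0, max_lag=600.0)

# Subsampling to a 10 m centroid separation, as the abstract suggests:
lags10, gammas10 = empirical_variogram(ndvi[::10], pixel_size=10.0, max_lag=600.0)
```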
Abstract:
The principles of operation of an experimental prototype instrument known as J-SCAN are described, along with the derivation of formulae for the rapid calculation of normalized impedances; the structure of the instrument; relevant probe design parameters; digital quantization errors; and approaches to the optimization of single-frequency operation. An eddy current probe is used as the inductance element of a passive tuned circuit which is repeatedly excited with short impulses. Each impulse excites an oscillation which decays at a rate dependent upon the values of the tuned-circuit components: resistance, inductance and capacitance. Changing conditions under the probe that affect the resistance and inductance of this circuit will thus be detected through changes in the transient response. These changes in transient response, oscillation frequency and rate of decay, are digitized, and normalized values for probe resistance and inductance changes are then calculated immediately in a microprocessor. This approach, coupled with minimal analogue processing and a maximum of digital processing, has advantages over conventional eddy current instruments: in particular, the absence of an out-of-balance condition, and the flexibility and stability of digital data processing.
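A minimal sketch of the kind of calculation described, assuming a series RLC model for the tuned circuit (the series topology, function name and component values are assumptions for illustration; the instrument's actual formulae are not given in the abstract). For a series RLC, the transient decays as exp(−αt)·cos(ω_d t) with α = R/(2L) and ω_d² = 1/(LC) − α², so measuring α and ω_d with known C determines R and L:

```python
import math

def probe_rl_from_transient(alpha, omega_d, C):
    """Recover tuned-circuit resistance R and inductance L from a
    measured transient, assuming a series RLC model:
        decay rate     alpha   = R / (2L)
        ringing freq.  omega_d = sqrt(1/(L*C) - alpha**2),
    so omega_0**2 = omega_d**2 + alpha**2 = 1/(L*C)."""
    omega0_sq = omega_d ** 2 + alpha ** 2
    L = 1.0 / (C * omega0_sq)
    R = 2.0 * alpha * L
    return R, L

# Toy usage: 100 nF capacitor, 50 kHz ringing, decay rate 2e4 s^-1.
R, L = probe_rl_from_transient(alpha=2.0e4,
                               omega_d=2 * math.pi * 50e3,
                               C=100e-9)
print(f"R = {R:.2f} ohm, L = {L * 1e6:.1f} uH")
```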
Abstract:
Cybersecurity is a complex challenge that has emerged alongside the evolving global socio-technical environment of social networks, featuring connectivity across time and space in ways unimaginable even a decade ago. This paper reports on the preliminary findings of a NATO-funded project that investigates the nature of innovation in open collaborative communities and its implications for cyber security. In this paper, the authors describe the framing of relevant issues, the articulation of the research questions, and the derivation of a conceptual framework based on open collaborative innovation that has emerged from preliminary field research in Russia and the UK.
Abstract:
Copyright protects the rights and interests of authors in their original works of authorship, such as literary, dramatic, musical, artistic, and certain other intellectual works, including architectural works and designs. Protection is automatic once an original work conforming to the Copyright, Designs and Patents Act 1988 (CDPA 1988) is fixed in a tangible medium of expression; this includes the building itself and the architectural plans and drawings. There is no official copyright registry, no fees need be paid, and works may be published or unpublished. Copyright owners have the right to control the reproduction, display, publication, and even derivation of the design. However, there are limits to the rights of copyright owners concerning copyright infringement. Infringement of copyright is an unauthorised violation of the exclusive rights of the copyright author. Architects and engineers depend on copyright law to protect their works and designs. Copyright protects the arrangement of spaces and elements as well as the overall form of the architectural design; however, it does not cover the design of functional elements and standard features. Although copyright law provides automatic protection to all original architectural plans, the limitation is that copyright only protects the expression of ideas, not the ideas themselves. It can be argued that architectural drawings and designs, including models, are recognised categories of artistic works protected under copyright law. This research investigates to what extent copyright protects the rights and interests of designers in architectural works and design.
Abstract:
Multiscale modeling is emerging as one of the key challenges in mathematical biology. However, the recent rapid increase in the number of modeling methodologies being used to describe cell populations has raised a number of interesting questions. For example, at the cellular scale, how can the appropriate discrete cell-level model be identified in a given context? Additionally, how can the many phenomenological assumptions used in the derivation of models at the continuum scale be related to individual cell behavior? In order to begin to address such questions, we consider a discrete one-dimensional cell-based model in which cells are assumed to interact via linear springs. From the discrete equations of motion, the continuous Rouse [P. E. Rouse, J. Chem. Phys. 21, 1272 (1953)] model is obtained. This formalism readily allows the definition of a cell number density for which a nonlinear "fast" diffusion equation is derived. Excellent agreement is demonstrated between the continuum and discrete models. Subsequently, via the incorporation of cell division, we demonstrate that the derived nonlinear diffusion model is robust to the inclusion of more realistic biological detail. In the limit of stiff springs, where cells can be considered to be incompressible, we show that cell velocity can be directly related to cell production. This assumption is frequently made in the literature but our derivation places limits on its validity. Finally, the model is compared with a model of a similar form recently derived for a different discrete cell-based model and it is shown how the different diffusion coefficients can be understood in terms of the underlying assumptions about cell behavior in the respective discrete models.
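A minimal sketch of the discrete starting point described, i.e. an overdamped chain of cells coupled by linear springs (Python; the explicit Euler scheme, free-end boundary treatment and parameter values are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def step_spring_chain(x, k, eta, a, dt):
    """One explicit Euler step of the overdamped spring chain
        eta * dx_i/dt = F_i,
    where each cell i is pulled by linear springs (stiffness k, rest
    length a) connecting it to its neighbours.  In the continuum
    limit this kind of model yields a nonlinear diffusion equation
    for the cell number density."""
    ext = x[1:] - x[:-1] - a        # extension of spring between i and i+1
    force = np.zeros_like(x)
    force[:-1] += k * ext           # stretched spring pulls left cell right
    force[1:] -= k * ext            # and pulls right cell left
    return x + dt * force / eta

# Toy usage: 50 cells, initially compressed, relaxing towards spacing a.
n, k, eta, a, dt = 50, 1.0, 1.0, 1.0, 0.01
x = np.linspace(0.0, 0.5 * (n - 1) * a, n)   # half the rest spacing
for _ in range(5000):
    x = step_spring_chain(x, k, eta, a, dt)

density = 1.0 / np.diff(x)          # discrete cell number density
```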
Abstract:
This paper describes experimental studies aimed at elucidating mechanisms for the formation of low-volatility organic acids in the gas-phase ozonolysis of 3-carene. Experiments were carried out in a static chamber under 'OH-free' conditions. A range of multifunctional acids, analogous to those observed from alpha-pinene ozonolysis, were identified in the condensed phase using gas chromatography coupled to mass spectrometry after derivatisation. Product yields were determined as a function of different OH radical scavengers and relative humidities to give mechanistic information about their routes of formation. Furthermore, an enone and an enal derived from 3-carene were ozonised in order to probe the early mechanistic steps in the reaction and, in particular, which of the two initially formed Criegee intermediates gives rise to which products. Branching ratios for the formation of the two Criegee intermediates are determined. Similarities and differences in product formation from 3-carene and alpha-pinene ozonolysis are discussed, and possible mechanisms, supported by experimental evidence, are developed for all acids investigated.
Abstract:
The theory of dipole-allowed absorption intensities in triatomic molecules is presented for systems with three close-lying electronic states of doublet multiplicity. Its derivation is within the framework of a recently developed variational method [CARTER, S., HANDY, N. C., PUZZARINI, C., TARRONI, R., and PALMIERI, P., 2000, Molec. Phys., 98, 1967]. The method has been applied to the calculation of the infrared absorption spectrum of the C2H radical and its deuterated isotopomer for energies up to 10000 cm⁻¹ above the ground state, using highly accurate ab initio diabatic potential energy and dipole moment surfaces. The calculated spectra agree very well with those recorded experimentally in a neon matrix [FORNEY, D., JACOX, M. E., and THOMPSON, W. E., 1995, J. Molec. Spectrosc., 170, 178], and assignments in the high-energy region of the IR spectra are proposed for the first time.
Abstract:
In an attempt to focus clients' minds on the importance of considering the construction and maintenance costs of a commercial office building (both as a factor in staff productivity and as a fraction of lifetime staff costs) there is an often-quoted ratio of costs of 1:5:200, where for every one pound spent on construction cost, five are spent on maintenance and building operating costs and 200 on staffing and business operating costs. This seems to stem from a paper published by the Royal Academy of Engineering, in which no data is given and no derivation or defence of the ratio appears. The accompanying belief that higher quality design and construction increases staff productivity and simultaneously reduces maintenance costs, however laudable, appears unsupported by research, and carries all the hallmarks of an "urban myth". In tracking down data about real buildings, a more realistic ratio appears to depend on a huge variety of variables, as well as the definition of the number of "lifetime" years. The ill-defined origins of the original 1:5:200 ratio have made replication impossible. However, by using published sources of data, we have found that for three office buildings, a more realistic ratio is 1:0.4:12. As there is nothing in the public domain about what comprised the original research that gave rise to 1:5:200, it is not possible to make a true comparison between these new calculations and the originals. Clients and construction professionals stand to be misled, because the popularity and widespread use of the wrong ratio appears to be misinforming important investment and policy decisions.
Abstract:
Background A significant proportion of women who are vulnerable to postnatal depression refuse to engage in treatment programmes. Little is known about them, other than some general demographic characteristics. In particular, their access to health care and their own and their infants' health outcomes are uncharted. Methods We conducted a nested cohort case-control study, using data from computerized health systems, and general practitioner (GP) and maternity records, to identify the characteristics, health service contacts, and maternal and infant health outcomes for primiparous antenatal clinic attenders at high risk for postnatal depression who either refused (self-exclusion group) or else agreed (take-up group) to receive additional Health Visiting support in pregnancy and the first 2 months postpartum. Results Women excluding themselves from Health Visitor support were younger and less highly educated than women willing to take up the support. They were less likely to attend midwifery, GP and routine Health Visitor appointments, but were more likely to book in late and to attend the accident and emergency department (A&E). Their infants had poorer outcomes in terms of gestation, birthweight and breastfeeding. Differences between the groups still obtained when age and education were taken into account for midwifery contacts, A&E attendance and gestation; the difference in the initiation of breastfeeding was attenuated, but not wholly explained, by age and education. Conclusion A subgroup of psychologically vulnerable childbearing women are at particular risk of poor access to health care and adverse infant outcomes. Barriers to take-up of services need to be understood in order better to deliver care.
Abstract:
In this paper, an improved stochastic discrimination (SD) is introduced to reduce the error rate of standard SD in the context of multi-class classification problems. The learning procedure of the improved SD consists of two stages. In the first stage, a standard SD, but with a shorter learning period, is carried out to identify an important space in which all the misclassified samples are located. In the second stage, the standard SD is modified by (i) restricting sampling to the important space; and (ii) introducing a new discriminant function for samples in the important space. It is shown by mathematical derivation that the new discriminant function has the same mean but smaller variance than that of standard SD for samples in the important space. It is also shown that the smaller the variance of the discriminant function, the lower the error rate of the classifier. Consequently, the proposed improved SD improves on standard SD through its capability of achieving higher classification accuracy. Illustrative examples are provided to demonstrate the effectiveness of the proposed improved SD.
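For orientation, here is a minimal two-class sketch of standard stochastic discrimination, the baseline whose discriminant-function mean and variance the paper analyses (Python; the random axis-aligned boxes used as weak models, the enrichment threshold and all names are illustrative assumptions; the paper's improved two-stage procedure is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_box(lo, hi):
    """A 'weak model': a random axis-aligned box in feature space."""
    a, b = rng.uniform(lo, hi), rng.uniform(lo, hi)
    return np.minimum(a, b), np.maximum(a, b)

def sd_train(X, y, n_models=500):
    """Standard two-class stochastic discrimination.  Keeps weak
    models that are 'enriched' (cover the two classes at different
    rates) and defines, per model M, the normalised score
        s_M(q) = (1[q in M] - p2) / (p1 - p2),
    where p_c = P(M | class c) on the training set.  Then
    E[s_M] = 1 on class 1 and 0 on class 2, and the variance of the
    averaged discriminant falls as weak models accumulate."""
    models = []
    while len(models) < n_models:
        lo, hi = random_box(X.min(axis=0), X.max(axis=0))
        inside = np.all((X >= lo) & (X <= hi), axis=1)
        p1, p2 = inside[y == 1].mean(), inside[y == 0].mean()
        if abs(p1 - p2) > 0.05:            # enrichment threshold
            models.append((lo, hi, p1, p2))
    return models

def sd_discriminant(models, Q):
    Y = np.zeros(len(Q))
    for lo, hi, p1, p2 in models:
        inside = np.all((Q >= lo) & (Q <= hi), axis=1)
        Y += (inside - p2) / (p1 - p2)
    return Y / len(models)                  # classify as class 1 if Y > 0.5

# Toy usage: two Gaussian blobs.
X1 = rng.normal([0, 0], 0.5, size=(100, 2))
X0 = rng.normal([2, 2], 0.5, size=(100, 2))
X = np.vstack([X1, X0]); y = np.r_[np.ones(100), np.zeros(100)]
models = sd_train(X, y)
print((sd_discriminant(models, X) > 0.5).astype(int)[:5])  # mostly 1s
```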
Abstract:
This correspondence introduces a new orthogonal forward regression (OFR) model identification algorithm that uses D-optimality for model structure selection and is based on M-estimators of the parameters. The M-estimator is a classical robust parameter estimation technique for tackling bad data conditions such as outliers. Computationally, the M-estimator can be derived using an iteratively reweighted least squares (IRLS) algorithm. D-optimality is a model structure robustness criterion from experimental design for tackling ill-conditioning in the model structure. OFR, often based on the modified Gram-Schmidt procedure, is an efficient method that incorporates structure selection and parameter estimation simultaneously. The basic idea of the proposed approach is to incorporate an IRLS inner loop into the modified Gram-Schmidt procedure. In this manner, the OFR algorithm for parsimonious model structure determination is extended to bad data conditions, with improved performance via the derivation of parameter M-estimators with inherent robustness to outliers. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
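A sketch of the IRLS inner loop referred to above, assuming the common Huber ρ-function as the M-estimator (the choice of Huber weights, the MAD scale estimate and all names are illustrative assumptions; the correspondence embeds this loop inside the modified Gram-Schmidt procedure, which is not shown here):

```python
import numpy as np

def irls_huber(X, y, c=1.345, n_iter=20):
    """M-estimation of linear-regression parameters by iteratively
    reweighted least squares (IRLS) with Huber weights:
        w(r) = 1          if |r| <= c*s,
        w(r) = c*s / |r|  otherwise,
    where s is a robust scale estimate (MAD).  Outliers get small
    weights, which is what lends the estimator its robustness."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start
    for _ in range(n_iter):
        r = y - X @ theta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = np.where(np.abs(r) <= c * s, 1.0, c * s / (np.abs(r) + 1e-12))
        Xw = X * w[:, None]                        # row-weighted design
        theta = np.linalg.solve(Xw.T @ X, Xw.T @ y)  # (X'WX) theta = X'Wy
    return theta

# Toy usage: line fit with one gross outlier.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + 0.05 * rng.standard_normal(50)
y[10] += 5.0                                       # bad datum
X = np.column_stack([x, np.ones_like(x)])
print(irls_huber(X, y))                            # close to [2, 1]
```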
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
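A minimal sketch of the rule-scoring step, assuming the A-optimality criterion takes its usual experimental design form trace((AᵀA)⁻¹) applied to the rule's weighted regression matrix A = WX (this specific form, the Gaussian memberships and all names are assumptions for illustration, not the paper's exact construction):

```python
import numpy as np

def gaussian_membership(x, centre, width):
    return np.exp(-0.5 * ((x - centre) / width) ** 2)

def rule_a_optimality(X, memberships):
    """Score one T-S fuzzy rule by the A-optimality criterion.
    The rule's matrix subspace is A = W @ X, where W is the diagonal
    weighting matrix of the rule's normalised firing strengths over
    the training data.  trace((A'A)^-1) is small when the rule is
    well excited by the data, i.e. identifiable."""
    A = X * memberships[:, None]            # W @ X without forming W
    return np.trace(np.linalg.inv(A.T @ A))

# Toy usage: two rules partitioning a scalar input.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
X = np.column_stack([x, np.ones_like(x)])   # affine T-S consequents
mu1 = gaussian_membership(x, -0.5, 0.4)
mu2 = gaussian_membership(x, +0.5, 0.4)
norm = mu1 + mu2                            # normalised firing strengths
scores = [rule_a_optimality(X, mu1 / norm),
          rule_a_optimality(X, mu2 / norm)]
# Rules with the smallest scores would seed the initial rule-base.
```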
Abstract:
An analysis of Stochastic Diffusion Search (SDS), a novel and efficient optimisation and search algorithm, is presented, leading to a derivation of the minimum acceptable match that yields stable convergence within a noisy search space. The applicability of SDS to a given problem can therefore be assessed.
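A minimal sketch of plain SDS for best-match substring search (Python; the test and diffusion details and all names follow the common textbook formulation and are illustrative assumptions, not the analysed variant). The "minimum acceptable match" in the derivation corresponds to the lowest per-test success probability at the target for which a stable cluster of active agents still forms:

```python
import random

def sds_search(text, pattern, n_agents=100, n_iters=200, seed=0):
    """Minimal Stochastic Diffusion Search for the best match of
    `pattern` in `text`.  Each agent holds a hypothesis (an offset);
    in the test phase it checks ONE randomly chosen character of the
    pattern; in the diffusion phase an inactive agent copies the
    hypothesis of a randomly polled agent if that agent is active,
    otherwise it re-seeds at random.  Agents cluster on the offset
    with the highest proportion of matching characters."""
    rng = random.Random(seed)
    n_offsets = len(text) - len(pattern) + 1
    hyps = [rng.randrange(n_offsets) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(n_iters):
        for i in range(n_agents):           # partial test phase
            j = rng.randrange(len(pattern))
            active[i] = text[hyps[i] + j] == pattern[j]
        for i in range(n_agents):           # diffusion phase
            if not active[i]:
                k = rng.randrange(n_agents)
                hyps[i] = hyps[k] if active[k] else rng.randrange(n_offsets)
    return max(set(hyps), key=hyps.count)   # offset of the largest cluster

print(sds_search("the quick brown fox jumps", "briwn"))  # noisy target: 10
```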