959 results for field methods
Abstract:
The Counterinsurgency Manual FM 3-24 has been accused of being over-dependent on the counterinsurgency 'classics' Galula and Thompson. But comparison reveals that it is different in spirit. Galula and Thompson seek practical control; the Manual seeks to build 'legitimacy'. Its concept of legitimacy is superficially Weberian, but owes more to the writings of the American Max Manwaring. The Manual presupposes that a rights-based legal order can, other things being equal, be made cross-culturally attractive; 'effective governance' by itself can build legitimacy. The fusion of its methods with an ideology creates unrealistic criteria for success. Its weaknesses suggest a level of incapacity to think politically that will, in time, result in further failures.
Abstract:
This paper presents findings of our study on peer-reviewed papers published in the International Conference on Persuasive Technology from 2006 to 2010. The study indicated that out of 44 systems reviewed, 23 were reported to be successful, 2 to be unsuccessful and 19 did not specify whether or not they were successful. In total, 56 different techniques were mentioned, and it was observed that most designers use ad hoc definitions for the techniques or methods used in design. Hence we propose the need for research to establish unambiguous definitions of techniques and methods in the field.
Abstract:
We present a new iterative approach called Line Adaptation for the Singular Sources Objective (LASSO) to object or shape reconstruction based on the singular sources method (or probe method) for the reconstruction of scatterers from the far-field pattern of scattered acoustic or electromagnetic waves. The scheme is based on the construction of an indicator function, namely the scattered field for an incident point source evaluated at its source point, computed from the given far-field patterns for plane waves. The indicator function is then used to drive the contraction of a surface which surrounds the unknown scatterers. A stopping criterion for those parts of the surfaces that touch the unknown scatterers is formulated. A splitting approach for the contracting surfaces is also introduced, such that scatterers consisting of several separate components can be reconstructed. Convergence of the scheme is shown, and its feasibility is demonstrated using a numerical study with several examples.
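As a rough illustration of the contraction step, the sketch below shrinks a closed curve of points towards a toy disc-shaped scatterer, freezing each point once an indicator function exceeds a threshold. The indicator here is a mock stand-in; in the actual LASSO/probe method it is constructed from the measured far-field patterns.

```python
# Toy 2D illustration of the LASSO idea: a closed curve of points contracts
# until an indicator function (which blows up near the scatterer boundary)
# exceeds a threshold; those points are then frozen.  In the real method the
# indicator is built from the measured far-field patterns; here a mock
# indicator based on a known disc-shaped "scatterer" stands in for it.
import numpy as np

def mock_indicator(z, centre=np.array([0.0, 0.0]), radius=1.0):
    """Hypothetical stand-in: grows like 1/d near the scatterer boundary."""
    d = abs(np.linalg.norm(z - centre) - radius)
    return 1.0 / max(d, 1e-9)

def lasso_contract(n_points=180, r_start=4.0, step=0.02, threshold=50.0):
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    radii = np.full(n_points, r_start)
    frozen = np.zeros(n_points, dtype=bool)      # stopping criterion per point
    while not frozen.all():
        for i, phi in enumerate(angles):
            if frozen[i]:
                continue
            z = radii[i] * np.array([np.cos(phi), np.sin(phi)])
            if mock_indicator(z) > threshold:    # curve touches the scatterer
                frozen[i] = True
            else:
                radii[i] -= step                 # contract this point inwards
    return angles, radii

angles, radii = lasso_contract()
print("reconstructed radius ~", radii.mean())    # close to 1.0 for the toy disc
```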
Abstract:
Aims Potatoes have an inadequate rooting system for efficient acquisition of water and minerals and use disproportionate amounts of irrigation and fertilizer. This research determines whether significant variation in rooting characteristics of potato exists, which characters correlate with final yield and whether a simple screen for rooting traits could be developed. Methods Twenty-eight genotypes of Solanum tuberosum groups Tuberosum and Phureja were grown in the field; eight replicate blocks were grown to final harvest, while entire root systems were excavated from four blocks. Root classes were categorised and measured. The same measurements were made on these genotypes in the glasshouse, 2 weeks post emergence. Results In the field, total root length varied from 40 m to 112 m per plant. Final yield was correlated negatively with basal root specific root length and weakly but positively with total root weight. Solanum tuberosum group Phureja genotypes had more numerous roots and proportionally more basal than stolon roots compared with Solanum tuberosum group Tuberosum genotypes. There were significant correlations between glasshouse and field measurements. Conclusions Our data demonstrate that variability in rooting traits exists amongst commercially available potato genotypes and that a robust glasshouse screen has been developed. By measuring potato roots as described in this study, it is now possible to assess the rooting traits of large populations of potato genotypes.
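A minimal sketch of the correlation analyses described above, assuming a per-genotype table with hypothetical column names; it is not the authors' analysis code.

```python
# Sketch of the trait-yield and glasshouse-field correlations described above,
# assuming a table of per-genotype means with hypothetical column names.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("potato_root_traits.csv")   # hypothetical file name

# Trait vs. final yield in the field
for trait in ["total_root_length_m", "total_root_weight_g", "basal_root_SRL"]:
    r, p = pearsonr(df[trait], df["final_yield_t_ha"])
    print(f"{trait:>22s}  r = {r:+.2f}  p = {p:.3f}")

# Field measurement vs. the same trait scored in the glasshouse screen
r, p = pearsonr(df["total_root_length_m"], df["glasshouse_root_length_m"])
print(f"field vs glasshouse root length: r = {r:+.2f}, p = {p:.3f}")
```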
Abstract:
This paper explores the development of multi-feature classification techniques used to identify tremor-related characteristics in the Parkinsonian patient. Local field potentials were recorded from the subthalamic nucleus and the globus pallidus internus of eight Parkinsonian patients through the implanted electrodes of a deep brain stimulation (DBS) device prior to device internalization. A range of signal processing techniques were evaluated with respect to their tremor detection capability and used as inputs in a multi-feature neural network classifier to identify the activity of Parkinsonian tremor. The results of this study show that a trained multi-feature neural network is able, under certain conditions, to achieve excellent detection accuracy on patients unseen during training. Overall, tremor detection accuracy was mixed, although an accuracy of over 86% was achieved in four out of the eight patients.
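The sketch below illustrates one plausible form of such a multi-feature classifier: band-power features extracted from LFP segments feed a small neural network evaluated leave-one-patient-out. The feature set, sampling rate and network size are assumptions, not the pipeline used in the study.

```python
# Illustrative multi-feature tremor classifier for LFP segments, evaluated
# leave-one-patient-out; the features and network size are assumptions,
# not the authors' signal processing pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def segment_features(lfp, fs):
    """Band power in the tremor (3-7 Hz) and beta (13-30 Hz) bands plus variance."""
    f, pxx = welch(lfp, fs=fs, nperseg=fs)
    tremor = pxx[(f >= 3) & (f < 7)].sum()
    beta = pxx[(f >= 13) & (f < 30)].sum()
    return [tremor, beta, np.var(lfp)]

def leave_one_patient_out(segments, labels, patients, fs=1000):
    X = np.array([segment_features(s, fs) for s in segments])
    y, pid = np.array(labels), np.array(patients)
    for p in np.unique(pid):
        clf = make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
        clf.fit(X[pid != p], y[pid != p])           # train on the other patients
        acc = clf.score(X[pid == p], y[pid == p])   # test on the held-out patient
        print(f"patient {p}: accuracy {acc:.2f}")
```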
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared, for example those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem carries over to the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but is influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness through the exponential decay of the singular values for three examples of fault classes.
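A minimal sketch of a regularised two-class Fisher discriminant applied to vectors of magnetic field measurements is given below; the data layout is hypothetical and the regularisation is a simple Tikhonov-style shift, used only to illustrate how the ill-conditioning discussed above can be handled.

```python
# Minimal two-class Fisher linear discriminant on vectors of magnetic field
# measurements, with a Tikhonov-style regularisation of the pooled within-class
# covariance to cope with ill-conditioning.  Illustrative only, not the
# paper's implementation; the data shapes are hypothetical.
import numpy as np

def fisher_direction(X1, X2, reg=1e-3):
    """X1, X2: (n_samples, n_sensors) measurement vectors for two fault classes."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)   # pooled within-class covariance
    Sw += reg * np.eye(Sw.shape[0])                            # regularisation against ill-conditioning
    w = np.linalg.solve(Sw, m1 - m2)                           # Fisher direction
    threshold = 0.5 * w @ (m1 + m2)
    return w, threshold

def classify(x, w, threshold):
    """Assign a new measurement vector x to fault class 1 or 2."""
    return 1 if w @ x > threshold else 2
```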
Abstract:
Mean field models (MFMs) of cortical tissue incorporate salient, average features of neural masses in order to model activity at the population level, thereby linking microscopic physiology to macroscopic observations, e.g., with the electroencephalogram (EEG). One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes deemed relevant to the brain dynamics of interest. We study the physiological parameter space of a MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two archetypal categories or “families”. After investigating and characterizing them in depth, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli such as thalamic input, and distributions of model parameters and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. We here unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They instead emerge when the nonlinear structure of parameter space is partitioned according to bifurcation responses. We call this general method “metabifurcation analysis”. The partitioning cannot be achieved by the investigation of only a small number of parameter sets and is instead the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible parameter sets. Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.
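The sketch below illustrates only the partitioning idea: given a precomputed bifurcation-response curve for each admissible parameter set, the curves are grouped into two families. The mean field model and the bifurcation computations themselves are not reproduced, and k-means is merely a stand-in for the automated classification used in the study.

```python
# Schematic of the partitioning step only: given a precomputed "bifurcation
# response" curve for each admissible parameter set (e.g. attractor amplitude
# as inhibition is varied), group the curves into two families.  The mean
# field model and the bifurcation analysis are not reproduced here; k-means
# is a stand-in for the automated classification.
import numpy as np
from sklearn.cluster import KMeans

def partition_parameter_space(response_curves, n_families=2):
    """response_curves: (n_parameter_sets, n_inhibition_values) array."""
    km = KMeans(n_clusters=n_families, n_init=10, random_state=0)
    families = km.fit_predict(response_curves)
    for f in range(n_families):
        print(f"family {f}: {np.sum(families == f)} parameter sets")
    return families
```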
Abstract:
We solve eight partial-differential, two-dimensional, nonlinear mean field equations, which describe the dynamics of large populations of cortical neurons. Linearized versions of these equations have been used to generate the strong resonances observed in the human EEG, in particular the α-rhythm (8–13 Hz), with physiologically plausible parameters. We extend these results here by numerically solving the full equations on a cortex of realistic size, which receives appropriately “colored” noise as extra-cortical input. A brief summary of the numerical methods is provided. As an outlook to future applications, we explain how the effects of GABA-enhancing general anaesthetics can be simulated and present first results.
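One common way to generate such "colored" noise is to low-pass filter white noise, e.g. as an Ornstein-Uhlenbeck process; the sketch below does this with placeholder values for the correlation time and amplitude, which are not the parameters used in the paper.

```python
# One way to produce "colored" noise for the extra-cortical input: an
# Ornstein-Uhlenbeck (low-pass filtered white noise) process.  The correlation
# time and amplitude below are placeholders, not the paper's values.
import numpy as np

def colored_noise(n_steps, dt=1e-4, tau=0.01, sigma=1.0, seed=0):
    """Ornstein-Uhlenbeck noise with correlation time tau (seconds)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    for k in range(1, n_steps):
        x[k] = x[k-1] - (dt / tau) * x[k-1] \
               + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
    return x

drive = colored_noise(100_000)   # e.g. 10 s of input at dt = 0.1 ms
```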
Abstract:
Anesthetic and analgesic agents act through a diverse range of pharmacological mechanisms. Existing empirical data clearly show that such "microscopic" pharmacological diversity is reflected in their "macroscopic" effects on the human electroencephalogram (EEG). Based on a detailed mesoscopic neural field model, we theoretically posit that anesthetic-induced EEG activity is due to selective parametric changes in synaptic efficacy and dynamics. Specifically, on the basis of physiologically constrained modeling, it is speculated that the selective modification of inhibitory or excitatory synaptic activity may differentially affect the EEG spectrum. Such results emphasize the importance of neural field theories of brain electrical activity for elucidating the principles whereby pharmacological agents affect the EEG. Such insights will contribute to improved methods for monitoring depth of anesthesia using the EEG.
Abstract:
Building assessment methods have become a popular research field since the early 1990s. An international tool which allows the assessment of buildings in all regions, taking into account differences in climates, topographies and cultures, does not yet exist. This paper aims to demonstrate the importance of criteria and sub-criteria in developing a new potential building assessment method for Saudi Arabia. Recently, awareness of sustainability has been increasing in developing countries due to high energy consumption, pollution and a high carbon footprint. There is no debate that assessment criteria play an important role in identifying a tool's orientation. However, various aspects influence the criteria and sub-criteria of assessment tools, such as environmental, economic, social and cultural factors, to mention but a few. The author provides an investigation of the most popular and globally used schemes: BREEAM, LEED, Green Star, CASBEE and Estidama, in order to identify the effectiveness of the different aspects of the assessment criteria and the impacts of these criteria on the assessment results; this will provide a solid foundation for developing an effective sustainable assessment method for buildings in Saudi Arabia. Initial results of the investigation suggest that each country needs to develop its own assessment method in order to achieve the desired results, while focusing upon the indigenous environmental, economic, social and cultural conditions.
Keywords: assessment methods, BREEAM, LEED, Green Star, CASBEE, Estidama, sustainability, sustainable buildings, environment, Saudi Arabia.
Abstract:
Traditionally, functional magnetic resonance imaging (fMRI) has been used to map activity in the human brain by measuring increases in the Blood Oxygenation Level Dependent (BOLD) signal. Often accompanying positive BOLD fMRI signal changes are sustained negative signal changes. Previous studies investigating the neurovascular coupling mechanisms of the negative BOLD phenomenon have used concurrent 2D-optical imaging spectroscopy (2D-OIS) and electrophysiology (Boorman et al., 2010). These experiments suggested that the negative BOLD signal in response to whisker stimulation was a result of an increase in deoxy-haemoglobin and reduced multi-unit activity in the deep cortical layers. However, Boorman et al. (2010) did not measure the BOLD and haemodynamic response concurrently and so could not quantitatively compare either the spatial maps or the 2D-OIS and fMRI time series directly. Furthermore, their study utilised a homogeneous tissue model, which is predominantly sensitive to haemodynamic changes in more superficial layers. Here we test whether the 2D-OIS technique is appropriate for studies of negative BOLD. We used concurrent fMRI with 2D-OIS techniques for the investigation of the haemodynamics underlying the negative BOLD response at 7 Tesla. We investigated whether optical methods could be used to accurately map and measure the negative BOLD phenomenon by using 2D-OIS haemodynamic data to derive predictions from a biophysical model of BOLD signal changes. We showed that despite the deep cortical origin of the negative BOLD response, if an appropriate heterogeneous tissue model is used in the spectroscopic analysis then 2D-OIS can be used to investigate the negative BOLD phenomenon.
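For illustration, the sketch below shows a forward calculation in the spirit of such a biophysical model: a BOLD time course is predicted from 2D-OIS-derived blood volume and deoxy-haemoglobin time courses using the widely used balloon-model signal equation. The constants are textbook low-field values, not the 7 T parameters of the study.

```python
# Illustrative forward calculation: predict a BOLD time series from
# 2D-OIS-derived total haemoglobin (blood volume proxy) and deoxy-haemoglobin
# time courses using the standard balloon-model signal equation.  The constants
# below are textbook low-field values for illustration only, not the 7 T
# parameters used in the study.
import numpy as np

def bold_from_haemodynamics(v, q, V0=0.03, E0=0.4):
    """v, q: blood volume and deoxy-Hb time courses, normalised to baseline."""
    k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2
    return V0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v))

# e.g. compare with the measured fMRI time course via Pearson correlation:
# r = np.corrcoef(bold_from_haemodynamics(v, q), measured_bold)[0, 1]
```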
Abstract:
Results from all phases of the orbits of the Ulysses spacecraft have shown that the magnitude of the radial component of the heliospheric field is approximately independent of heliographic latitude. This result allows the use of near-Earth observations to compute the total open flux of the Sun. For example, using satellite observations of the interplanetary magnetic field, the average open solar flux was shown to have risen by 29% between 1963 and 1987, and using the aa geomagnetic index it was found to have doubled during the 20th century. It is therefore important to assess fully the accuracy of the result and to check that it applies to all phases of the solar cycle. The first perihelion pass of the Ulysses spacecraft was close to sunspot minimum, and recent data from the second perihelion pass show that the result also holds at solar maximum. The high level of correlation between the open flux derived from the various methods strongly supports the Ulysses discovery that the radial field component is independent of latitude. We show here that the errors introduced into open solar flux estimates by assuming that the heliospheric field’s radial component is independent of latitude are similar for the two passes and are of order 25% for daily values, falling to 5% for averaging timescales of 27 days or greater. We compare here the results of four methods for estimating the open solar flux with results from the first and second perihelion passes by Ulysses. We find that the errors are lowest (1–5% for averages over the entire perihelion passes lasting nearly 320 days) for near-Earth methods, based on either interplanetary magnetic field observations or the aa geomagnetic activity index. The corresponding errors for the Solanki et al. (2000) model are of the order of 9–15% and for the PFSS method, based on solar magnetograms, are of the order of 13–47%. The model of Solanki et al. is based on the continuity equation of open flux, and uses the sunspot number to quantify the rate of open flux emergence. It predicts that the average open solar flux has been decreasing since 1987, as is observed in the variation of all the estimates of the open flux. This decline combines with the solar cycle variation to produce an open flux during the second (sunspot maximum) perihelion pass of Ulysses which is only slightly larger than that during the first (sunspot minimum) perihelion pass.
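For reference, the sketch below shows the minimal calculation that the Ulysses result permits: the open flux threading a heliocentric sphere of radius 1 AU computed from daily near-Earth |Br| values. Conventions differ by a factor of two (total unsigned flux versus the flux of one polarity), so both are returned, and the input column names are hypothetical.

```python
# Minimal use of the Ulysses result: if |Br| is independent of latitude, the
# open solar flux can be computed from near-Earth measurements alone as the
# radial flux threading a heliocentric sphere of radius 1 AU.  Conventions
# differ by a factor of two (total unsigned flux vs. the flux of one polarity),
# so both are returned.  The input column names are hypothetical.
import numpy as np
import pandas as pd

AU = 1.496e11                                        # metres

def open_flux_from_daily_Br(br_nT):
    """br_nT: array of daily-mean radial IMF components in nanotesla."""
    abs_br = np.abs(br_nT) * 1e-9                    # -> tesla
    unsigned = 4.0 * np.pi * AU**2 * abs_br          # total unsigned flux, Wb
    return unsigned, unsigned / 2.0                  # and one-polarity flux

imf = pd.read_csv("daily_imf.csv")                   # hypothetical file
total, one_polarity = open_flux_from_daily_Br(imf["Br_nT"].to_numpy())
print(f"27-day mean open flux: {one_polarity[:27].mean():.2e} Wb")
```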
Abstract:
The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme uses a simplified radiative transfer calculation comprising only one or two monochromatic calculations that represent the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
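A schematic of the incremental time-stepping control flow, assuming placeholder (toy) radiation routines rather than the Met Office code: a full correlated-k calculation is made every few steps, while at intermediate steps the change in the optically thin part is added to the stored heating rates.

```python
# Schematic of the 'incremental time-stepping' control flow: a full radiation
# calculation every n_full steps plus a cheap optically thin calculation every
# step, whose change since the last full call is added to the stored heating
# rates.  full_radiation() and thin_radiation() are toy stand-ins, not the
# operational radiation code.
import numpy as np

def full_radiation(state):
    return np.full(70, -1.5) + 0.1 * state["cloud"]     # toy heating rates, K/day

def thin_radiation(state):
    return 0.1 * state["cloud"]                         # toy optically thin part

def incremental_timestepping(states, n_full=6):
    for step, state in enumerate(states):
        if step % n_full == 0:                           # full correlated-k call
            heating_full = full_radiation(state)
            thin_at_full = thin_radiation(state)
            heating = heating_full
        else:                                            # cheap, cloud-sensitive update
            heating = heating_full + (thin_radiation(state) - thin_at_full)
        yield heating

states = [{"cloud": np.random.rand(70)} for _ in range(36)]
heating_rates = list(incremental_timestepping(states))
```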
Abstract:
Aims Potatoes are a globally important source of food whose production requires large inputs of fertiliser and water. Recent research has highlighted the importance of the root system in acquiring resources. Here, measurements previously generated by field phenotyping were used to test the effect of root size on the maintenance of yield under drought (drought tolerance). Methods Twelve potato genotypes, including genotypes with extremes of root size, were grown to maturity in the field under a rain shelter and either irrigated or subjected to drought. Soil moisture, canopy growth, carbon isotope discrimination and final yields were measured. Destructively harvested field phenotype data were used as explanatory variables in a general linear model (GLM) to investigate yield under conditions of drought or irrigation. Results Drought severely affected the small-rooted genotype Pentland Dell but not the large-rooted genotype Cara. More plantlets and longer and more numerous stolons and stolon roots were associated with drought tolerance. Previously measured carbon isotope discrimination did not correlate with the effect of drought. Conclusions These data suggest that in-field phenotyping can be used to identify useful characteristics when known genotypes are subjected to an environmental stress. Stolon root traits were associated with drought tolerance in potato and could be used to select genotypes with resilience to drought.
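A sketch of the kind of general linear model described above, with hypothetical column names; destructively harvested phenotype traits serve as explanatory variables for final yield, with treatment as a factor. It is not the authors' model specification.

```python
# Sketch of a general linear model of the kind described above, with
# hypothetical column names; not the authors' model specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("drought_trial.csv")    # hypothetical file name
model = smf.ols(
    "final_yield ~ C(treatment) + stolon_root_length + n_stolons + n_plantlets",
    data=df,
).fit()
print(model.summary())
```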
Abstract:
Elucidating the biological and biochemical roles of proteins, and subsequently determining their interacting partners, can be difficult and time-consuming using in vitro and/or in vivo methods, and consequently the majority of newly sequenced proteins will have unknown structures and functions. However, in silico methods for predicting protein–ligand binding sites and protein biochemical functions offer an alternative practical solution. The characterisation of protein–ligand binding sites is essential for investigating new functional roles, which can impact the major biological research spheres of health, food, and energy security. In this review we discuss the role in silico methods play in 3D modelling of protein–ligand binding sites, along with their role in predicting biochemical functionality. In addition, we describe in detail some of the key alternative in silico prediction approaches that are available, as well as discussing the Critical Assessment of Techniques for Protein Structure Prediction (CASP) and the Continuous Automated Model EvaluatiOn (CAMEO) projects, and their impact on developments in the field. Furthermore, we discuss the importance of protein function prediction methods for tackling 21st century problems.