896 results for parabolic-elliptic equation, inverse problems, factorization method


Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: There is increasing evidence that a history of childhood abuse and neglect is not uncommon among individuals who experience mental disorder and that childhood trauma experiences are associated with adult psychopathology. Although several interview and self-report instruments for retrospective trauma assessment have been developed, many focus on sexual abuse (SexAb) rather than on multiple types of trauma or adversity. METHODS: Within the European Prediction of Psychosis Study, the Trauma and Distress Scale (TADS) was developed as a new self-report assessment of multiple types of childhood trauma and distressing experiences. The TADS includes 43 items and, following previous measures including the Childhood Trauma Questionnaire, focuses on five core domains: emotional neglect (EmoNeg), emotional abuse (EmoAb), physical neglect (PhyNeg), physical abuse (PhyAb), and SexAb. This study explores the psychometric properties of the TADS (internal consistency and concurrent validity) in 692 participants drawn from the general population who completed a mailed questionnaire comprising the TADS, a depression self-report, and questions on help-seeking for mental health problems. Inter-method reliability was examined in a random sample of 100 responders who were reassessed in telephone interviews. RESULTS: After minor revisions of PhyNeg and PhyAb, internal consistencies were good for the TADS total and the domain raw score sums. Intra-class coefficients for the TADS total score and the five revised core domains were all good to excellent when compared to the interviewed TADS as a gold standard. In the concurrent validity analyses, the total TADS and all its core domains were significantly associated with depression and with help-seeking for mental health problems as proxy measures for traumatisation. In addition, robust cutoffs for the total TADS and its domains were calculated. CONCLUSIONS: Our results suggest that the TADS is a valid, reliable, and clinically useful instrument for assessing retrospectively reported childhood traumatisation.
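The internal consistencies reported above are conventionally Cronbach's alpha coefficients; as a minimal sketch of how such a coefficient is computed from an item-score matrix (hypothetical data, not the TADS responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the domain
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-item domain: items load on one latent score plus noise
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
demo = trait + 0.8 * rng.normal(size=(200, 5))
print(f"alpha = {cronbach_alpha(demo):.2f}")
```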

Relevance:

100.00%

Publisher:

Abstract:

Empirical relationships between physical properties determined non-destructively by core logging devices and calibrated against carbonate and opal measurements on discrete samples allow carbonate and opal records to be extracted from the non-destructive measurements in biogenic settings. Contents of detrital material can be calculated as a residual. For carbonate and opal, the correlation coefficients (r) are 0.954 and −0.916 for sediment density, −0.816 and 0.845 for compressional-wave velocity, 0.908 and −0.942 for acoustic impedance, and 0.886 and −0.865 for sediment color (lightness). Carbonate contents increase in concert with increasing density and acoustic impedance, decreasing velocity, and lighter sediment color. The opposite is true for opal. The advantages of deriving the sediment composition quantitatively from core logging are: (i) sampling resolution is increased significantly, (ii) non-destructive data can be gathered rapidly, and (iii) laboratory work on discrete samples can be reduced. Applied to paleoceanographic problems, this method offers the opportunity for precise stratigraphic correlations and for studying processes related to biogenic sedimentation in more detail. Density is the most promising property because it is most strongly affected by changes in composition.
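As a minimal sketch of the calibration workflow described, assuming a simple linear relation between logged density and measured carbonate (all values hypothetical):

```python
import numpy as np

# Discrete calibration samples: logged density vs. measured CaCO3 (hypothetical)
density_cal = np.array([1.35, 1.42, 1.50, 1.58, 1.65, 1.72])    # g/cm^3
carbonate_cal = np.array([12.0, 25.0, 41.0, 55.0, 70.0, 83.0])  # wt%

# Least-squares linear calibration, in the spirit of r = 0.954 for density
slope, intercept = np.polyfit(density_cal, carbonate_cal, deg=1)

# Apply to the continuous density log for a high-resolution carbonate record
density_log = np.array([1.38, 1.44, 1.61, 1.70])
carbonate_pred = slope * density_log + intercept

# Given an analogous opal calibration, detrital content follows as the residual
opal_pred = np.full_like(carbonate_pred, 10.0)  # placeholder opal estimate, wt%
detrital_pred = 100.0 - carbonate_pred - opal_pred
print(carbonate_pred.round(1), detrital_pred.round(1))
```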

Relevance:

100.00%

Publisher:

Abstract:

Machine and statistical learning techniques are used in almost all online advertisement systems. The problem of discovering which content is more in demand (e.g., receives more clicks) can be modeled as a multi-armed bandit problem. Contextual bandits (i.e., bandits with covariates, side information, or associative reinforcement learning) associate with each specific content several features that define the "context" in which it appears (e.g., user, web page, time, region). This problem can be studied in the stochastic/statistical setting by means of the conditional probability paradigm using Bayes' theorem. However, for very large contextual information and/or under real-time constraints, exact calculation of Bayes' rule is computationally infeasible. In this article, we present a method that is able to handle large contextual information for learning in contextual-bandit problems. This method was tested on the Yahoo! dataset in the challenge at ICML 2012's workshop "New Challenges for Exploration & Exploitation 3", obtaining second place. Its basic exploration policy is deterministic in the sense that for the same input data (as a time series) the same results are obtained. We address the deterministic exploration vs. exploitation issue, explaining how the proposed method deterministically finds an effective dynamic trade-off based solely on the input data, in contrast to other methods that use a random number generator.
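The abstract does not specify the policy itself; as a sketch of a deterministic contextual-bandit learner in the same spirit, here is a minimal LinUCB-style policy (our illustration, not the paper's method), whose arm choice is a deterministic function of the data seen so far:

```python
import numpy as np

class LinUCB:
    """Deterministic contextual bandit: linear payoff model + upper confidence bound."""
    def __init__(self, n_arms: int, dim: int, alpha: float = 1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward accumulators

    def select(self, x: np.ndarray) -> int:
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                    # ridge-regression payoff estimate
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))            # no randomness: same data -> same arm

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

Replayed on the same click log (as a time series), such a policy reproduces the same sequence of choices, which is the sense of a deterministic exploration/exploitation trade-off discussed above.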

Relevance:

100.00%

Publisher:

Abstract:

Due to the strong dependence of photovoltaic energy efficiency on environmental conditions (temperature, irradiation, etc.), it is quite important to analyze the characteristics of photovoltaic devices in order to optimize energy production, even for small-scale users. The use of equivalent circuits is the preferred option for analyzing the performance of solar cells/panels. However, the aforementioned small-scale users rarely have the equipment or expertise to perform large testing/calculation campaigns; the only information available to them is the manufacturer's datasheet. The solution to this problem is the development of new, simple methods for defining equivalent circuits able to reproduce the behavior of the panel under any working condition from a very small amount of information. In the present work, a direct and completely explicit method to extract solar cell parameters from the manufacturer's datasheet is presented and tested. The method is based on an analytical formulation that uses the Lambert W function to make the implicit series-resistance equation explicit. It is then used to analyze the performance of a commercial solar panel (i.e., its current–voltage, or I–V, curve) at different levels of irradiation and temperature, based only on the information included in the manufacturer's datasheet.
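The paper's formulas are not reproduced in the abstract; the sketch below shows the standard Lambert-W closed form for the single-diode model current, the kind of explicit expression the method builds on (all parameter values hypothetical):

```python
import numpy as np
from scipy.special import lambertw

def diode_current(V, Iph, I0, Rs, Rsh, a):
    """Explicit single-diode current via the Lambert W function.

    Solves I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh for I,
    with a = n*Ns*Vt the modified ideality factor times thermal voltage.
    """
    arg = (Rs * I0 * Rsh / (a * (Rs + Rsh))) * np.exp(
        Rsh * (Rs * (Iph + I0) + V) / (a * (Rs + Rsh))
    )
    return (Rsh * (Iph + I0) - V) / (Rs + Rsh) - (a / Rs) * lambertw(arg).real

# Hypothetical panel parameters (not taken from any specific datasheet)
V = np.linspace(0.0, 21.0, 5)
I = diode_current(V, Iph=5.0, I0=1e-9, Rs=0.4, Rsh=300.0, a=1.3 * 36 * 0.0257)
print(np.round(I, 3))
```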

Relevance:

100.00%

Publisher:

Abstract:

Crop simulation models allow various tillage-rotation combinations to be analyzed and management scenarios to be explored. The DSSAT model was tested under rainfed conditions in a 16-year field experiment in semiarid central Spain, evaluating the effect of tillage system and winter-cereal-based rotations on crop yield and soil quality. The CERES and CROPGRO models were used to simulate crop growth and yield, while DSSAT CENTURY was used in the soil organic carbon (SOC) and soil nitrogen (SN) simulations. Both field observations and CERES-Barley simulations showed that barley grain yield was lower for continuous cereal (BB) than for vetch (VB) and fallow (FB) rotations under both tillage systems. The model predicted higher nitrogen availability under conventional tillage (CT) than under no-tillage (NT), leading to a higher yield under CT. SOC and SN in the top layer were higher in NT than in CT and decreased with depth in both simulated and observed values. The best combinations for the dryland conditions studied were CT-VB and CT-FB, but CT presented lower SN and SOC content than NT. The beneficial effect of NT on SOC and SN under semiarid Mediterranean conditions can be identified both by field observations and by crop model simulations.
The simulation of the water balance in cropping systems is a useful tool for studying how water can be used efficiently. Comparing the DSSAT soil water balance, a simple "tipping bucket" approach, with the more mechanistic WAVE model, which integrates the Richards equation, is a powerful way to assess model performance. The soil parameters were calibrated using the Simulated Annealing (SA) global optimization method. A continuous weighing lysimeter in a bare fallow provided the observed values of drainage and evapotranspiration (ET), while soil water content (SW) was supplied by capacitance sensors. Both models performed well after optimizing the soil parameters with SA, simulating the soil water balance components for the calibration period. For the validation period, the optimized models predicted soil water content and soil evaporation over time well. However, drainage was predicted better by WAVE than by DSSAT, which presented larger errors in the cumulative values. This could be due to the mechanistic nature of WAVE as against the more functional nature of DSSAT. The good results from WAVE indicate that, after calibration, it could be used as a benchmark for other models for periods when no drainage field measurements are available.
The performance of DSSAT-CENTURY when simulating SOC and N depends strongly on the initialization process. Initialization of the SOC pools from measurements of apparent soil N mineralization (Napmin) was proposed as an alternative method (Met.2). Method 2 was compared with the initialization method of Basso et al. (2011) (Met.1) by applying both methods to a 4-year field experiment in an irrigated area of central Spain. Nmin and Napmin were overestimated by Met.1, since the stable pool (SOC3) it produced in the upper soil layers was lower than that from Met.2. Simulated N leaching was similar for both methods, with good results in the fallow and barley treatments. Method 1 underestimated topsoil SOC when compared with a 12-year observed series. Crop growth and yield were properly simulated by both methods, but N in shoots and grain was overestimated by Met.1. Results varied significantly with the initial SOC pools, highlighting the importance of the initialization procedure. Method 2 offers an alternative way to initialize the CENTURY model, improving the simulation of soil N processes.
The continuous emergence of new varieties of modern maize hybrids limits the application of crop simulation models, since these new hybrids must be calibrated in the field before they are suitable for model use. Developing relationships based on cycle duration would simplify the calibration requirements, facilitating the rapid incorporation of new cultivars into DSSAT. Six maize hybrids (FAO 300 through FAO 700) were grown in a 2-year field experiment in a semiarid irrigated area of central Spain. Genetic coefficients were obtained sequentially, starting with the phenological development parameters (P1, P2, P5 and PHINT), followed by the crop growth parameters (G2 and G3). The procedure was continued until the simulated outputs were in good agreement with the field phenological observations. After calibration, simulated parameters matched observed parameters well, with low RMSE in most cases. The calibrated P1 and P5 increased with the duration of the cycle: P1 was a linear function of the thermal time (TT) from emergence to silking, and P5 was linearly related to the TT from silking to maturity. There were no significant differences in PHINT between hybrids from FAO-500 to FAO-700, as they had a similar leaf number. Since the phenological coefficients were directly related to cycle duration, it would be possible to develop ranges and correlations that allow such coefficients to be estimated from the cycle classification.
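DSSAT's soil water routine is of the "tipping bucket" type contrasted above with the Richards-equation-based WAVE; as a minimal single-layer sketch of the idea (parameter names and values hypothetical, not DSSAT's actual multi-layer routine):

```python
def tipping_bucket_day(sw, rain, et, dul=0.30, ll=0.10, depth=300.0):
    """One day of a single-layer tipping-bucket water balance.

    sw: volumetric soil water (cm3/cm3); rain, et: mm/day.
    dul: drained upper limit; ll: lower limit; depth: layer thickness (mm).
    Water held above the drained upper limit 'tips' out as drainage.
    """
    storage = sw * depth + rain - et          # mm of water in the layer
    storage = max(storage, ll * depth)        # ET cannot dry the layer below LL
    drainage = max(storage - dul * depth, 0)  # excess above DUL drains away
    storage -= drainage
    return storage / depth, drainage

sw = 0.22
for rain, et in [(12.0, 3.1), (0.0, 4.0), (35.0, 2.5)]:
    sw, drain = tipping_bucket_day(sw, rain, et)
    print(f"SW = {sw:.3f} cm3/cm3, drainage = {drain:.1f} mm")
```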

Relevance:

100.00%

Publisher:

Abstract:

In this paper we study non-negative radially symmetric solutions of a parabolic-elliptic Keller-Segel system. The system describes the chemotactic movement of cells in the additional circumstance that a chemoattractant is applied externally at a distinguished point.
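The abstract does not display the system; a standard parabolic-elliptic Keller-Segel model, to which the external point application of the chemoattractant described above would add a source term, has the form (a sketch of the usual setup, not necessarily the authors' exact equations):

$$u_t = \nabla\cdot\bigl(\nabla u - \chi\,u\,\nabla v\bigr), \qquad 0 = \Delta v - v + u + \mu\,\delta_{x_0},$$

where $u$ is the cell density, $v$ the chemoattractant concentration, $\chi$ the chemotactic sensitivity, and $\mu\,\delta_{x_0}$ a hypothetical point source at the distinguished point $x_0$.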

Relevance:

100.00%

Publisher:

Abstract:

The continuous plankton recorder (CPR) survey is an upper-layer plankton monitoring program that has regularly collected samples, at monthly intervals, in the North Atlantic and adjacent seas since 1946. Water from approximately 6 m depth enters the CPR through a small aperture at the front of the sampler and travels down a tunnel where it passes through a silk filtering mesh of 270 µm before exiting at the back of the CPR. The plankton filtered on the silk is analyzed in sections corresponding to 10 nautical miles (approx. 3 m**3 of seawater filtered) and the plankton are identified microscopically (Richardson et al., 2006 and references therein). In the present study we used the CPR data to investigate the current basin-scale distribution of C. finmarchicus (C5-C6), C. helgolandicus (C5-C6), C. hyperboreus (C5-C6), Pseudocalanus spp. (C6), Oithona spp. (C1-C6), total Euphausiida, total Thecosomata, the presence/absence of Cnidaria, and the Phytoplankton Colour Index (PCI). The PCI, a visual assessment of the greenness of the silk, is used as an indicator of the distribution of total phytoplankton biomass across the Atlantic basin (Batten et al., 2003). Monthly data collected between 2000 and 2009 were gridded using the inverse-distance interpolation method, with the interpolated values placed at the nodes of a 2 degree by 2 degree grid. The resulting twelve monthly matrices were then averaged within the year and, in the case of the zooplankton, the data were log-transformed (i.e., log10(x+1)).
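As a minimal sketch of the gridding step described (inverse-distance weighting onto 2-degree nodes followed by the log10(x+1) transform; coordinates and abundances hypothetical):

```python
import numpy as np

def idw_grid(lon, lat, values, grid_lon, grid_lat, power=2.0):
    """Inverse-distance-weighted interpolation of point samples onto grid nodes."""
    grid = np.empty((len(grid_lat), len(grid_lon)))
    for i, glat in enumerate(grid_lat):
        for j, glon in enumerate(grid_lon):
            d = np.hypot(lon - glon, lat - glat)
            if d.min() < 1e-9:                 # a sample sits exactly on the node
                grid[i, j] = values[d.argmin()]
            else:
                w = 1.0 / d**power
                grid[i, j] = (w * values).sum() / w.sum()
    return grid

# Hypothetical CPR samples: abundance per ~3 m**3 of filtered seawater
lon = np.array([-40.0, -35.2, -30.5]); lat = np.array([55.0, 57.1, 53.8])
abundance = np.array([12.0, 30.0, 4.0])
nodes_lon = np.arange(-42, -28, 2.0); nodes_lat = np.arange(52, 60, 2.0)
monthly = idw_grid(lon, lat, abundance, nodes_lon, nodes_lat)
annual_log = np.log10(monthly + 1.0)   # log10(x+1) transform used for zooplankton
```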

Relevance:

100.00%

Publisher:

Abstract:

This report presents an overview of wave-current interaction, including a comprehensive review of references to significant U.S. and foreign literature available through December 1981. Specific topics under review are the effects of horizontally and vertically varying currents on waves, wave refraction by currents, dissipation and turbulence, small- and medium-scale currents, caustics and focusing, and wave breaking. The results of the review are then examined for engineering applications. The most appropriate general-purpose computer program to include wave-current interaction is the Dutch Rijkswaterstaat program CREDIZ, which is based on a parabolic wave equation. Further applications include wave and current forces on structures and possibly sediment transport. The report concludes with a brief state-of-the-art review of wave-current interaction and a list of topics needing further research and development.
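The review itself is not reproduced here, but the kinematic starting point for most of the topics listed is the Doppler shift of the linear dispersion relation, which for waves on a depth-uniform current $\mathbf{U}$ reads

$$\sigma = \omega - \mathbf{k}\cdot\mathbf{U}, \qquad \sigma^{2} = g\,k\,\tanh(kh),$$

where $\omega$ is the absolute frequency, $\sigma$ the intrinsic frequency seen in a frame moving with the current, $\mathbf{k}$ the wavenumber vector, and $h$ the water depth.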

Relevance:

100.00%

Publisher:

Abstract:

A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion and better enforcement of regularization constraints than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration.
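As a minimal sketch of plain Tikhonov regularization posed as a minimization problem (the paper's contribution, estimating the relative regularization weights themselves during calibration, is not reproduced here; all matrices are hypothetical):

```python
import numpy as np

def tikhonov_solve(J, d, R, p0, beta):
    """Minimize ||J p - d||^2 + beta^2 ||R (p - p0)||^2 via the normal equations."""
    A = J.T @ J + beta**2 * (R.T @ R)
    b = J.T @ d + beta**2 * (R.T @ R) @ p0
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
n_obs, n_par = 20, 50                   # underdetermined: more parameters than data
J = rng.normal(size=(n_obs, n_par))     # sensitivity (Jacobian) matrix
p_true = np.sin(np.linspace(0, 3 * np.pi, n_par))
d = J @ p_true + 0.01 * rng.normal(size=n_obs)

# First-difference regularization operator: preferred relationship = smooth field
R = (np.eye(n_par) - np.eye(n_par, k=1))[:-1]
p_est = tikhonov_solve(J, d, R, np.zeros(n_par), beta=1.0)
```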

Relevance:

100.00%

Publisher:

Abstract:

Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
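The claim that each estimate is a weighted average of the true property field can be made concrete with the resolution matrix of a regularized inversion; a minimal sketch under a simple Tikhonov scheme (sensitivities hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_par = 15, 40
J = rng.normal(size=(n_obs, n_par))   # sensitivities of heads to log-K parameters
beta = 1.0

# Tikhonov solution operator G: p_est = G @ d (zero preferred values, for brevity)
A = J.T @ J + beta**2 * np.eye(n_par)
G = np.linalg.solve(A, J.T)

# Resolution matrix: for noise-free data d = J @ p_true, p_est = Rmat @ p_true
Rmat = G @ J

# Row i of Rmat holds the averaging weights that produce the estimate at point i.
row = Rmat[0]
print(f"weights sum to {row.sum():.2f}; the estimate averages over "
      f"{np.sum(np.abs(row) > 0.01)} of {n_par} parameters")
```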

Relevance:

100.00%

Publisher:

Abstract:

We study the Ornstein-Uhlenbeck operator and the Ornstein-Uhlenbeck semigroup in an open convex subset $\Omega$ of a separable Banach space $X$ endowed with a centered nondegenerate Gaussian measure $\gamma$. In particular, we prove the logarithmic Sobolev inequality and the Poincaré inequality, and from these inequalities we deduce spectral properties of the Ornstein-Uhlenbeck operator. We also study the elliptic equation $\lambda u + L^{\Omega}u = f$ in $\Omega$, where $L^{\Omega}$ is the Ornstein-Uhlenbeck operator. We prove that for $\lambda > 0$ and $f \in L^2(\Omega,\gamma)$ the weak solution $u$ belongs to the Sobolev space $W^{2,2}(\Omega,\gamma)$, and that $u$ satisfies the Neumann condition in the sense of traces at the boundary of $\Omega$. This is done by finite-dimensional approximation.
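For reference, the two functional inequalities proved take the following shape for the Gaussian measure $\gamma$ on $\Omega$ (a sketch of the standard statements; the constants $C$, $C'$ depend on the setting and are not those of the thesis):

$$\int_{\Omega}\bigl|f-\bar f\bigr|^{2}\,d\gamma \le C\int_{\Omega}|\nabla_{H}f|^{2}\,d\gamma, \qquad \int_{\Omega}f^{2}\log\frac{f^{2}}{\|f\|_{L^{2}(\Omega,\gamma)}^{2}}\,d\gamma \le C'\int_{\Omega}|\nabla_{H}f|^{2}\,d\gamma,$$

where $\nabla_{H}$ denotes the gradient along the Cameron-Martin space of $\gamma$ and $\bar f$ is the $\gamma$-mean of $f$ over $\Omega$.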

Relevance:

100.00%

Publisher:

Abstract:

Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.
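As a minimal sketch of the architecture described: a network body feeding three heads (mixing coefficients, component means, component widths), trained by the negative log-likelihood of the resulting Gaussian mixture. Layer sizes and data here are hypothetical (PyTorch):

```python
import torch
import torch.nn as nn

class MixtureDensityNetwork(nn.Module):
    """Feed-forward net whose outputs parameterize a Gaussian mixture p(t|x)."""
    def __init__(self, n_in: int, n_hidden: int, n_components: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.pi = nn.Linear(n_hidden, n_components)         # mixing coefficients
        self.mu = nn.Linear(n_hidden, n_components)         # component means
        self.log_sigma = nn.Linear(n_hidden, n_components)  # component widths

    def forward(self, x):
        h = self.body(x)
        return (torch.log_softmax(self.pi(h), dim=-1),
                self.mu(h),
                self.log_sigma(h).exp())

def mdn_nll(log_pi, mu, sigma, t):
    """Negative log-likelihood of targets t under the predicted mixture."""
    comp = torch.distributions.Normal(mu, sigma).log_prob(t.unsqueeze(-1))
    return -torch.logsumexp(log_pi + comp, dim=-1).mean()

# One training step on hypothetical 1-D data
net = MixtureDensityNetwork(1, 20, 3)
x, t = torch.rand(64, 1), torch.rand(64)
loss = mdn_nll(*net(x), t)
loss.backward()
```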

Relevance:

100.00%

Publisher:

Abstract:

Neural networks are usually curved statistical models. They do not have finite-dimensional sufficient statistics, so on-line learning on the model itself inevitably loses information. In this paper we propose a new scheme for training curved models, inspired by the ideas of ancillary statistics and adaptive critics. At each point estimate, an auxiliary flat model (an exponential family) is built to locally accommodate both the usual statistic (tangent to the model) and an ancillary statistic (normal to the model). The auxiliary model plays a role in determining credit assignment analogous to that played by an adaptive critic in solving temporal problems. The method is illustrated with the Cauchy model, and the algorithm is proved to be asymptotically efficient.

Relevance:

100.00%

Publisher:

Abstract:

Neural networks have often been motivated by superficial analogy with biological nervous systems. Recently, however, it has become widely recognised that the effective application of neural networks requires instead a deeper understanding of the theoretical foundations of these models. Insight into neural networks comes from a number of fields including statistical pattern recognition, computational learning theory, statistics, information geometry and statistical mechanics. As an illustration of the importance of understanding the theoretical basis for neural network models, we consider their application to the solution of multi-valued inverse problems. We show how a naive application of the standard least-squares approach can lead to very poor results, and how an appreciation of the underlying statistical goals of the modelling process allows the development of a more general and more powerful formalism which can tackle the problem of multi-modality.
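A tiny numerical illustration of the failure mode described, using the classic toy problem of this literature (our construction, not necessarily the paper's exact experiment): the forward map x = t + 0.3 sin(2πt) + noise has a multi-valued inverse, and a least-squares fit of t on x returns the conditional average, which lies between the branches and is itself not a valid solution:

```python
import numpy as np

rng = np.random.default_rng(3)
t = rng.uniform(0, 1, 500)                       # underlying 'cause'
x = t + 0.3 * np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=500)

# Inverse problem: predict t from x by ordinary least squares (cubic features)
X = np.vander(x, 4)
w, *_ = np.linalg.lstsq(X, t, rcond=None)

# Around x = 0.5 the true inverse has three branches; the least-squares fit
# returns roughly their average, which lies between the valid answers.
x_query = 0.5
print("LS prediction at x = 0.5:", np.vander([x_query], 4) @ w)
print("actual t with x near 0.5:", np.sort(t[np.abs(x - x_query) < 0.02]))
```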

Relevance:

100.00%

Publisher:

Abstract:

The work presented in this thesis is divided into two distinct sections. In the first, the functional neuroimaging technique of magnetoencephalography (MEG) is described and a new technique is introduced for accurate combination of MEG and MRI co-ordinate systems. In the second part of this thesis, MEG and the analysis technique of Synthetic Aperture Magnetometry (SAM) are used to investigate responses of the visual system in the context of functional specialisation within the visual cortex. In chapter one, the sources of MEG signals are described, followed by a brief description of the instrumentation necessary for accurate MEG recordings. The chapter concludes by introducing the forward and inverse problems of MEG, techniques to solve the inverse problem, and a comparison of MEG with other neuroimaging techniques. Chapter two provides an important contribution to the field of research with MEG. Firstly, it is described how MEG and MRI co-ordinate systems are combined for localisation and visualisation of activated brain regions. A previously used co-registration method is then described, and a new technique is introduced. In a series of experiments, it is demonstrated that using fixed fiducial points provides a considerable improvement in the accuracy and reliability of co-registration. Chapter three introduces the visual system, starting from the retina and ending with the higher visual areas. The functions of the magnocellular and parvocellular pathways are described, and it is shown how the parallel visual pathways remain segregated throughout the visual system. The structural and functional organisation of the visual cortex is then described. Chapter four presents strong evidence in favour of the link between conscious experience and synchronised brain activity. The spatiotemporal responses of the visual cortex are measured in response to specific gratings. It is shown that stimuli that induce visual discomfort and visual illusions share their physical properties with those that induce highly synchronised gamma-frequency oscillations in the primary visual cortex. Finally, chapter five is concerned with the localisation of colour in the visual cortex. In this first ever use of Synthetic Aperture Magnetometry to investigate colour processing in the visual cortex, it is shown that in response to isoluminant chromatic gratings, the highest magnitude of cortical activity arises from area V2.