1000 results for Automatization, VI coding, calibration, hot wire anemometry
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Graduate Program in Mechanical Engineering (Pós-graduação em Engenharia Mecânica) - FEIS
Abstract:
We have cloned the platelet collagen receptor glycoprotein (GP) VI from a human bone marrow cDNA library using rapid amplification of cDNA ends with platelet mRNA to complete the 5' end sequence. GPVI was isolated from platelets using affinity chromatography on the snake C-type lectin, convulxin, as a critical step. Internal peptide sequences were obtained, and degenerate primers were designed to amplify a fragment of the GPVI cDNA, which was then used as a probe to screen the library. Purified GPVI, as well as Fab fragments of polyclonal antibodies made against the receptor, inhibited collagen-induced platelet aggregation. The GPVI receptor cDNA has an open reading frame of 1017 base pairs coding for a protein of 339 amino acids, including a putative 23-amino acid signal sequence and a 19-amino acid transmembrane domain between residues 247 and 265. GPVI belongs to the immunoglobulin superfamily, and its sequence is closely related to FcαR and to the natural killer receptors. Its extracellular chain has two Ig-C2-like domains formed by disulfide bridges. An arginine residue is found in position 3 of the transmembrane portion, which should permit association with Fcγ and its immunoreceptor tyrosine-based activation motif via a salt bridge. With 51 amino acids, the cytoplasmic tail is relatively long and shows little homology to the C-terminal part of the other family members. The ability of the cloned GPVI cDNA to code for a functional platelet collagen receptor was demonstrated in the megakaryocytic cell line Dami. Dami cells transfected with GPVI cDNA mobilized intracellular Ca²⁺ in response to collagen, unlike the nontransfected or mock-transfected Dami cells, which do not respond to collagen.
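The coding arithmetic quoted in this abstract can be checked directly; a minimal sketch using only the figures stated above:

```python
# Sanity-check of the coding arithmetic reported for the GPVI cDNA.
# All figures come from the abstract above; nothing else is assumed.
orf_bp = 1017                 # open reading frame length, base pairs
protein_aa = orf_bp // 3      # one codon (3 bp) per amino acid
assert protein_aa == 339      # matches the reported 339-residue protein

tm_start, tm_end = 247, 265   # reported transmembrane domain boundaries
tm_len = tm_end - tm_start + 1
assert tm_len == 19           # matches the reported 19-amino-acid domain
print(protein_aa, tm_len)     # 339 19
```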
Abstract:
Inbred strains of three species of fishes of the genus Xiphophorus (platyfish and swordtails) were crossed to produce intra- and interspecific F1 hybrids, which were then backcrossed to one or both parental stocks. Backcross hybrids were used for the analysis of segregation and linkage of 33 protein-coding loci (whose products were visualized by starch gel electrophoresis) and a sex-linked pigment pattern gene. Segregation was Mendelian for all loci with the exception of one instance of segregation distortion. Six linkage groups of enzyme-coding loci were established: LG I, ADA --6%-- G6PD --24%-- 6PGD; LG II, Est-2 --27%-- Est-3 --0%-- Est-5 --23%-- LDH-1 --16%-- MPI; LG III, AcPh --38%-- G3PD-1 (GUK-2 --14%-- G3PD-1 is also in LG III, but the position of GUK-2 with respect to AcPh has not yet been determined); LG IV, GPI-1 --41%-- IDH-1; LG V, Est-1 --38%-- MDH-2; and LG VI, P1P --7%-- UMPK-1 (P1P is a plasma protein, very probably transferrin). Sex-specific recombination appeared absent in LG II and LG IV locus pairs; significantly higher male recombination was demonstrated in LG I, but significantly higher female recombination was detected in LG V. Only one significant population-specific difference in recombination was detected, in the G6PD - 6PGD region of LG I; the notable absence of such effects implies close correspondence of the genomes of the species used in the study. Two cases of possible evolutionary conservation of linkage groups in fishes and mammals were described, involving the G6PD - 6PGD linkage in LG I and the cluster of esterase loci in LG II. One clear case of divergence was observed, that of the linkage of ADA in LG I. It was estimated that a minimum of ~50% of the Xiphophorus genome was marked by the loci studied. Therefore, the prior probability that a new locus will assort independently from the markers already established is estimated to be less than 0.5.
A maximum of 21 of the 24 pairs of chromosomes could be marked with at least one locus. Only the two LG V loci showed a significant association with a postulated gene controlling the severity of a genetically controlled melanoma caused by abnormal proliferation of macromelanophore pigment pattern cells. The independence of melanotic severity from all other informative markers implies that one or at most a few major genes are involved in control of melanotic severity in this system.
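Map distances such as "P1P --7%-- UMPK-1" are recombination fractions estimated from backcross progeny. A minimal sketch of the estimate, with hypothetical progeny counts chosen to reproduce the 7% figure (only the method reflects the study):

```python
# Recombination fraction from a backcross, the quantity behind map distances
# like "P1P --7%-- UMPK-1" above. The progeny counts are hypothetical.
parental = 186       # progeny with a parental allele combination
recombinant = 14     # progeny with a recombinant allele combination
r = recombinant / (parental + recombinant)
print(f"{100 * r:.0f}%")   # 7%
```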
Abstract:
An accurate and efficient determination of the highly toxic Cr(VI) in solid materials is important to determine the total Cr(VI) inventory of contaminated sites and the Cr(VI) release potential from such sites into the environment. Most commonly, total Cr(VI) is extracted from solid materials following a hot alkaline extraction procedure (US EPA method 3060A), in which a complete release of water-extractable and sparingly soluble Cr(VI) phases is achieved. This work presents an evaluation of matrix effects that may occur during the hot alkaline extraction and in the determination of the total Cr(VI) inventory of variably composed contaminated soils and industrial materials (cement, fly ash), and the results are compared to water-extractable Cr(VI) results. Method validation, including multiple extractions and matrix spiking along with chemical and mineralogical characterization, showed satisfactory results for total Cr(VI) contents for most of the tested materials. However, unreliable results were obtained by applying method 3060A to anoxic soils due to the degradation of organic material and/or reactions with Fe2+-bearing mineral phases. In addition, in certain samples discrepant spike recoveries also have to be attributed to sample heterogeneity. Separation of possibly extracted Cr(III) by applying cation-exchange cartridges prior to solution analysis further shows that under the hot alkaline extraction conditions only Cr(VI) is present in solution in measurable amounts, whereas Cr(III) precipitates as amorphous Cr(OH)3(am). It is concluded that prior to routine application of method 3060A to a new material type, spiking tests are recommended for the identification of matrix effects. In addition, the mass of extracted solid material should be well adjusted to the heterogeneity of the Cr(VI) distribution in the material in question.
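Matrix spiking of the kind used here for method validation reduces to a simple recovery calculation; a minimal sketch with hypothetical concentrations (the 75-125% acceptance window in the comment is a common convention, not a figure from this work):

```python
# Spike-recovery check, as used in matrix-spiking validation of method 3060A.
# All concentrations are hypothetical illustration values (mg/kg Cr(VI)).
def spike_recovery(measured_spiked, measured_native, spike_added):
    """Percent recovery of a known Cr(VI) spike added before extraction."""
    return 100.0 * (measured_spiked - measured_native) / spike_added

rec = spike_recovery(measured_spiked=148.0, measured_native=52.0,
                     spike_added=100.0)
# A common convention flags recoveries outside ~75-125% as matrix effects.
print(f"{rec:.0f}%")   # 96%
```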
Abstract:
We update the TrES-4 system parameters using high-precision HARPS-N radial-velocity measurements and new photometric light curves. A combined spectroscopic and photometric analysis allows us to determine a spectroscopic orbit with a semi-amplitude K = 51 ± 3 m/s. The derived mass of TrES-4b is found to be Mp = 0.49 ± 0.04 MJup, significantly lower than previously reported. Combined with the large radius (Rp = 1.84 +0.08/-0.09 RJup) inferred from our analysis, TrES-4b becomes the transiting hot Jupiter with the second-lowest density known. We discuss several scenarios to explain the puzzling discrepancy in the mass of TrES-4b in the context of the exotic class of highly inflated transiting giant planets.
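The quoted mass and radius fix the bulk density directly; a short sketch, assuming standard Jupiter reference values (not given in the abstract):

```python
import math

# Bulk density of TrES-4b from the mass and radius quoted above.
# The Jupiter reference mass and radius are assumptions of this sketch.
M_JUP = 1.898e27   # kg
R_JUP = 7.1492e7   # m

m = 0.49 * M_JUP               # Mp from the abstract
r = 1.84 * R_JUP               # Rp from the abstract (central value)
rho = m / ((4.0 / 3.0) * math.pi * r ** 3)   # kg/m^3
print(f"{rho / 1000:.3f} g/cm^3")            # 0.098 g/cm^3, strongly inflated
```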
Abstract:
Current nanometer technologies suffer from within-die parameter uncertainties, varying workload conditions, aging, and temperature effects that cause a serious reduction in yield and performance. In this scenario, monitoring, calibration, and dynamic adaptation become essential, demanding systems with a collection of multi-purpose monitors and exposing the need for lightweight monitoring networks. This paper presents a new monitoring-network paradigm able to perform an early prioritization of the information. This is achieved by the introduction of a new hierarchy level, the threshing level. Targeting it, we propose a time-domain signaling scheme over a single wire that minimizes the network switching activity as well as the routing requirements. To validate our approach, we make a thorough analysis of the architectural trade-offs and present two complete monitoring systems that achieve an area improvement of 40% and a power reduction of three orders of magnitude compared to previous works.
Abstract:
Temperature is a first-class design concern in modern integrated circuits.
The important increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the temperature dependence of the leakage currents. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on the temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output.
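The linearization step described above can be illustrated numerically: if the pulse width falls exponentially with temperature, its base-2 logarithm (the ideal output of a logarithmic counter) is a straight line in temperature. The constants below are illustrative assumptions, not values from the thesis:

```python
import math

# Why a logarithmic counter linearizes this sensor: an exponentially
# temperature-dependent discharge time has a log that is linear in T.
W0 = 1.0e-3     # s, hypothetical pulse width at 0 degrees
T0 = 30.0       # degrees, hypothetical exponential scale

def pulse_width(t):
    return W0 * math.exp(-t / T0)

temps = [20, 40, 60, 80, 100]
codes = [math.log2(pulse_width(t)) for t in temps]    # ideal counter outputs
steps = [b - a for a, b in zip(codes, codes[1:])]
# Equal temperature steps produce equal code steps: a linear digital output.
print(all(abs(s - steps[0]) < 1e-6 for s in steps))   # True
```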
The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of the publication of this thesis, still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration: it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it specially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations carried along with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which we alter a characteristic of the discharging transistor (the gate voltage). This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature (1.17 °C 3σ error considering process variations and performing two-point calibration). The implementation of the sensing part based on this new technique implies several issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure as the former sensor.
A completely new standard-cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors; we consider the power consumption, the sampling frequency, the possibility of choosing among different types of monitors, and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions, and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. When compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient from the area and power perspectives.
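Before moving to the network level, the annealing-based allocation step above can be sketched. This is a deliberately simplified stand-in for the thesis model: a toy cost trades a sensing-error proxy against a per-monitor power/area penalty, and the grid, hotspots, weights, and cooling schedule are all illustrative assumptions:

```python
import math
import random

# Toy simulated-annealing placement: choose positions for 4 monitors on a
# 16x16 grid so hotspots end up close to some monitor, with a fixed penalty
# per monitor standing in for power/wiring cost.
random.seed(1)
GRID = 16
HOTSPOTS = [(2, 3), (12, 4), (7, 13), (14, 14)]   # hypothetical hot spots

def cost(monitors):
    err = sum(min(abs(hx - mx) + abs(hy - my) for mx, my in monitors)
              for hx, hy in HOTSPOTS)             # sensing-error proxy
    return err + 2.0 * len(monitors)              # power/area penalty

def neighbour(monitors):
    m = list(monitors)
    m[random.randrange(len(m))] = (random.randrange(GRID),
                                   random.randrange(GRID))  # move one monitor
    return m

state = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(4)]
best, best_c = state, cost(state)
temp = 10.0
while temp > 0.01:
    cand = neighbour(state)
    delta = cost(cand) - cost(state)
    if delta < 0 or random.random() < math.exp(-delta / temp):  # Metropolis
        state = cand
        if cost(state) < best_c:
            best, best_c = state, cost(state)
    temp *= 0.995                                 # geometric cooling
print(len(best), best_c >= 8.0)                   # 4 True
```

The thesis model additionally selects the monitor type and sampling rate; here only positions are optimized, to keep the sketch short.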
Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information that is sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and directly yields an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion (TDC), digitization resources can be shared, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
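The ordering property of the time-domain scheme can be illustrated in a few lines: if each monitor fires a single pulse on the shared wire after a delay that decreases with its reading, the controller receives the readings already sorted, hottest first, and can stop after the extremes it cares about. The monitor values and the delay mapping below are illustrative assumptions:

```python
# Time-domain, single-wire signalling sketch: one pulse per monitor, delayed
# inversely with temperature, so pulses arrive hottest-first on the wire.
NS_PER_DEGREE = 10          # hypothetical delay slope, ns per degree
FULL_SCALE = 120            # hypothetical full-scale temperature, degrees C

readings = {"m0": 71, "m1": 58, "m2": 93, "m3": 64}   # degrees C

# Each monitor schedules one pulse; the shared wire merges them in time order.
events = sorted((NS_PER_DEGREE * (FULL_SCALE - v), mid)
                for mid, v in readings.items())

hottest = events[0][1]      # first pulse on the wire = highest reading
print(hottest)              # m2
```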
Abstract:
Double-strand breaks (DSBs) have been found at several meiotic recombination hot spots in Saccharomyces cerevisiae; more global studies have found that they occur at many places along several yeast chromosomes during meiosis. Indeed, the number of breaks found is consistent with the number of recombination events predicted from the genetic map. We have previously demonstrated that the HIS2 gene is a recombination hot spot, exhibiting a high frequency of gene conversion and associated crossing over. This paper shows that DSBs occur in meiosis at a site in the coding region and at a site downstream of the HIS2 gene and that the DSBs are dependent upon genes required for recombination. The frequency of DSBs at HIS2 increases when the gene conversion frequency is increased by alterations in the DNA around HIS2, and vice versa. A deletion that increases both DSBs and conversion can stimulate both when heterozygous; that is, it is semidominant and acts to stimulate DSBs in trans. These data are consistent with the view that homologous chromosomes associate with each other before the formation of the DSBs.
Abstract:
Ullrich syndrome is a recessive congenital muscular dystrophy affecting connective tissue and muscle. The molecular basis is unknown. Reverse transcription–PCR amplification performed on RNA extracted from fibroblasts or muscle of three Ullrich patients followed by heteroduplex analysis displayed heteroduplexes in one of the three genes coding for collagen type VI (COL6). In patient A, we detected a homozygous insertion of a C leading to a premature termination codon in the triple-helical domain of COL6A2 mRNA. Both healthy consanguineous parents were carriers. In patient B, we found a deletion of 28 nucleotides because of an A → G substitution at nucleotide −2 of intron 17 causing the activation of a cryptic acceptor site inside exon 18. The second mutation was an exon skipping because of a G → A substitution at nucleotide −1 of intron 23. Both mutations are present in an affected brother. The first mutation is also present in the healthy mother, whereas the second mutation is carried by their healthy father. In patient C, we found only one mutation so far—the same deletion of 28 nucleotides found in patient B. In this case, it was a de novo mutation, as it is absent in her parents. mRNA and protein analysis of patient B showed very low amounts of COL6A2 mRNA and of COL6. A near total absence of COL6 was demonstrated by immunofluorescence in fibroblasts and muscle. Our results demonstrate that Ullrich syndrome is caused by recessive mutations leading to a severe reduction of COL6.
Abstract:
Using a scanning tunnelling microscope or a mechanically controllable break junction, it has been shown that it is possible to control the formation of a wire made of single gold atoms. In these experiments an interatomic distance between atoms in the chain of ∼3.6 Å was reported, which is not consistent with recent theoretical calculations. Here, using precise calibration procedures for both techniques, we measure the length of the atomic chains. Based on the distance between the peaks observed in the chain-length histogram, we find the mean value of the interatomic distance before chain rupture to be 2.5 ± 0.2 Å. This value agrees with the theoretical calculations for the bond length. The discrepancy with the previous experimental measurements was due to the presence of He gas, which was used to promote thermal contact and which affects the value of the work function that is commonly used to calibrate distances in scanning tunnelling microscopy and mechanically controllable break junctions at low temperatures.
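The peak-spacing analysis described above amounts to averaging the differences between successive histogram peaks; a minimal sketch with illustrative peak positions near the reported value (not the measured histogram):

```python
# Mean interatomic distance from chain-length histogram peaks: successive
# peaks differ by one atom's bond length. Peak positions are illustrative.
peaks = [4.9, 7.4, 10.0, 12.4]     # chain-length peaks, angstrom
spacings = [b - a for a, b in zip(peaks, peaks[1:])]
mean_d = sum(spacings) / len(spacings)
print(f"{mean_d:.1f} A")           # 2.5 A
```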
Abstract:
Objective: To compare the incidence of ventilator-associated pneumonia (VAP) in patients ventilated in intensive care by means of circuits humidified with a hygroscopic heat-and-moisture exchanger with a bacterial/viral filter (HME) or hot-water humidification with a heater wire in both inspiratory and expiratory circuit limbs (DHW) or the inspiratory limb only (SHW). Design: A prospective, randomized trial. Setting: A metropolitan teaching hospital's general intensive care unit. Patients: Three hundred eighty-one patients requiring a minimum period of mechanical ventilation of 48 hrs. Interventions: Patients were randomized to humidification with use of an HME (n = 190), SHW (n = 94), or DHW (n = 97). Measurements and Main Results: Study end points were VAP diagnosed on the basis of the Clinical Pulmonary Infection Score (CPIS) (1), HME resistance after 24 hrs of use, endotracheal tube resistance, and HME use per patient. VAP occurred with similar frequency in all groups (13%, HME; 14%, DHW; 10%, SHW; p = 0.61) and was predicted only by current smoking (adjusted odds ratio [AOR], 2.1; 95% confidence interval [CI], 1.1-3.9; p = .03) and ventilation days (AOR, 1.05; 95% CI, 1.0-1.2; p = .001); VAP was less likely for patients with an admission diagnosis of pneumonia (AOR, 0.40; 95% CI, 0.4-0.2; p = .04). HME resistance after 24 hrs of use measured at a gas flow of 50 L/min was 0.9 cm H2O (0.4-2.9). Endotracheal tube resistance was similar for all three groups (16-19 cm H2O min/L; p = .2), as were suction frequency, secretion thickness, and blood on suctioning (p = .32, p = .06, and p = .34, respectively). The HME use per patient per day was 1.13. Conclusions: Humidification technique does not influence either VAP incidence or secretion characteristics, but HMEs may have air-flow resistance higher than manufacturer specifications after 24 hrs of use.
Abstract:
The processes that take place during the development of a heating are difficult to visualise. Bulk coal self-heating tests at The University of Queensland (UQ) using a two-metre column are providing graphic evidence of the stages that occur during a heating. Data obtained from these tests, both temperature and the corresponding off-gas evolution, can be transformed into what is effectively a video replay of the heating event. This is achieved by loading both sets of data into a newly developed animation package called Hotspot. The resulting animation is ideal for spontaneous combustion training purposes, as the viewer can readily identify the different hot-spot stages and corresponding off-gas signatures. Colour coding of the coal temperature, as the hot spot forms, highlights its location in the coal pile and shows its ability to migrate upwind. An added benefit of the package is that once a mine's coal has been tested in the UQ two-metre column, there is a permanent record of that particular coal's performance for mine personnel to view.
Abstract:
A number of investigators have studied the application of oscillatory energy to a metal undergoing plastic deformation. Their results have shown that oscillatory stresses reduce both the stress required to initiate plastic deformation and the friction forces between the tool and workpiece. The first two sections of this thesis discuss, historically and technically, the development of the use of oscillatory energy techniques to aid metal forming, with particular reference to wire drawing. The remainder of the thesis discusses the research undertaken to study the effect of applying longitudinal oscillations to wire drawing. Oscillations were supplied from an electro-hydraulic vibrator at frequencies in the range 25 to 500 c/s, and drawing tests were performed at drawing speeds up to 50 ft/min on a 2000 lbf bull-block. Equipment was designed to measure the drawing force, drawing torque, amplitude of die and drum oscillation, and drawing speed. Reasons are given for selecting mild steel, pure and hard aluminium, stainless steel and hard copper as the materials to be drawn, and the experimental procedure and calibration of measuring equipment are described. Results show that when oscillatory stresses are applied at frequencies within the range investigated: (a) there is no reduction in the maximum drawing load; (b) using sodium stearate lubricant there is a negligible reduction in the coefficient of friction between the die and wire; (c) pure aluminium does not absorb sufficient oscillatory energy to ease the movement of dislocations; (d) hard aluminium is not softened by oscillatory energy accelerating the diffusion process; (e) hard copper is not cyclically softened. A vibration analysis of the bull-block and wire showed that oscillatory drawing in this frequency range is a mechanical process of straining and unstraining the drawn wire, and is dependent upon the stiffness of the material being drawn and the drawing machine.
Directions for further work are suggested.
Abstract:
This thesis first considers the calibration and signal-processing requirements of a neuromagnetometer for the measurement of human visual function. Gradiometer calibration using straight-wire grids is examined and optimal grid configurations determined, given realistic constructional tolerances. Simulations show that for a gradiometer balance of 1:10⁴ and a wire-spacing error of 0.25 mm, the achievable calibration accuracy is 0.3% in gain, 0.3 mm in position and 0.6° in orientation. Practical results with a 19-channel second-order gradiometer-based system exceed this performance. The real-time application of adaptive reference noise cancellation filtering to running-average evoked-response data is examined. In the steady state, the filter can be assumed to be driven by a non-stationary step input arising at epoch boundaries. Based on empirical measures of this driving step, an optimal progression for the filter time constant is proposed which improves upon fixed-time-constant filter performance. The incorporation of the time derivatives of the reference channels was found to improve the performance of the adaptive filtering algorithm by 15-20% for unaveraged data, falling to 5% with averaging. The thesis concludes with a neuromagnetic investigation of evoked cortical responses to chromatic and luminance grating stimuli. The global magnetic field power of evoked responses to the onset of sinusoidal gratings was shown to have distinct chromatic and luminance-sensitive components. Analysis of the results, using a single equivalent current dipole model, shows that these components arise from activity within two distinct cortical locations. Co-registration of the resulting current-source localisations with MRI shows a chromatically responsive area lying along the midline within the calcarine fissure, possibly extending onto the lingual and cuneal gyri. It is postulated that this area is the human homologue of the primate cortical area V4.
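A minimal sketch of adaptive reference noise cancellation of the kind examined above, using a plain LMS update with the reference channel and its time derivative as inputs. The signal model, step size, and the assumption of an ideal reference (one that sees the interference exactly) are all simplifications for illustration:

```python
import math
import random

# LMS adaptive reference noise cancellation: subtract an adaptively weighted
# combination of a reference channel and its time derivative from the
# measured channel. All signals and constants here are illustrative.
random.seed(0)
N = 4000
noise = [math.sin(0.07 * n) + 0.3 * random.gauss(0, 1) for n in range(N)]
signal = [0.1 * math.sin(0.5 * n) for n in range(N)]       # wanted signal
measured = [s + 0.8 * x for s, x in zip(signal, noise)]    # contaminated

w0 = w1 = 0.0        # adaptive weights: reference, derivative of reference
mu = 0.01            # LMS step size
prev = 0.0
out = []
for n in range(N):
    ref = noise[n]
    dref = ref - prev                    # discrete time derivative
    prev = ref
    est = w0 * ref + w1 * dref           # estimated interference
    e = measured[n] - est                # cleaned sample (= error signal)
    w0 += mu * e * ref                   # LMS weight updates
    w1 += mu * e * dref
    out.append(e)

def power(x):
    return sum(v * v for v in x) / len(x)

# After adaptation settles, most interference power should be removed.
print(power(out[2000:]) < 0.25 * power(measured[2000:]))   # True
```

In practice the reference sensors see a distorted version of the interference, which is where the time-derivative terms reported above to give a 15-20% improvement come in.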