974 results for Multiple attenuation. Deconvolution. Seismic processing
Abstract:
The objective of this thesis is the refined estimation of earthquake source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed P- and S-wave displacement spectra to estimate the spectral parameters, i.e., corner frequencies and low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes corrections for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded by the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green function approach is a very useful tool for studying seismic source properties: Empirical Green Functions (EGFs) allow the contribution of propagation and site effects to the signal to be represented without resorting to approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to compute relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
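Although the thesis's multi-step inversion is not reproduced here, the core spectral-fitting idea can be sketched: an omega-square (Brune-type) source model with a simple t* attenuation term is fit to a displacement spectrum to recover the low-frequency level and corner frequency. The sketch below uses synthetic data and an assumed single-path t* parameterization; all names and numbers are illustrative only.

```python
# Hedged sketch: fitting an omega-square (Brune-type) source model with a
# simple t* attenuation term to a P- or S-wave displacement spectrum.
# Synthetic data stand in for real records; not the thesis's exact scheme.
import numpy as np
from scipy.optimize import curve_fit

def brune_spectrum(f, omega0, fc, t_star):
    """Displacement amplitude: flat level omega0, corner frequency fc,
    exponential decay exp(-pi*f*t*) approximating path attenuation."""
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * t_star)

rng = np.random.default_rng(0)
f = np.linspace(0.5, 40.0, 200)
true = brune_spectrum(f, omega0=1e-6, fc=5.0, t_star=0.02)
obs = true * rng.lognormal(sigma=0.1, size=f.size)   # noisy "observation"

popt, _ = curve_fit(brune_spectrum, f, obs, p0=[obs[0], 3.0, 0.01])
omega0_est, fc_est, tstar_est = popt
print(f"Omega0={omega0_est:.3e}, fc={fc_est:.2f} Hz, t*={tstar_est:.4f} s")
# Seismic moment then follows from Omega0 via M0 = 4*pi*rho*v^3*R*Omega0/(F*S),
# given density rho, wavespeed v, distance R, and radiation/site terms F, S.
```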
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, accurate quantification of perfusion parameters remains difficult, owing, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the sources of variability of this technique. First, analyses were made to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the "maximum slope method" and the "dual-input one-compartment model". Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; the slope method, however, is always applicable in a clinical context. The variability related to TAC processing in the application of the slope method was then analyzed. Comparison with manual selection identified the best automatic algorithm for computing BFa. The consistency of the Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion map was analyzed. The ROI approach and the map approach provide related values of BFa, which means that the pixel-by-pixel algorithm gives reliable quantitative results; in the pixel-by-pixel approach, too, the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
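As a rough illustration of the maximum slope method mentioned above (a sketch under common textbook assumptions, not the thesis's implementation): BFa is estimated as the maximum slope of the tissue TAC divided by the peak arterial enhancement. The toy curves and sampling interval below are invented.

```python
# Hedged sketch of the "maximum slope" computation of arterial blood flow
# (BFa) from CT time-attenuation curves (TACs). Real pipelines add noise
# filtering, motion correction, and careful baseline handling.
import numpy as np

def bfa_max_slope(tissue_tac, aorta_tac, dt):
    """BFa ~ max slope of the tissue TAC divided by peak arterial
    enhancement (both baseline-corrected); units 1/s, often rescaled to
    mL/min/100 mL under tissue-density assumptions."""
    tissue = tissue_tac - tissue_tac[0]   # crude baseline correction
    aorta = aorta_tac - aorta_tac[0]
    max_slope = np.max(np.gradient(tissue, dt))  # HU per second
    return max_slope / np.max(aorta)

# Toy curves sampled every 0.5 s
t = np.arange(0, 60, 0.5)
aorta = 300 * np.exp(-((t - 15) / 6.0) ** 2)                 # arterial input
tissue = 40 * (1 - np.exp(-np.clip(t - 12, 0, None) / 10))   # tissue uptake
print(f"BFa ~ {bfa_max_slope(tissue, aorta, dt=0.5):.4f} 1/s")
```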
Abstract:
The present study has been carried out with the following objectives: i) to investigate the attributes of source parameters of local and regional earthquakes; ii) to estimate, as accurately as possible, M0, fc, Δσ and their standard errors, in order to infer their relationship with source size; iii) to quantify high-frequency earthquake ground motion and to study the source scaling. This work is based on observational data of micro-, small, and moderate earthquakes for three selected seismic sequences, namely Parkfield (CA, USA), Maule (Chile) and Ferrara (Italy). For the Parkfield seismic sequence, a data set of 757 repeating micro-earthquakes (0 ≤ MW ≤ 2) in 42 clusters, collected using the borehole High Resolution Seismic Network (HRSN), was analyzed and interpreted. We used the coda methodology to compute spectral ratios and obtain accurate values of fc, Δσ, and M0 for three target clusters (San Francisco, Los Angeles, and Hawaii) of our data. We also performed a general regression on peak ground velocities to obtain reliable seismic spectra of all earthquakes. For the Maule seismic sequence, a data set of 172 aftershocks of the 2010 MW 8.8 earthquake (3.7 ≤ MW ≤ 6.2), recorded by more than 100 temporary broadband stations, was analyzed and interpreted to quantify high-frequency earthquake ground motion in this subduction zone. We completely calibrated the excitation and attenuation of the ground motion in Central Chile. For the Ferrara sequence, we calculated moment tensor solutions for 20 events, from MW 5.63 (the largest main event, which occurred on May 20, 2012) down to MW 3.2, using a 1-D velocity model for the crust beneath the Pianura Padana built from all the geophysical and geological information available for the area. The PADANIA model allowed a numerical study of the characteristics of the ground motion in the thick sediments of the flood plain.
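The spectral-ratio idea underlying the coda methodology can be illustrated with a minimal sketch: for two colocated events, path and site terms cancel in the ratio of their spectra, leaving the ratio of two omega-square source spectra from which both corner frequencies and the moment ratio can be fit. The data and starting values below are synthetic assumptions, not the Parkfield cluster measurements.

```python
# Hedged sketch: spectral ratio between a target event and a smaller
# colocated event; common propagation terms cancel in the ratio.
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(f, moment_ratio, fc1, fc2):
    """Ratio of two Brune spectra (target over small event)."""
    return moment_ratio * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

rng = np.random.default_rng(0)
f = np.linspace(1, 50, 300)
obs_ratio = ratio_model(f, 100.0, 4.0, 20.0) * rng.lognormal(sigma=0.05, size=f.size)

popt, _ = curve_fit(ratio_model, f, obs_ratio, p0=[50.0, 2.0, 10.0])
print("moment ratio %.1f, fc(target) %.2f Hz, fc(small) %.2f Hz" % tuple(popt))
# Stress drop then follows from fc via, e.g., Madariaga's relation
# fc = k * beta / r, with source radius r and a model-dependent constant k.
```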
Towards the 3D attenuation imaging of active volcanoes: methods and tests on real and simulated data
Abstract:
The purpose of my PhD thesis has been to address the problem of retrieving a three-dimensional attenuation model in volcanic areas. To this purpose, I first elaborated a robust strategy for the analysis of seismic data, performing several synthetic tests to assess the applicability of the spectral ratio method to our purposes. The results of the tests allowed us to conclude that: 1) the spectral ratio method gives reliable differential attenuation (dt*) measurements in smooth velocity models; 2) a short signal time window has to be chosen for the spectral analysis; 3) the frequency range over which spectral ratios are computed greatly affects the dt* measurements. Furthermore, a refined approach for the application of the spectral ratio method has been developed and tested; through this procedure, the effects of heterogeneities of the propagation medium on the seismic signals may be removed. The data analysis technique was applied to the real active-seismic SERAPIS database, providing a dataset of dt* measurements that was used to obtain a three-dimensional attenuation model of the shallowest part of the Campi Flegrei caldera. A linearized, iterative, damped attenuation tomography technique was then tested and applied to the selected dataset. The tomography, with a resolution of 0.5 km in the horizontal directions and 0.25 km in the vertical direction, allowed us to image important features in the offshore part of the Campi Flegrei caldera. High-Qp bodies are embedded in a high-attenuation volume (Qp = 30); the latter correlates well with low Vp and high Vp/Vs values and is interpreted as a layer of saturated marine and volcanic sediments. The high-Qp anomalies, instead, are interpreted as the effect of either cooled lava bodies or a CO2 reservoir. A pseudo-circular high-Qp anomaly was detected and interpreted as the buried rim of the NYT caldera.
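A minimal sketch of the dt* measurement underlying the tomography, assuming the standard log-linear form of the spectral ratio (ln(A1/A2) linear in f with slope -π·dt*). The band limits and toy spectra are illustrative, and echo conclusion 3 above: the chosen frequency range strongly affects the estimate.

```python
# Hedged sketch of differential attenuation (dt*) from a spectral ratio:
# the log amplitude ratio of two recordings of the same source is fit as
# a line in frequency; dt* = -slope / pi. Band choice matters (see text).
import numpy as np

def dt_star(freqs, spec_a, spec_b, fmin=5.0, fmax=20.0):
    """Least-squares slope of ln(A/B) vs f over [fmin, fmax]."""
    band = (freqs >= fmin) & (freqs <= fmax)
    log_ratio = np.log(spec_a[band] / spec_b[band])
    slope, _intercept = np.polyfit(freqs[band], log_ratio, 1)
    return -slope / np.pi

# Toy spectra differing only by attenuation, with true dt* = 0.01 s
f = np.linspace(1, 30, 100)
a = np.exp(-np.pi * f * 0.03)
b = np.exp(-np.pi * f * 0.02)
print(f"dt* ~ {dt_star(f, a, b):.4f} s")   # ~0.0100
```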
Abstract:
Opportunistic diseases caused by the Human Immunodeficiency Virus (HIV) and the Hepatitis B Virus (HBV) are an omnipresent global challenge. To manage these epidemics, we need low-cost, easily deployable point-of-care platforms for high-congestion settings such as airports and public transit systems. In this dissertation we present our findings on Localized Surface Plasmon Resonance (LSPR)-based detection of pathogens and other clinically relevant applications, using microfluidic platforms in point-of-care, resource-constrained environments. The work presented here adopts LSPR to build a multiplexed lab-on-a-chip (LOC) device capable of quantitatively detecting various types of intact viruses and their subtypes, based on the shift in resonance wavelength that occurs when the metal nanoparticle surface is modified with a specific surface chemistry, allowing the binding of a desired pathogen to a specific antibody. We demonstrate the ability to detect and quantify HIV subtypes A, B, C, D, E, G and a panel of HIV, down to 100 copies/mL, using both whole-blood samples and HIV-patient blood samples discarded from clinics. These results were compared against the gold-standard reverse transcription quantitative polymerase chain reaction (RT-qPCR). The microfluidic device has a total assay evaluation time of about 70 minutes: 60 minutes for capture and 10 minutes for data acquisition and processing. The LOC platform eliminates the need for any sample preparation before processing, and it is highly multiplexable, as the same surface chemistry can be adapted to capture and detect several other pathogens such as dengue virus, E. coli, and M. tuberculosis.
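Quantification in LSPR assays of this kind typically proceeds by inverting a calibration curve that relates the resonance-wavelength shift to analyte concentration. The sketch below assumes a log-linear calibration with invented calibration points; it is illustrative, not the dissertation's actual calibration.

```python
# Hedged sketch: converting an LSPR peak-wavelength shift into a viral-load
# estimate via a log-linear calibration curve. Calibration points are
# invented; real assays calibrate per subtype and per chip batch.
import numpy as np

# Calibration: known concentrations (copies/mL) vs measured peak shift (nm)
conc = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
shift = np.array([0.8, 1.9, 3.1, 4.2, 5.3])

m, b = np.polyfit(np.log10(conc), shift, 1)   # shift ~ m*log10(c) + b

def viral_load(measured_shift_nm):
    """Invert the calibration curve; valid only within the calibrated range."""
    return 10 ** ((measured_shift_nm - b) / m)

print(f"~{viral_load(2.5):.0f} copies/mL for a 2.5 nm shift")
```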
Abstract:
Glioblastoma multiforme (GBM) is the most common and most aggressive astrocytic tumor of the central nervous system (CNS) in adults. The standard treatment, consisting of surgery followed by combined radio- and chemotherapy, is only palliative and prolongs median patient survival to 12 to 15 months. The tumor subpopulation of stem-cell-like glioma-initiating cells (GICs) shows resistance against radiation as well as chemotherapy, and has been suggested to be responsible for relapses of more aggressive tumors after therapy. The efficacy of immunotherapies, which exploit the immune system to specifically recognize and eliminate malignant cells, is limited by the strong immunosuppressive activities of the GICs and the generation of a specialized protective microenvironment. The molecular mechanisms underlying the therapy resistance of GICs are largely unknown.

The first aim of this study was to identify immune evasion mechanisms in GICs triggered by radiation. A model was used in which patient-derived GICs were treated in vitro with fractionated ionizing radiation (2.5 Gy in 7 consecutive passages) to select for a more radio-resistant phenotype. In the model cell line 1080, this selection process resulted in increased proliferative but diminished migratory capacities in comparison to untreated control GICs. Furthermore, radio-selected GICs downregulated various proteins involved in antigen processing and presentation, resulting in decreased expression of MHC class I molecules on the cell surface and a diminished recognition potential by cytotoxic CD8+ T cells. Thus, sub-lethal fractionated radiation can promote immune evasion and hamper the success of adjuvant immunotherapy. Among several immune-associated proteins, interferon-induced transmembrane protein 3 (IFITM3) was found to be upregulated in radio-selected GICs. While high expression of IFITM3 was associated with worse overall survival of GBM patients (TCGA database) and with increased proliferation and migration of differentiated glioma cell lines, a strong contribution of IFITM3 to proliferation in vitro, or to tumor growth and invasiveness in a xenograft model, could not be observed.

Multiple sclerosis (MS) is the most common autoimmune disease of the CNS in young adults of the Western world, leading to progressive disability in genetically susceptible individuals, possibly triggered by environmental factors. It is assumed that self-reactive, myelin-specific T helper 1 (Th1) and Th17 cells, which have escaped the control mechanisms of the immune system, are critical in the pathogenesis of the human disease and of its animal model, experimental autoimmune encephalomyelitis (EAE). It was observed that in vitro differentiated, interleukin-17 (IL-17)-producing Th17 cells co-expressed the Th1-phenotypic cytokine interferon-gamma (IFN-γ), together with the two respective lineage-associated transcription factors RORγt and T-bet, after re-isolation from the CNS of diseased mice. The pathogenic molecular mechanisms that render a CD4+ T cell encephalitogenic have scarcely been investigated to date.

In the second part of the thesis, the transcriptional changes occurring in in vitro differentiated Th17 cells in the course of EAE were analyzed genome-wide. Evaluation of signaling networks revealed an overrepresentation of genes involved in communication between the innate and adaptive immune system and in metabolic alterations, including cholesterol biosynthesis. The transcription factors Cebpa, Fos, Klf4, Nfatc1 and Spi1, associated with thymocyte development and naïve T cells, were upregulated in encephalitogenic CNS-isolated CD4+ T cells, suggesting a contribution to T cell plasticity. Correlation of the murine T-cell gene expression dataset with putative MS risk genes, selected based on their proximity (± 500 kb; Ensembl database, release 75) to the MS risk single nucleotide polymorphisms (SNPs) proposed by the most recent multiple sclerosis GWAS in 2011, revealed that 67.3% of the MS risk genes were differentially expressed in EAE. The expression patterns of Bach2, Il2ra, Irf8, Mertk, Odf3b, Plek, Rgs1, Slc30a7, and Thada were confirmed in independent experiments, suggesting a contribution to T cell pathogenicity. Functional analysis of Nfatc1 revealed that Nfatc1-deficient CD4+ T cells were restrained in their ability to induce clinical signs of EAE. Nfatc1 deficiency allowed proper T cell activation, but diminished the potential of the cells to fully differentiate into Th17 cells and to express high amounts of lineage cytokines. As the inducible Nfatc1/αA transcript is distinct from the other family members, it could represent an interesting target for therapeutic intervention in MS.
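The SNP-to-gene proximity step described above can be sketched as a simple interval test: a gene counts as an MS risk gene if it lies within ±500 kb of a risk SNP, and the risk set is then intersected with the differentially expressed genes. The coordinates and gene assignments below are invented placeholders, not the real GWAS or Ensembl data.

```python
# Hedged sketch of the +/-500 kb SNP-to-gene mapping and the intersection
# with differentially expressed (DE) genes. All positions are fictitious.
snps = [("chr1", 101_500_000), ("chr5", 40_700_000)]
genes = {  # gene name -> (chromosome, start, end)
    "Bach2": ("chr1", 101_100_000, 101_400_000),
    "Il2ra": ("chr5", 41_000_000, 41_050_000),
    "Thada": ("chr7", 12_000_000, 12_300_000),
}
de_genes = {"Bach2", "Thada"}  # differentially expressed in EAE (toy set)

WINDOW = 500_000
risk_genes = {
    name
    for name, (chrom, start, end) in genes.items()
    for s_chrom, s_pos in snps
    if chrom == s_chrom and start - WINDOW <= s_pos <= end + WINDOW
}
print("risk genes:", risk_genes)            # {'Bach2', 'Il2ra'}
print("risk & DE:", risk_genes & de_genes)  # {'Bach2'}
```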
Abstract:
Multiple sclerosis (MS) causes a broad range of neurological symptoms, among the most common of which is poor balance control. However, knowledge of deficient balance control in mildly affected MS patients who complain of balance impairment but have normal clinical balance tests (CBT) is limited. Such knowledge might provide insights into the normal and pathophysiological mechanisms underlying stance and gait. We analysed differences in trunk sway between mildly disabled MS patients with and without subjective balance impairment (SBI), all with normal CBT. Sway was measured for a battery of stance and gait balance tests (static and dynamic posturography) and compared to that of age- and sex-matched healthy subjects. Eight of 21 patients (38%) with an Expanded Disability Status Scale score of 1.0-3.0 complained of SBI during daily activities. For standing on both legs with eyes closed, on a normal and on a foam surface, patients in the no-SBI group showed significant differences in the range of trunk roll (lateral) sway angle and velocity compared to healthy subjects. Patients in the SBI group had significantly greater lateral sway than the no-SBI group, and their sway was also greater than normal in the pitch (anterior-posterior) direction. Sway during one-legged stance on foam was also greater in the SBI group than in the no-SBI and healthy groups. We found a specific laterally directed impairment of balance in all patients, consistent with a deficit in proprioceptive processing, which was greater in the SBI group than in the no-SBI group. This finding most likely explains the subjective symptoms of imbalance in MS patients with normal CBT.
Abstract:
This study investigates multiple processing parameters of solid-state shear pulverization (SSSP), including polymer type, filler type, processing technique, severity of SSSP processing, and postprocessing. HDPE and LLDPE polymers with pristine-clay and organoclay fillers are explored. Effects on crystallization, high-temperature behavior, mechanical properties, and gas-barrier properties are examined. Thermal, mechanical, and morphological characterization is conducted to determine polymer/filler compatibility and the superior processing methods for the polymer-clay nanocomposites.
Abstract:
This thesis presents two frameworks, a software framework and a hardware core manager framework, which together can be used to develop a processing platform based on a distributed system of field-programmable gate array (FPGA) boards. The software framework provides users with the ability to easily develop applications that exploit the processing power of FPGAs, while the hardware core manager framework gives users the ability to configure and interact with multiple FPGA boards and/or hardware cores. This thesis describes the design and development of these frameworks and analyzes the performance of a system constructed using them. The performance analysis included measuring the effect of incorporating additional hardware components into the system and comparing the system to a software-only implementation. This work draws conclusions based on the results of the performance analysis and offers suggestions for future work.
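Purely as an illustration of the two-layer division of labor described above (all class and method names below are hypothetical; the thesis's actual APIs are not reproduced here), the split might look like this:

```python
# Illustrative-only sketch: an application-facing framework that delegates
# to a core manager tracking FPGA boards and their configured cores.
# Everything here is a stand-in; no real FPGA I/O is performed.
class HardwareCoreManager:
    def __init__(self):
        self.cores = {}                    # core name -> board id

    def configure(self, board_id, core_name, bitstream):
        """Load a bitstream onto a board and register the resulting core
        (a real implementation would program the device and check status)."""
        self.cores[core_name] = board_id

    def invoke(self, core_name, payload):
        """Route payload to the board hosting core_name, return its result."""
        board = self.cores[core_name]
        return f"result({core_name}@board{board}, {payload!r})"  # stub

class ProcessingPlatform:
    """Application-facing framework: applications only name the operation."""
    def __init__(self, manager):
        self.manager = manager

    def submit(self, operation, data):
        return self.manager.invoke(operation, data)

mgr = HardwareCoreManager()
mgr.configure(board_id=0, core_name="fft1024", bitstream="fft1024.bit")
print(ProcessingPlatform(mgr).submit("fft1024", [1, 2, 3]))
```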
Abstract:
Zeki and co-workers recently proposed that perception can best be described as locally distributed, asynchronous processes that each create a kind of microconsciousness, which condense into an experienced percept. The present article aims at extending this theory to metacognitive feelings. We present evidence that perceptual fluency, the subjective feeling of ease during perceptual processing, is based on the speed of processing at different stages of the perceptual process. Specifically, detection of briefly presented stimuli was influenced by figure-ground contrast, but not by the symmetry (Experiment 1) or the font (Experiment 2) of the stimuli. Conversely, discrimination of these stimuli was influenced by whether they were symmetric (Experiment 1) and by the font they were presented in (Experiment 2), but not by figure-ground contrast. Both tasks, however, were related to the subjective experience of fluency (Experiments 1 and 2). We conclude that subjective fluency is the conscious phenomenal correlate of different processing stages in visual perception.
Abstract:
Light-frame wood buildings are widely built in the United States (U.S.), and natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that the influence of uncertainty on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causing huge economic losses and threatening life safety, yet little work has investigated the snow hazard in combination with the seismic hazard. A Filtered Poisson Process (FPP) model is therefore developed in this study, overcoming the shortcomings of the commonly used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center, and is applied within the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction. For home owners and stakeholders, risk expressed in terms of economic losses is much easier to understand than engineering parameters (e.g., interstory drift). The proposed framework is used in two applications: assessing the loss of the building subjected to mainshock-aftershock sequences, where aftershock and downtime costs are found to be important factors in the seismic losses, and assessing the loss of a wood building in the state of Washington subjected to combined earthquake and snow loads. The proposed framework proves to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
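A minimal sketch of a filtered Poisson process for ground snow load, the kind of model contrasted with the Bernoulli model above: storms arrive as a Poisson process, each adds a random load, and the load decays (melts) between arrivals. The rate, mean accumulation, and decay constant below are invented, not the study's calibrated values.

```python
# Hedged sketch of a filtered Poisson process (FPP) for daily snow load.
# Storm counts per day are Poisson, accumulations are exponential random
# variables, and the standing load melts exponentially between storms.
import numpy as np

rng = np.random.default_rng(0)

def simulate_fpp_snow(days=180, rate=0.08, mean_add=0.4, decay=0.03):
    """Daily ground snow load (kPa) over one season."""
    load = np.zeros(days)
    for d in range(1, days):
        load[d] = load[d - 1] * np.exp(-decay)            # melt
        n_storms = rng.poisson(rate)                      # storms today
        load[d] += rng.exponential(mean_add, n_storms).sum()
    return load

season = simulate_fpp_snow()
print(f"peak load {season.max():.2f} kPa on day {season.argmax()}")
```

Unlike a Bernoulli on/off model, the FPP keeps the time history of accumulation and melt, which is what allows snow load to be combined with earthquake occurrence in the risk assessment.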
Abstract:
Sustainable yields from water wells in hard-rock aquifers are achieved when the well bore intersects fracture networks. Fracture networks are often not readily discernible at the surface. Lineament analysis using remotely sensed satellite imagery has been employed to identify surface expressions of fracturing, and a variety of image-analysis techniques have been successfully applied in “ideal” settings. An ideal setting for lineament detection is one where the influences of human development, vegetation, and climate are minimal and where hydrogeological conditions and geologic structure are known. There is not yet a well-accepted protocol for mapping lineaments, nor have different approaches been compared in non-ideal settings. A new image-processing and synthesis approach was developed to identify the satellite imagery types best suited to lineament analysis in non-ideal terrain. Four satellite sensors (ASTER, Landsat 7 ETM+, QuickBird, RADARSAT-1) and a digital elevation model were evaluated for lineament analysis in Boaco, Nicaragua, where the landscape is subject to varied vegetative cover, a plethora of anthropogenic features, and the frequent cloud cover that limits the availability of optical satellite data. A variety of digital image processing techniques were employed and lineament interpretations were performed to obtain 12 complementary image products, which were evaluated subjectively to identify lineaments. The 12 lineament interpretations were synthesized to create a raster image of lineament-zone coincidence that shows the level of agreement among the 12 interpretations. A composite lineament interpretation was made using the coincidence raster to restrict lineament observations to areas where multiple interpretations (at least 4) agree. Nine of the 11 previously mapped faults were identified from the coincidence raster. An additional 26 lineaments were identified from the coincidence raster, and the locations of 10 were confirmed by field observation. Four manual pumping tests suggest that well productivity is higher for wells proximal to lineament features. Interpretations from RADARSAT-1 products were superior to interpretations from the other sensor products, suggesting that quality lineament interpretation in this region requires anthropogenic features to be minimized and topographic expressions to be maximized. The approach developed in this study has the potential to improve the siting of wells in non-ideal regions.
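The coincidence-raster synthesis lends itself to a very small sketch: stack the 12 binary lineament interpretations and keep the cells where at least 4 agree. The raster sizes and lineament densities below are invented stand-ins for the actual interpretation products.

```python
# Hedged sketch of the lineament-coincidence synthesis: sum 12 binary
# interpretation rasters, then threshold at 4 agreeing interpretations.
import numpy as np

rng = np.random.default_rng(1)
interpretations = rng.random((12, 200, 200)) < 0.15   # 12 toy binary rasters

coincidence = interpretations.sum(axis=0)             # agreement level 0..12
composite = coincidence >= 4                          # consensus lineaments

print(f"{composite.mean():.1%} of cells kept in the composite")
```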
Abstract:
One of the original ocean-bottom time-lapse seismic studies was performed at the Teal South oil field in the Gulf of Mexico during the late 1990s. This work reexamines some aspects of the previous work using modern analysis techniques to provide improved quantitative interpretations. Using three-dimensional volume visualization of the legacy data and the two phases of post-production time-lapse data, I provide additional insight into the fluid migration pathways and the pressure communication between different reservoirs separated by faults. This work supports a conclusion from previous studies that production from one reservoir caused a regional pressure decline that in turn resulted in the liberation of gas from multiple surrounding unproduced reservoirs. I also provide an explanation for unusual time-lapse changes in amplitude-versus-offset (AVO) data related to the compaction of the producing reservoir, which changed an isotropic medium into an anisotropic one. In the first part of this work, I examine regional changes in seismic response due to the production of oil and gas from one reservoir. The previous studies primarily used the two post-production ocean-bottom surveys (Phase I and Phase II), and not the legacy streamer data, owing to the unavailability of legacy prestack data and very different acquisition parameters. In order to incorporate the legacy data in the present study, all three poststack data sets were cross-equalized and examined using instantaneous amplitude and energy volumes. This approach proves quite effective and helps to suppress changes unrelated to production while emphasizing the large-amplitude changes that are related to production in this noisy (by current standards) suite of data. I examine the multiple data sets first using the instantaneous amplitude and energy attributes, and then examine specific apparent time-lapse changes through direct comparisons of seismic traces. In so doing, I identify time delays that, when corrected for, indicate water encroachment at the base of the producing reservoir. I also identify specific sites of leakage from various unproduced reservoirs, the result of the regional pressure blowdown explained in previous studies; those earlier studies, however, were unable to identify direct evidence of fluid movement. Of particular interest is the identification of one site where oil apparently leaked from one reservoir into a “new” reservoir that did not originally contain oil but was ideally suited as a trap for fluids leaking from the neighboring spill point. With continued pressure drop, oil in the new reservoir increased as more oil entered the reservoir and expanded, liberating gas from solution. Because of the limited volume available for oil and gas in that temporary trap, oil and gas also escaped from it into the surrounding formation. I also note that some of the reservoirs show time-lapse changes only in the “gas cap” and not in the oil zone, even though gas must be coming out of solution everywhere in the reservoir. This is explained by the interplay between the pore-fluid modulus decrease caused by increasing gas saturation and the dry-frame modulus increase caused by frame stiffening. In the second part of this work, I examine various rock-physics models in an attempt to quantitatively account for the frame stiffening that results from reduced pore-fluid pressure in the producing reservoir, searching for a model that predicts the unusual AVO features observed in the time-lapse prestack and stacked data at Teal South.
While several rock-physics models are successful at predicting the time-lapse response for initial production, most fail to match the observations for continued production between Phase I and Phase II. Because the reservoir was initially overpressured and unconsolidated, reservoir compaction was likely significant, and is probably accomplished largely by uniaxial strain in the vertical direction; this implies that an anisotropic model may be required. Using Walton’s model for anisotropic unconsolidated sand, I successfully model the time-lapse changes for all phases of production. This observation may be of interest for application to other unconsolidated overpressured reservoirs under production.
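The instantaneous amplitude and energy attributes used in the first part of this work are standard analytic-signal quantities; a minimal sketch on a synthetic trace (not the Teal South data) follows.

```python
# Hedged sketch: instantaneous amplitude (envelope) of a seismic trace via
# the analytic signal, with instantaneous energy as its square.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 1000)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) / 0.1) ** 2)

analytic = hilbert(trace)                 # trace + i * Hilbert(trace)
inst_amplitude = np.abs(analytic)         # envelope attribute
inst_energy = inst_amplitude ** 2         # energy attribute

# Time-lapse differencing would subtract these attributes between
# cross-equalized surveys to highlight production-related changes.
print(f"peak envelope {inst_amplitude.max():.3f} at t={t[inst_amplitude.argmax()]:.2f} s")
```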
Abstract:
The single Hochdorf burial was found in 1887 during construction work in the Canton of Lucerne, Switzerland. It dates from between 320 and 250 BC. The calvarium, the left half of the pelvis and the left femur were preserved. The find shows an unusual bony alteration of the skull. The aim of this study was to obtain a differential diagnosis and to examine the skull using various methods. Sex and age were determined anthropologically. Radiological examinations were performed with plain X-ray imaging and a multislice computed tomography (CT) scanner. For histological analysis, samples of the lesion were taken; pathological processing included staining after fixation, decalcification, and paraffin embedding, and hard-cut sections were also prepared. The individual was female, and the age at death was between 30 and 50 years. There is an intensely calcified bone proliferation on the right side of the os frontale. Plain X-ray and CT imaging showed a large sclerotic lesion in the area of the right temple with a partly bulging appearance. The inner boundary of the lesion shows multi-edged irregularities, and there is a diffuse thickening of the right side of the skull vault. In the left skull vault, there is a mix of sclerotic areas and areas that appear normal, with a clear differentiation between tabula interna, diploë and tabula externa. Histology showed mature, organised bone tissue. The radiological and histological findings favour a benign condition. Differential diagnoses comprise osteomas, which may occur, for example, in the setting of hereditary adenomatous polyposis coli related to Gardner syndrome.
Abstract:
The task of encoding and processing complex sensory input requires many types of transsynaptic signals. This requirement is served in part by an extensive group of neurotransmitter substances, which may include thirty or more different compounds. At the next level of information processing, the existence of multiple receptors for a given neurotransmitter appears to be a widely used mechanism for generating multiple responses to a given first messenger (Snyder and Goodman, 1980). Despite the wealth of published data on GABA receptors, the existence of more than one GABA receptor was in doubt until the mid-1980s. Presently there is still disagreement on the number of types of GABA receptors, estimates of which range from two to four (DeFeudis, 1983; Johnston, 1985). Part of the problem in evaluating data concerning multiple receptor types is the lack of information on the number of gene products and their subsequent supramolecular organization in different neurons. In order to evaluate the question of the diversity of GABA receptors in the nervous system, we must rely on indirect information derived from a wide variety of experimental techniques. These include pharmacological binding studies on membrane fractions, electrophysiological studies, localization studies, purification studies, and functional assays. Almost all parts of the central and peripheral nervous system use GABA as a neurotransmitter, and these experimental techniques have therefore been applied to many different parts of the nervous system for the analysis of GABA receptor characteristics. We are thus left with a large amount of data from a wide variety of techniques derived from many parts of the nervous system. When this project was initiated in 1983, there were only a handful of pharmacological tools with which to assess the question of multiple GABA receptors. The approach adopted was to focus on a single model system, using a variety of experimental techniques, in order to evaluate the existence of multiple forms of GABA receptors. Using the in vitro rabbit retina, a combination of pharmacological binding studies, functional release studies and partial purification studies was undertaken to examine the GABA receptor composition of this tissue. Three types of GABA receptors were observed: A1 receptors coupled to benzodiazepine and barbiturate modulation, A2 (uncoupled) GABA-A receptors, and GABA-B receptors. These results are evaluated and discussed in light of recent findings by others concerning the number and subtypes of GABA receptors in the nervous system.
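Although the dissertation's binding experiments are not reproduced here, the logic of inferring receptor heterogeneity from binding data can be sketched: a two-site saturation-binding model is compared against a one-site model, and a markedly better two-site fit argues for multiple receptor populations. All constants below are invented for illustration.

```python
# Hedged sketch: one-site vs two-site saturation binding. A clearly lower
# residual for the two-site fit is classical (indirect) evidence for more
# than one receptor population in the preparation.
import numpy as np
from scipy.optimize import curve_fit

def one_site(F, Bmax, Kd):
    return Bmax * F / (Kd + F)

def two_site(F, B1, K1, B2, K2):
    return B1 * F / (K1 + F) + B2 * F / (K2 + F)

F = np.logspace(-1, 3, 30)                        # free ligand (nM)
rng = np.random.default_rng(2)
data = two_site(F, 50, 2.0, 120, 200.0) * rng.lognormal(sigma=0.05, size=F.size)

p1, _ = curve_fit(one_site, F, data, p0=[150, 20])
p2, _ = curve_fit(two_site, F, data, p0=[60, 1, 100, 100], maxfev=10000)
sse1 = np.sum((data - one_site(F, *p1)) ** 2)
sse2 = np.sum((data - two_site(F, *p2)) ** 2)
print(f"one-site SSE {sse1:.1f} vs two-site SSE {sse2:.1f}")
```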