978 results for High-resolution EEG
Abstract:
The ability to run General Circulation Models (GCMs) at ever-higher horizontal resolutions has meant that tropical cyclone simulations are increasingly credible. A hierarchy of atmosphere-only GCMs, based on the Hadley Centre Global Environmental Model (HadGEM1), with horizontal resolution increasing from approximately 270 km to 60 km (at 50°N), is used to systematically investigate the impact of spatial resolution on the simulation of global tropical cyclone activity, independent of model formulation. Tropical cyclones are extracted from ensemble simulations and reanalyses of comparable resolutions using a feature-tracking algorithm. Resolution is critical for simulating storm intensity, and convergence to observed storm intensities is not achieved with the model hierarchy. Resolution is less critical for simulating the annual number of tropical cyclones and their geographical distribution, which are well captured at resolutions of 135 km or higher, particularly for Northern Hemisphere basins. Simulating the interannual variability of storm occurrence requires resolutions of 100 km or higher; however, the level of skill is basin dependent. Higher-resolution GCMs are increasingly able to capture the interannual variability of the large-scale environmental conditions that contribute to tropical cyclogenesis. Different environmental factors contribute to the interannual variability of tropical cyclones in the different basins: in the North Atlantic basin the vertical wind shear, potential intensity and low-level absolute vorticity are dominant, while in the North Pacific basins mid-level relative humidity and low-level absolute vorticity are dominant. Model resolution is crucial for a realistic simulation of tropical cyclone behaviour, and high-resolution GCMs are found to be valuable tools for investigating the global location and frequency of tropical cyclones.
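As a minimal illustration of the feature-tracking step mentioned above, the sketch below finds local maxima of a vorticity-like field that exceed a threshold and links them between time steps by greedy nearest-neighbour matching. The threshold value, search radius and linking rule are assumptions for illustration, not the tracker actually used in the study.

```python
import numpy as np

def find_candidates(vort, threshold=1e-5):
    """Grid indices (i, j) of local maxima of a vorticity-like field
    that exceed `threshold` (illustrative value)."""
    peaks = []
    for i in range(1, vort.shape[0] - 1):
        for j in range(1, vort.shape[1] - 1):
            v = vort[i, j]
            if v > threshold and v == vort[i-1:i+2, j-1:j+2].max():
                peaks.append((i, j))
    return peaks

def link_tracks(frames, max_dist=3.0):
    """Greedy nearest-neighbour linking of candidate points between
    consecutive time steps; returns a list of tracks (lists of points)."""
    tracks, active = [], []
    for pts in frames:
        pts = list(pts)
        new_active = []
        for tr in active:
            if not pts:
                continue
            d = [np.hypot(p[0] - tr[-1][0], p[1] - tr[-1][1]) for p in pts]
            k = int(np.argmin(d))
            if d[k] <= max_dist:
                tr.append(pts.pop(k))
                new_active.append(tr)
        for p in pts:  # unmatched candidates start new tracks
            t = [p]
            tracks.append(t)
            new_active.append(t)
        active = new_active
    return tracks
```

Real trackers add hemisphere-dependent sign conventions, spectral filtering and track post-selection criteria on top of this basic detect-and-link scheme.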
Abstract:
We evaluate the effects of spatial resolution on the ability of a regional climate model to reproduce observed extreme precipitation for a region in the Southwestern United States. A total of 73 National Climate Data Center observational sites spread throughout Arizona and New Mexico are compared with regional climate simulations at the spatial resolutions of 50 km and 10 km for a 31 year period from 1980 to 2010. We analyze mean, 3-hourly and 24-hourly extreme precipitation events using WRF regional model simulations driven by NCEP-2 reanalysis. The mean climatological spatial structure of precipitation in the Southwest is well represented by the 10 km resolution but missing in the coarse (50 km resolution) simulation. However, the fine grid has a larger positive bias in mean summer precipitation than the coarse-resolution grid. The large overestimation in the simulation is in part due to scale-dependent deficiencies in the Kain-Fritsch convective parameterization scheme that generate excessive precipitation and induce a slow eastward propagation of the moist convective summer systems in the high-resolution simulation. Despite this overestimation in the mean, the 10 km simulation captures individual extreme summer precipitation events better than the 50 km simulation. In winter, however, the two simulations appear to perform equally in simulating extremes.
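The kind of comparison described above, a mean bias plus a high-percentile "extreme" statistic for simulated versus observed precipitation series, can be sketched as follows; the 99th percentile and the helper name are illustrative choices, not the paper's definitions.

```python
import numpy as np

def mean_bias_and_extremes(sim, obs, q=99.0):
    """Mean bias (sim - obs) and the q-th percentile of each series;
    q = 99 is an illustrative choice of 'extreme' statistic."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return sim.mean() - obs.mean(), np.percentile(sim, q), np.percentile(obs, q)
```

A simulation can thus overestimate the mean (positive bias) while still matching, or failing to match, the upper percentiles that characterise extreme events.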
Abstract:
Considerable effort is presently being devoted to producing high-resolution sea surface temperature (SST) analyses, with a goal of spatial grid resolutions as fine as 1 km. Because grid resolution is not the same as feature resolution, a method is needed to objectively determine the resolution capability and accuracy of SST analysis products. Ocean model SST fields are used in this study as simulated “true” SST data and subsampled based on actual infrared and microwave satellite data coverage. The subsampled data are used to simulate sampling errors due to missing data. Two different SST analyses are considered and run using both the full and the subsampled model SST fields, with and without additional noise. The results are compared as a function of spatial scales of variability using wavenumber auto- and cross-spectral analysis. The spectral variance at high wavenumbers (smallest wavelengths) is shown to be attenuated relative to the true SST because of smoothing that is inherent to both analysis procedures. Comparisons of the two analyses (both having roughly the same grid size) show important differences. One analysis tends to reproduce small-scale features more accurately when the high-resolution data coverage is good but produces more spurious small-scale noise when the high-resolution data coverage is poor. Analysis procedures can thus generate small-scale features with and without data, but the small-scale features in an SST analysis may be just noise when high-resolution data are sparse. Users must therefore be skeptical of high-resolution SST products, especially in regions where high-resolution (~5 km) infrared satellite data are limited because of cloud cover.
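A toy version of the spectral diagnostic described above: compute row-averaged one-dimensional wavenumber power spectra of a "true" and an analysed field, then measure how much variance survives above a cutoff wavenumber. The simple FFT periodogram and the variance-ratio diagnostic are simplifying assumptions; the study itself uses full auto- and cross-spectral analysis.

```python
import numpy as np

def along_track_spectrum(field, dx=1.0):
    """Mean 1-D wavenumber power spectrum over the rows of a 2-D field.
    Returns (wavenumber, power)."""
    n = field.shape[1]
    spec = np.abs(np.fft.rfft(field, axis=1)) ** 2
    k = np.fft.rfftfreq(n, d=dx)
    return k, spec.mean(axis=0)

def high_k_attenuation(truth, analysis, k_cut, dx=1.0):
    """Ratio of analysed to 'true' spectral variance above k_cut;
    values well below 1 indicate smoothing of the smallest scales."""
    k, pt = along_track_spectrum(truth, dx)
    _, pa = along_track_spectrum(analysis, dx)
    m = k > k_cut
    return pa[m].sum() / pt[m].sum()
```

Applied to a smoothed copy of a noisy field, the ratio drops well below one, which is exactly the high-wavenumber attenuation the abstract describes.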
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extra-tropical cyclones, for example precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, using long time series to investigate trends in precipitation, or to look at small-scale dynamical processes for specific case studies. This study investigates some of the problems associated with dynamical downscaling and looks at the optimum configuration for obtaining a precipitation distribution and intensity that match observations. This study uses the Met Office Unified Model run in limited-area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions provided by the ECMWF Operational Analysis, to produce high-resolution simulations of the summer 2007 UK flooding events. The numerical weather prediction model is initialised at varying times before the peak precipitation is observed to test the importance of the initialisation and boundary conditions, and how long the simulation can be run for. The results are compared with rain gauge data as verification and show that the model intensities are most similar to observations when the model is initialised 12 hours before the peak precipitation is observed. It was also shown that using non-gridded datasets makes verification more difficult, with the density of observations also affecting the intensities observed. It is concluded that the simulations are able to produce realistic precipitation intensities when driven by the coarser-resolution data.
Abstract:
The South Asian monsoon is one of the most significant manifestations of the seasonal cycle. It directly impacts nearly one third of the world’s population and also has substantial global influence. Using 27-year integrations of a high-resolution atmospheric general circulation model (Met Office Unified Model), we study changes in South Asian monsoon precipitation and circulation when horizontal resolution is increased from approximately 200 to 40 km at the equator (N96 to N512, 1.9° to 0.35°). The high resolution, integration length and ensemble size of the dataset make this the most extensive dataset used to evaluate the resolution sensitivity of the South Asian monsoon to date. We find a consistent pattern of JJAS precipitation and circulation changes as resolution increases, which include a slight increase in precipitation over peninsular India, changes in Indian and Indochinese orographic rain bands, increasing wind speeds in the Somali Jet, increasing precipitation over the Maritime Continent islands and decreasing precipitation over the northern Maritime Continent seas. To diagnose which resolution-related processes cause these changes we compare them to published sensitivity experiments that change regional orography and coastlines. Our analysis indicates that improved resolution of the East African Highlands results in the improved representation of the Somali Jet and further suggests that improved resolution of orography over Indochina and the Maritime Continent results in more precipitation over the Maritime Continent islands at the expense of reduced precipitation further north. We also evaluate the resolution sensitivity of monsoon depressions and lows, which contribute more precipitation over northeast India at higher resolution. We conclude that while increasing resolution at these scales does not solve the many monsoon biases that exist in GCMs, it has a number of small, beneficial impacts.
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, when water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
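The intersection of a flood extent with a DTM to estimate water-level observations (WLOs) can be sketched very simply: take the DTM heights at flooded cells that border at least one dry cell. This is a minimal sketch with assumed array conventions, not the processing chain used in the cited work, which also handles geolocation, SAR classification uncertainty and filtering of unreliable boundary points.

```python
import numpy as np

def boundary_wlos(dtm, flood):
    """Estimate WLOs at the flood edge: DTM heights at flooded cells
    (flood == True) that have at least one dry 4-neighbour."""
    h, w = flood.shape
    wlos = []
    for i in range(h):
        for j in range(w):
            if not flood[i, j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and not flood[ni, nj]:
                    wlos.append(((i, j), float(dtm[i, j])))
                    break
    return wlos
```

Each returned height is an estimate of the local water surface elevation at the shoreline, which is what gets assimilated into the flood model or used to correct the DEM.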
Abstract:
Substantial low-frequency rainfall fluctuations occurred in the Sahel throughout the twentieth century, causing devastating drought. Modeling these low-frequency rainfall fluctuations has remained problematic for climate models for many years. Here we show using a combination of state-of-the-art rainfall observations and high-resolution global climate models that changes in organized heavy rainfall events carry most of the rainfall variability in the Sahel at multiannual to decadal time scales. Ability to produce intense, organized convection allows climate models to correctly simulate the magnitude of late-twentieth century rainfall change, underlining the importance of model resolution. Increasing model resolution allows a better coupling between large-scale circulation changes and regional rainfall processes over the Sahel. These results provide a strong basis for developing more reliable and skilful long-term predictions of rainfall (seasons to years) which could benefit many sectors in the region by allowing early adaptation to impending extremes.
Abstract:
This paper describes a visual stimulus generator (VSImG) capable of displaying a gray-scale, 256 x 256 x 8 bitmap image with a frame rate of 500 Hz using a boustrophedonic scanning technique. It is designed for experiments with motion-sensitive neurons of the fly's visual system, where the flicker fusion frequency of the photoreceptors can reach up to 500 Hz. Devices with such a high frame rate are not commercially available, but are required if sensory systems with high flicker fusion frequency are to be studied. The implemented hardware approach gives us complete real-time control of the displacement sequence and provides all the signals needed to drive an electrostatic deflection display. With the use of analog signals, very small high-resolution displacements, not limited by the image's pixel size, can be obtained. Very slow image displacements with visually imperceptible steps can also be generated. This can be of interest for other vision research experiments. Two different stimulus files can be used simultaneously, allowing the system to generate X-Y displacements on one display or independent movements on two displays as long as they share the same bitmap image.
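The boustrophedonic ("as the ox ploughs") scan order mentioned above alternates direction on every row, so the beam never flies back across the display between lines. The sketch below only illustrates the pixel visiting order; the actual VSImG generates this sequence in hardware with analog deflection signals, not a lookup table.

```python
def boustrophedon_order(rows, cols):
    """Pixel visiting order for a boustrophedonic raster scan:
    even rows left-to-right, odd rows right-to-left."""
    order = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        order.extend((r, c) for c in cs)
    return order
```

Compared with a conventional raster scan, consecutive pixels are always spatially adjacent, which eliminates the retrace discontinuity on an electrostatic deflection display.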
Abstract:
This manuscript describes the development and validation of an ultra-fast, efficient, and high throughput analytical method based on ultra-high performance liquid chromatography (UHPLC) equipped with a photodiode array (PDA) detection system, for the simultaneous analysis of fifteen bioactive metabolites: gallic acid, protocatechuic acid, (−)-catechin, gentisic acid, (−)-epicatechin, syringic acid, p-coumaric acid, ferulic acid, m-coumaric acid, rutin, trans-resveratrol, myricetin, quercetin, cinnamic acid and kaempferol, in wines. A 50-mm column packed with 1.7-μm particles operating at elevated pressure (UHPLC strategy) was selected to attain ultra-fast analysis and highly efficient separations. In order to reduce the complexity of wine extract and improve the recovery efficiency, a reverse-phase solid-phase extraction (SPE) procedure using as sorbent a new macroporous copolymer made from a balanced ratio of two monomers, the lipophilic divinylbenzene and the hydrophilic N-vinylpyrrolidone (Oasis™ HLB), was performed prior to UHPLC–PDA analysis. The calibration curves of bioactive metabolites showed good linearity within the established range. Limits of detection (LOD) and quantification (LOQ) ranged from 0.006 μg mL−1 to 0.58 μg mL−1, and from 0.019 μg mL−1 to 1.94 μg mL−1, for gallic and gentisic acids, respectively. The average recoveries ± SD for the three levels of concentration tested (n = 9) in red and white wines were, respectively, 89 ± 3% and 90 ± 2%. The repeatability expressed as relative standard deviation (RSD) was below 10% for all the metabolites assayed. The validated method was then applied to red and white wines from different geographical origins (Azores, Canary and Madeira Islands). The most abundant component in the analysed red wines was (−)-epicatechin followed by (−)-catechin and rutin, whereas in white wines syringic and p-coumaric acids were found to be the major phenolic metabolites.
The method was fully validated, providing a sensitive analysis for the detection of bioactive phenolic metabolites and showing satisfactory data for all the parameters tested. Moreover, it proved to be an ultra-fast approach, allowing the separation of the fifteen bioactive metabolites investigated with high resolving power within 5 min.
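LOD and LOQ figures like those quoted above are conventionally derived from the calibration curve. A generic sketch using the common ICH-style estimators LOD = 3.3·s/b and LOQ = 10·s/b, where b is the slope and s the residual standard deviation; the paper does not state which estimator it used, so this is an assumption.

```python
import numpy as np

def calibration_lod_loq(conc, resp):
    """LOD and LOQ from a linear calibration curve via the ICH-style
    formulas LOD = 3.3*s/b, LOQ = 10*s/b (b: slope, s: residual SD)."""
    b, a = np.polyfit(conc, resp, 1)           # slope, intercept
    resid = np.asarray(resp) - (b * np.asarray(conc) + a)
    s = resid.std(ddof=2)                      # residual standard deviation
    return 3.3 * s / b, 10.0 * s / b
```

Whatever the noise estimate, the two limits keep a fixed ratio of 10/3.3 ≈ 3, which is consistent with the roughly threefold LOD-to-LOQ spacing reported in the abstract.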
Abstract:
The molecular structure of human uropepsin, an aspartic proteinase from the urine produced in the form of pepsinogen A in the gastric mucosa, has been determined by molecular replacement using human pepsin as the search model. Crystals belong to space group P2(1)2(1)2(1), with unit-cell parameters a = 50.99, b = 75.56, c = 89.90 Å. Crystallographic refinement led to an R factor of 0.161 at 2.45 Å resolution. The positions of 2437 non-H protein atoms in 326 residues have been determined and the model contains 143 water molecules. The structure is bilobal, consisting of two predominantly β-sheet lobes which, as observed in other aspartic proteinases, are related by a pseudo-twofold axis. A model of the uropepsin-pepstatin complex has been constructed based on the high-resolution crystal structure of pepsin complexed with pepstatin.
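The quoted R factor of 0.161 is the standard crystallographic agreement statistic between observed and calculated structure-factor amplitudes. As a reminder of the definition, a minimal sketch; the amplitudes in the test are illustrative numbers, not data from this structure.

```python
def r_factor(f_obs, f_calc):
    """Crystallographic R factor: sum(|Fo - Fc|) / sum(Fo), computed
    over structure-factor amplitudes Fo (observed) and Fc (calculated)."""
    return sum(abs(o - c) for o, c in zip(f_obs, f_calc)) / sum(f_obs)
```

Lower values mean better agreement; refined protein structures at this resolution typically report R factors in the 0.15-0.25 range, so 0.161 indicates a well-refined model.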
Abstract:
The peroxisome proliferator-activated receptors (PPARs) regulate genes involved in lipid and carbohydrate metabolism, and are targets of drugs approved for human use. Whereas the crystallographic structure of the complex of full length PPAR gamma and RXR alpha is known, structural alterations induced by heterodimer formation and DNA contacts are not well understood. Herein, we report a small-angle X-ray scattering analysis of the oligomeric state of hPPAR gamma alone and in the presence of retinoid X receptor (RXR). The results reveal that, in contrast with other studied nuclear receptors, which predominantly form dimers in solution, hPPAR gamma remains in the monomeric form by itself but forms heterodimers with hRXR alpha. The low-resolution models of hPPAR gamma/RXR alpha complexes predict significant changes in opening angle between heterodimerization partners (LBD) and extended and asymmetric shape of the dimer (LBD-DBD) as compared with X-ray structure of the full-length receptor bound to DNA. These differences between our SAXS models and the high-resolution crystallographic structure might suggest that there are different conformations of functional heterodimer complex in solution. Accordingly, hydrogen/deuterium exchange experiments reveal that the heterodimer binding to DNA promotes more compact and less solvent-accessible conformation of the receptor complex.
Abstract:
Background: Recent studies reported the association between SLCO1B1 polymorphisms and the development of statin-induced myopathy. Since the Brazilian population is one of the most heterogeneous in the world, the main aim here was to evaluate SLCO1B1 polymorphisms according to ethnic groups as an initial step for future pharmacogenetic studies. Methods: One hundred and eighty-two Amerindians plus 1,032 subjects from the general urban population were included. Genotypes for the SLCO1B1 rs4149056 (c.T521C, p.V174A, exon 5) and SLCO1B1 rs4363657 (g.T89595C, intron 11) polymorphisms were detected by polymerase chain reaction followed by high-resolution melting analysis with the Rotor Gene 6000® instrument. Results: The frequencies of the SLCO1B1 rs4149056 and rs4363657 C variant alleles were higher in Amerindians (28.3% and 26.1%) and lower in subjects of African descent (5.7% and 10.8%) compared with the Mulatto (14.9% and 18.2%) and Caucasian-descent (14.8% and 15.4%) ethnic groups (p < 0.001 for both). Linkage disequilibrium analysis shows that these variant alleles are in different linkage disequilibrium patterns depending on ethnic origin. Conclusion: Our findings indicate interethnic differences in the SLCO1B1 rs4149056 C risk allele frequency among Brazilians. These data will be useful in the development of effective programs for stratifying individuals regarding adherence, efficacy and choice of statin type.
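The allele-frequency comparisons above reduce to simple counting: each individual carries two alleles, so the variant-allele frequency follows directly from genotype counts, and a 2x2 table of allele counts between two groups can be tested with a Pearson chi-square statistic. The counts in the example are made up for illustration, not the study's data.

```python
def allele_freq(n_aa, n_ab, n_bb):
    """Frequency of the B allele from genotype counts
    (AA, AB, BB); each subject contributes two alleles."""
    n = n_aa + n_ab + n_bb
    return (n_ab + 2 * n_bb) / (2 * n)

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]
    of allele counts (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

Comparing the statistic with the chi-square distribution (1 degree of freedom) gives the p-values of the kind reported above.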
Abstract:
Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of a different acquisition technique, quite far from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic signals of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple PMT channels to be sampled simultaneously at different gain factors in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface to the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature will probably be integrated in a common mixed-signal ASIC (Application-Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In-System Programming) memory and an architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and then digitized by the on-board ADC under the supervision of the FPGA. To test communication with the data concentrator, a test bench was set up in Bologna where, thanks to equipment on loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed that the digital electronics behaved well, receiving and executing commands issued from the PC console and answering back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, on one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), introducing a new analog front end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. The chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) beam line at CERN, Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip in order to integrate it into the general DAQ system. I then worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 µm showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, with good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking into account the multiple scattering effect.
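The pitch/sqrt(12) formula mentioned above is the expected position resolution of a binary pixel readout: the measurement error is uniformly distributed over one pitch, and the RMS of a uniform distribution of width p is p/sqrt(12).

```python
import math

def binary_readout_resolution(pitch):
    """Expected position resolution of a binary (hit/no-hit) readout:
    the RMS of a uniform distribution over one pitch, pitch/sqrt(12)."""
    return pitch / math.sqrt(12.0)
```

For example, a 50 µm pitch (an illustrative value, not necessarily the APSEL-4D pitch) gives an intrinsic resolution of about 14.4 µm, which is the baseline the measured residual width is compared against.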
Abstract:
Construction of a continuous, multidimensional high-performance liquid chromatography system for the separation of proteins and peptides, with integrated size-selective sample fractionation. A multidimensional HPLC separation method was developed for proteins and peptides with molecular weights below 15 kDa. In the first step, the target analytes are separated from higher-molecular-weight and non-ionic components using restricted-access materials (RAM) with ion-exchange functionality. The proteins are then separated on an analytical ion-exchange column and on reversed-phase (RP) columns. To avoid sample losses, a continuously operating, fully automated system was built, based on different separation speeds and four parallel RP columns. Two RP columns are eluted simultaneously, but with staggered start times, so that shallow gradients still provide sufficient separation efficiency. While the third column is being regenerated, the fourth column is loaded by enriching the proteins and peptides at the column head. During the total analysis time of 96 minutes, fractions from the first dimension are transferred to the RP columns at 4-minute intervals and separated within 8 minutes, yielding 24 RP chromatograms. Test substances included standard proteins as well as proteins and peptides from human hemofiltrate and from lung fibroblast cell culture supernatants. Fractions were also collected and analysed by MALDI-TOF mass spectrometry. In a single injection, more than 1000 peaks were resolved across the 24 RP chromatograms. The theoretical peak capacity is approximately 3000.
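The theoretical peak capacity of roughly 3000 quoted above follows from the standard rule for multidimensional separations: the total capacity is the product of the per-dimension capacities (undersampling effects ignored). The per-run RP capacity of about 125 used in the example is an assumed illustrative figure consistent with 24 fractions, not a number stated in the abstract.

```python
def md_peak_capacity(n_first, n_second):
    """Theoretical peak capacity of a two-dimensional separation:
    the product of the per-dimension peak capacities."""
    return n_first * n_second
```

This multiplicative gain is why coupling a modest ion-exchange fractionation to fast parallel RP gradients resolves far more components than either dimension alone.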