953 results for Large retinal datasets
Abstract:
Summary: Modelling the dynamics of a large marine area
Abstract:
The near flow field of small-aspect-ratio elliptic turbulent free jets (issuing from a nozzle and an orifice) was experimentally studied using 2D PIV. Two-point velocity correlations in these jets revealed the extent and orientation of the large-scale structures in the major and minor planes. Spatial filtering of the instantaneous velocity field using a Gaussian convolution kernel shows that, while a single large vortex ring circumscribing the jet seems to be present at the nozzle exit, the orifice jet exhibits a number of smaller vortex ring pairs close to the jet exit. The smaller length scale observed in the case of the orifice jet is representative of the smaller azimuthal vortex rings that generate an axial vortex field as they are convected. This results in axis-switching in the orifice jet, which may arise from a mechanism different from the self-induction process observed in the contoured nozzle jet flow.
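The spatial filtering step described above can be sketched as a separable Gaussian low-pass applied to a velocity component. The grid size, kernel width, and synthetic field below are illustrative assumptions, not the experimental PIV data:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1D Gaussian kernel of half-width `radius` grid points."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth2d(field, sigma):
    """Separable Gaussian convolution of a 2D field (same-size output)."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, field)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

# Synthetic example: a smooth large-scale variation plus small-scale noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
X, Y = np.meshgrid(x, x)
u = np.sin(X) + 0.3 * rng.standard_normal(X.shape)  # one velocity component

u_f = smooth2d(u, sigma=3.0)  # retains only the larger-scale structure
```

Larger `sigma` suppresses smaller scales more aggressively, which is how the filter educes the large-scale vortical structures from an instantaneous field.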
Abstract:
In this paper, we present a comparison between the sensitivity of SC-FDMA and OFDMA schemes to large carrier frequency offsets (CFOs) and timing offsets (TOs) of different users on the uplink. Our study shows the following: 1) In the ideal case of zero CFOs and TOs (i.e., perfect synchronization), the uncoded BER performance of SC-FDMA with a frequency-domain MMSE equalizer is better than that of OFDMA, due to the inherent frequency diversity possible in SC-FDMA. Also, because of inter-symbol interference in SC-FDMA, the performance of SC-FDMA with the MMSE equalizer can be further improved by using low-complexity interference cancellation (IC) techniques. 2) In the presence of large CFOs and TOs, significant multiuser interference (MUI) is introduced, and hence the performance of SC-FDMA with the MMSE equalizer can become worse than that of OFDMA. However, the performance advantage of SC-FDMA with the MMSE equalizer over OFDMA (due to the potential frequency diversity benefit in SC-FDMA) can be restored by adopting multistage IC techniques that use knowledge of the CFOs and TOs of the different users at the receiver.
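A minimal sketch of per-subcarrier frequency-domain MMSE equalization followed by the despreading IDFT (the step that gives SC-FDMA its frequency diversity), for a single, perfectly synchronized user. The channel taps, SNR, and block length are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def scfdma_mmse_detect(y_freq, H, snr_linear):
    """Per-subcarrier MMSE weights, then IDFT back to time-domain symbols."""
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr_linear)  # MMSE equalizer
    return np.fft.ifft(W * y_freq)                        # despreading IDFT

# Toy example: QPSK symbols over an assumed 2-tap channel, no CFO/TO.
rng = np.random.default_rng(1)
N = 64
bits = rng.integers(0, 2, (N, 2))
x = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

h = np.array([1.0, 0.5j])            # hypothetical channel impulse response
H = np.fft.fft(h, N)                 # channel frequency response
X = np.fft.fft(x)                    # DFT spreading of the symbol block
snr = 10 ** (20 / 10)                # 20 dB
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * snr)
y = H * X + np.fft.fft(noise)        # frequency-domain received block

x_hat = scfdma_mmse_detect(y, H, snr)
bits_hat = np.stack([(x_hat.real < 0).astype(int),
                     (x_hat.imag < 0).astype(int)], axis=1)
n_errors = int(np.sum(bits_hat != bits))
```

Because each time-domain symbol is spread over all subcarriers, a deep fade on one subcarrier is averaged out by the IDFT, which is the frequency-diversity effect the abstract refers to.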
Abstract:
A simple and efficient two-step hybrid electrochemical-thermal route was developed for the synthesis of large quantities of ZnO nanoparticles, using an aqueous sodium bicarbonate electrolyte and a sacrificial Zn anode and cathode in an undivided cell under galvanostatic mode at room temperature. The bath concentration and current density were varied from 30 to 120 mmol and from 0.05 to 1.5 A/dm^2. The electrochemically generated precursor was calcined for an hour at different temperatures in the range 140 to 600 °C. The calcined samples were characterized by XRD, SEM/EDX, TEM, TG-DTA, FT-IR, and UV-Vis spectral methods. Rietveld refinement of the X-ray data indicates that the calcined compound exhibits the hexagonal (wurtzite) structure with space group P63mc (No. 186). The crystallite sizes were in the range of 22-75 nm based on the Debye-Scherrer equation. The TEM results reveal particle sizes of the order of 30-40 nm. A blue shift was noticed in the UV-Vis absorption spectra; the band gaps were found to be 5.40-5.11 eV. Scanning electron micrographs suggest that all the samples have a randomly oriented granular morphology.
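The crystallite-size estimate mentioned above follows the Debye-Scherrer equation D = K*lambda / (beta * cos(theta)). A small sketch with the Cu K-alpha wavelength and a hypothetical peak width (the abstract does not give the actual peak data):

```python
import numpy as np

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D = K*lambda / (beta * cos(theta)).

    beta is the peak FWHM converted to radians; theta is the Bragg angle,
    i.e. half of the measured 2-theta position.
    """
    beta = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Cu K-alpha radiation (0.15406 nm) and an illustrative ZnO (101) peak.
D = scherrer_size(0.15406, fwhm_deg=0.25, two_theta_deg=36.25)
# D comes out around 33 nm, within the 22-75 nm range cited above.
```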
Abstract:
In this paper, we propose a training-based channel estimation scheme for large non-orthogonal space-time block coded (STBC) MIMO systems. The proposed scheme employs a block transmission strategy in which an Nt x Nt pilot matrix is sent (for training purposes) followed by several Nt x Nt square data STBC matrices, where Nt is the number of transmit antennas. At the receiver, we iterate between channel estimation (using an MMSE estimator) and detection (using a low-complexity likelihood ascent search (LAS) detector) until convergence or for a fixed number of iterations. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed scheme at low complexity. The fact that we could show such good results for large STBCs (e.g., the 16 x 16 STBC from cyclic division algebras) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads of pilot-based channel estimation and turbo coding) establishes the effectiveness of the proposed scheme.
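The pilot-based MMSE channel estimation step can be sketched as follows, assuming the standard model Y = H P + N with i.i.d. unit-variance channel entries; the scaled-identity pilot matrix and noise level below are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def mmse_channel_estimate(Y, P, noise_var):
    """MMSE estimate of H from Y = H @ P + N, for i.i.d. unit-variance H."""
    Nt = P.shape[0]
    G = P.conj().T @ np.linalg.inv(P @ P.conj().T + noise_var * np.eye(Nt))
    return Y @ G

rng = np.random.default_rng(2)
Nt = 16  # matches the 16 x 16 STBC example above
H = (rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))) / np.sqrt(2)
P = np.sqrt(Nt) * np.eye(Nt)  # assumed orthogonal pilot matrix
noise_var = 0.1
N = np.sqrt(noise_var / 2) * (rng.standard_normal((Nt, Nt))
                              + 1j * rng.standard_normal((Nt, Nt)))
Y = H @ P + N

H_hat = mmse_channel_estimate(Y, P, noise_var)
mse = float(np.mean(np.abs(H_hat - H) ** 2))
```

In the full scheme this estimate would then be handed to the LAS detector, and the detected data re-used to refine the estimate over a few iterations.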
Abstract:
An attempt has been made to diagnose the dominant forcings that drive the large-scale vertical velocities over the monsoon region, by computing forcings such as the diabatic heating fields, together with the large-scale vertical velocities they drive, for contrasting periods of active and break monsoon conditions, in order to understand the associated rainfall variability. Computation of the diabatic heating fields shows that, among its components, convective heating dominates at mid-tropospheric levels during an active monsoon period, whereas sensible heating at the surface is the important term during a break period. From the vertical velocity calculations we infer that the main differences in the large-scale vertical velocities seen throughout the depth of the atmosphere are due to differences in the magnitude of convective heating: the maximum rate of latent heating exceeds 10 K per day during an active monsoon period, whereas during a break period it is of the order of 2 K per day at mid-tropospheric levels. At low levels of the atmosphere, the computations show large-scale ascent over a large spatial region during an active monsoon period, driven only by the dynamic forcing associated with vorticity and temperature advection. During a break period, such large-scale spatial organization in rising motion is not seen. It is speculated that these differences in the low-level large-scale ascent may in turn cause the differences in convective heating: the weaker the low-level ascent, the lesser the convective instability that produces deep cumulus clouds, and hence the lesser the associated latent heat release. The forcings due to the other components of diabatic heating, namely sensible heating and long-wave radiative cooling, do not influence the large-scale vertical velocities significantly.
Abstract:
We propose a unified model for large-signal and small-signal non-quasi-static analysis of the long-channel symmetric double-gate MOSFET. The model is physics based and relies only on the very basic approximations needed for a charge-based model. It is based on the EKV formalism [Enz C, Vittoz EA. Charge-based MOS transistor modeling. Wiley; 2006] and is valid in all regions of operation, making it suitable for RF circuit design. The proposed model is verified against a professional numerical device simulator, and excellent agreement is found.
Abstract:
Gene expression is one of the most critical factors influencing the phenotype of a cell. As a result of several technological advances, measuring gene expression levels has become one of the most common molecular biology measurements used to study the behaviour of cells. The scientific community has produced an enormous and constantly increasing collection of gene expression data from various human cells, in both healthy and pathological conditions. However, while each of these studies is informative and enlightening in its own context and research setup, diverging methods and terminologies make it very challenging to integrate existing gene expression data into a more comprehensive view of human transcriptome function. On the other hand, bioinformatic science advances only through data integration and synthesis. The aim of this study was to develop biological and mathematical methods to overcome these challenges, to construct an integrated database of the human transcriptome, and to demonstrate its usage. The methods developed in this study can be divided into two distinct parts. First, the biological and medical annotation of the existing gene expression measurements needed to be encoded with systematic vocabularies. No single existing biomedical ontology or vocabulary was suitable for this purpose; thus, a new annotation terminology was developed as part of this work. The second part was to develop mathematical methods to correct the noise and systematic differences/errors in the data caused by the various array generations. Additionally, suitable computational methods were needed for sample collection and archiving, unique sample identification, database structures, data retrieval, and visualization. Bioinformatic methods were developed to analyze gene expression levels and putative functional associations of human genes using the integrated gene expression data.
A method to interpret individual gene expression profiles across all the healthy and pathological tissues of the reference database was also developed. As a result of this work, 9783 human gene expression samples measured by Affymetrix microarrays were integrated to form a unique human transcriptome resource, GeneSapiens. This makes it possible to analyze the expression levels of 17330 genes across 175 types of healthy and pathological human tissues. Application of this resource to interpret individual gene expression measurements allowed identification of the tissue of origin with 92.0% accuracy among 44 healthy tissue types. A systematic analysis of the transcriptional activity levels of 459 kinase genes was performed across 44 healthy and 55 pathological tissue types, and a genome-wide analysis of kinase gene co-expression networks was carried out. This analysis revealed biologically and medically interesting data on putative kinase gene functions in health and disease. Finally, we developed a method for alignment of gene expression profiles (AGEP) to analyze individual patient samples, pinpointing gene- and pathway-specific changes in the test sample relative to the reference transcriptome database. We also showed how large-scale gene expression data resources can be used to quantitatively characterize changes in the transcriptomic program of differentiating stem cells. Taken together, these studies demonstrate the power of systematic bioinformatic analyses to infer biological and medical insights from existing published datasets and to facilitate the interpretation of new molecular profiling data from individual patients.
Abstract:
The design of speaker identification schemes for a small number of speakers (around 10) with a high degree of accuracy in a controlled environment is a practical proposition today. When the number of speakers is large (say 50–100), many of these schemes cannot be directly extended, as both recognition error and computation time increase monotonically with population size. The feature selection problem is also complex for such schemes. Though there have been earlier attempts to rank-order features based on statistical distance measures, it has been observed only recently that the two best individual measurements are not necessarily the best pair when combined for pattern classification. We propose here a systematic approach to the problem using a decision tree or hierarchical classifier, with the following objectives: (1) design of the optimal policy at each node of the tree given the tree structure, i.e., the tree skeleton and the features to be used at each node; (2) determination of the optimal feature measurement and decision policy given only the tree skeleton. The applicability of optimization procedures such as dynamic programming to the design of such trees is studied. The experimental results deal with the design of a 50-speaker identification scheme based on this approach.
Abstract:
Proteases belonging to the M20 family are characterized by diverse substrate specificity and participate in several metabolic pathways. The Staphylococcus aureus metallopeptidase, Sapep, is a member of the aminoacylase-I/M20 protein family. This protein is a Mn2+-dependent dipeptidase. The crystal structure of this protein in the Mn2+-bound form and in the open, metal-free state suggests that large interdomain movements could potentially regulate the activity of this enzyme. We note that the extended inactive conformation is stabilized by a disulfide bond in the vicinity of the active site. Although these cysteines, Cys(155) and Cys(178), are not active site residues, the reduced form of this enzyme is substantially more active as a dipeptidase. These findings acquire further relevance given a recent observation that this enzyme is only active in methicillin-resistant S. aureus. The structural and biochemical features of this enzyme provide a template for the design of novel methicillin-resistant S. aureus-specific therapeutics.
Abstract:
Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in polynomial time in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time with large datasets. The theory of binary matrices gives rise to robust heuristics that have good performance with synthetic data and discover easily interpretable structures in real-world datasets: dialectical variation in the spoken Finnish language, division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.
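The consecutive-ones pattern and the flip count described above can be illustrated for a fixed column order; note that full C1P detection also searches over column permutations (e.g. with PQ-trees), which this sketch does not attempt:

```python
import numpy as np

def has_c1p_rows(M):
    """True if every row's 1s are consecutive under the current column order."""
    for row in M:
        ones = np.flatnonzero(row)
        if len(ones) and ones[-1] - ones[0] + 1 != len(ones):
            return False
    return True

def flips_to_c1p_row(row):
    """Minimum 0->1 flips making one row's 1s consecutive (order kept fixed)."""
    ones = np.flatnonzero(row)
    if len(ones) == 0:
        return 0
    return (ones[-1] - ones[0] + 1) - len(ones)  # count zeros inside the 1-span

M = np.array([[1, 1, 0, 0],
              [0, 1, 0, 1],   # 1s not consecutive: one internal zero
              [0, 0, 1, 1]])
ok = has_c1p_rows(M)                                 # False
flips = sum(flips_to_c1p_row(r) for r in M)          # 1 flip repairs the matrix
```

This per-row count is easy; as stated above, minimizing flips jointly over row and column permutations is what makes the general distance problems NP-complete.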
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or a mere occurrence of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in the terms of the application.
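The significance-testing idea can be sketched as an empirical p-value against samples from a null model; the Bernoulli null below is a simplification (null models for binary matrices often preserve row and column sums via swap randomization instead):

```python
import numpy as np

def empirical_p_value(observed_stat, null_stats):
    """One-sided empirical p-value with the standard +1 correction."""
    null_stats = np.asarray(null_stats)
    return (1 + int(np.sum(null_stats >= observed_stat))) / (1 + len(null_stats))

# Toy example: is the number of 1s in the data exceptional under a
# Bernoulli(0.5) null of the same shape?
rng = np.random.default_rng(3)
data = np.ones((10, 10), dtype=int)      # extreme dataset: all 1s
obs = data.sum()
null = [rng.integers(0, 2, data.shape).sum() for _ in range(999)]
p = empirical_p_value(obs, null)         # small p: the pattern is exceptional
```

A small p-value says the observed statistic is rarely matched under the null, i.e. the pattern is unlikely to be a chance occurrence.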
Abstract:
This study uses the European Centre for Medium-Range Weather Forecasts (ECMWF) model-generated high-resolution 10-day predictions for the Year of Tropical Convection (YOTC) 2008. Precipitation forecast skill of the model over the tropics is evaluated against the Tropical Rainfall Measuring Mission (TRMM) estimates. It is shown that the model was able to capture the monthly to seasonal mean features of tropical convection reasonably well. Northward propagation of convective bands over the Bay of Bengal was also forecast realistically up to 5 days in advance, including the onset phase of the monsoon during the first half of June 2008. However, large errors exist in the daily datasets, especially at longer lead times over smaller domains. For shorter lead times (less than 4-5 days), forecast errors are much smaller over the oceans than over land. Moreover, the rate of increase of errors with lead time is rapid over the oceans and is confined to the regions where observed precipitation shows large day-to-day variability. It is shown that this rapid growth of errors over the oceans is related to the spatial pattern of near-surface air temperature, probably due to the one-way air-sea interaction in the atmosphere-only model used for forecasting. While the prescribed surface temperature over the oceans remains realistic at shorter lead times, the pattern, and hence the gradient, of the surface temperature is not altered by changes in atmospheric parameters at longer lead times. It is also shown that the ECMWF model had considerable difficulty in forecasting very low and very heavy precipitation intensities over South Asia: the model has too few grid points with "zero" precipitation or with heavy (>40 mm day^(-1)) precipitation. On the other hand, drizzle-like precipitation is too frequent in the model compared to the TRMM datasets.
Further analysis shows that a major source of error in the ECMWF precipitation forecasts is the diurnal cycle over the South Asian monsoon region. The peak intensity of precipitation in the model forecasts over land (ocean) appears about 6 (9) h earlier than in the observations. Moreover, the amplitude of the diurnal cycle is much higher in the model forecasts than in the TRMM estimates. The phase error of the diurnal cycle is seen to increase with forecast lead time, and the error in monthly mean 3-hourly precipitation forecasts is about 2-4 times the error in the daily mean datasets. Thus, effort should be devoted to improving the phase and amplitude of the forecast diurnal cycle of precipitation in the model.
Abstract:
We report a detailed investigation of resistance noise in single-layer graphene films on Si/SiO2 substrates obtained by chemical vapor deposition (CVD) on copper foils. We find the noise in these systems to be rather large; when expressed in the form of the phenomenological Hooge equation, it corresponds to a Hooge parameter as large as 0.1-0.5. We also find the variation of the noise magnitude with gate voltage (or carrier density) and temperature to be surprisingly weak, unlike the behavior of noise in other forms of graphene, in particular exfoliated graphene. doi:10.1063/1.3493655
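The phenomenological Hooge relation used to quantify the noise magnitude is S_R(f)/R^2 = alpha_H / (N f), with N the number of carriers in the sample. A small sketch solving it for alpha_H, with purely illustrative numbers rather than the paper's measurements:

```python
def hooge_parameter(S_R, R, f, n_carriers):
    """Solve the Hooge relation S_R/R^2 = alpha_H/(N f) for alpha_H."""
    return n_carriers * f * S_R / R**2

# Hypothetical values for a graphene channel at f = 10 Hz.
R = 2.0e3       # resistance, ohm (assumed)
f = 10.0        # frequency, Hz
N = 1.0e8       # carriers in the channel, n * area (assumed)
S_R = 1.2e-3    # resistance power spectral density at f, ohm^2/Hz (assumed)

alpha = hooge_parameter(S_R, R, f, N)
# alpha = 0.3 here, i.e. within the 0.1-0.5 range reported above.
```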