873 results for Multiple scales method


Relevance:

30.00%

Publisher:

Abstract:

Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data is unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites, and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions which are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using alternative physically-based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in alternative areas composed of watersheds with statistically different physical composition. 
In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data-rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the area used for model development, provided that differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
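The screening for physically discordant sites described above can be illustrated with a Hosking–Wallis-style discordancy statistic; the dissertation's own metric differs in detail, so this numpy sketch (with invented site characteristics such as slope and elevation) is a generic illustration, not the author's method:

```python
import numpy as np

def discordancy(u):
    """Hosking-Wallis style discordancy D_i for each site.

    u : (n_sites, n_features) array of site characteristics
        (e.g. basin slope, elevation, soil drainage class).
    Large D_i flags a site as discordant with the rest of
    the candidate region.
    """
    u = np.asarray(u, dtype=float)
    n = u.shape[0]
    dev = u - u.mean(axis=0)
    S = dev.T @ dev                       # sums of squares and cross-products
    Sinv = np.linalg.inv(S)
    return np.array([n / 3.0 * d @ Sinv @ d for d in dev])

# Six mutually similar sites plus one physically dissimilar site:
sites = np.array([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9],
                  [1.0, 2.2], [1.2, 2.0], [0.8, 2.1],
                  [5.0, 9.0]])
D = discordancy(sites)
print(int(D.argmax()))   # index of the most discordant site -> 6
```

A useful check on the implementation is the identity that the D_i sum to N·p/3 for p features, which also holds for this sketch.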

Relevance:

30.00%

Publisher:

Abstract:

Light-frame wood buildings are widely built in the United States (U.S.), and natural hazards cause substantial losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that their influence on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causes substantial economic losses, and threatens life safety, yet little work has investigated the snow hazard in combination with a seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction.
For homeowners and stakeholders, risk expressed in terms of economic losses is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of the building subjected to mainshock-aftershock sequences; aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess the loss of the building subjected to combined earthquake and snow loads. The proposed framework is shown to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
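A filtered Poisson process of the general kind named above can be sketched as follows; the event rate, depth distribution, and melt time constant used here are hypothetical placeholders, not the study's calibrated values:

```python
import random
import math

def simulate_snow_fpp(rate, mean_depth, tau, horizon, dt=0.1, seed=1):
    """Filtered Poisson Process sketch of ground snow load.

    Snow events arrive as a Poisson process (rate per day); each
    event deposits an exponentially distributed depth (mean_depth)
    that then decays with time constant tau (melting/compaction).
    Returns the total load sampled on a regular time grid.
    """
    rng = random.Random(seed)
    # Poisson arrivals via exponential inter-arrival times
    events, t = [], rng.expovariate(rate)
    while t < horizon:
        events.append((t, rng.expovariate(1.0 / mean_depth)))
        t += rng.expovariate(rate)
    grid = [i * dt for i in range(round(horizon / dt))]
    return [sum(w * math.exp(-(s - tk) / tau)
                for tk, w in events if tk <= s) for s in grid]

# One 90-day winter season with hypothetical parameters:
load = simulate_snow_fpp(rate=0.5, mean_depth=0.1, tau=5.0, horizon=90.0)
```

Unlike a Bernoulli (on/off) snow model, the filtered process lets successive storms accumulate, which is the behavior the abstract says the Bernoulli model misses.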

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND AND OBJECTIVES: Nerve blocks using local anesthetics are widely used. High volumes are usually injected, which may predispose patients to associated adverse events. The introduction of ultrasound guidance facilitates volume reduction, but the minimal effective volume is unknown. In this study, we estimated the 50% effective dose (ED50) and 95% effective dose (ED95) volumes of 1% mepivacaine, relative to the cross-sectional area of the nerve, for an adequate sensory block. METHODS: To reduce the number of healthy volunteers, we used a volume-reduction protocol based on the up-and-down procedure according to the Dixon average method. The ulnar nerve was scanned at the proximal forearm, and its cross-sectional area was measured by ultrasound. In the first volunteer, a volume of 0.4 mL/mm² of nerve cross-sectional area was injected under ultrasound guidance in close proximity to and around the nerve using a multiple-injection technique. The volume in the next volunteer was reduced by 0.04 mL/mm² in case of complete blockade and augmented by the same amount in case of incomplete sensory blockade within 20 minutes. After 3 up-and-down cycles, the ED50 and ED95 were estimated. Volunteers and the physicians performing the block were blinded to the volume used. RESULTS: A total of 17 volunteers were investigated. The ED50 volume was 0.08 mL/mm² (SD, 0.01 mL/mm²), and the ED95 volume was 0.11 mL/mm² (SD, 0.03 mL/mm²). The mean cross-sectional area of the nerves was 6.2 mm² (SD, 1.0 mm²). CONCLUSIONS: Based on the ultrasound-measured cross-sectional area and using ultrasound guidance, a mean volume of 0.7 mL represents the ED95 dose of 1% mepivacaine to block the ulnar nerve at the proximal forearm.
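The up-and-down assignment rule can be sketched as below; the outcome sequence is invented for illustration, and the estimator shown is the simple mean-of-reversal-doses variant rather than necessarily the exact Dixon computation used in the study:

```python
def up_and_down(start, step, outcomes):
    """Up-and-down dose assignment sketch.

    outcomes: booleans per volunteer (True = complete block).
    The dose is decreased by `step` after a success and increased
    after a failure; ED50 is estimated as the mean dose at the
    reversal points (where the outcome flips).
    """
    doses = [start]
    for ok in outcomes:
        doses.append(max(doses[-1] - step, step) if ok
                     else doses[-1] + step)
    reversal_doses = [doses[i] for i in range(1, len(outcomes))
                      if outcomes[i] != outcomes[i - 1]]
    ed50 = sum(reversal_doses) / len(reversal_doses)
    return doses, ed50

# Hypothetical sequence: initial run of successes, then oscillation
# around the threshold (units: mL/mm^2, start and step as in the study):
doses, ed50 = up_and_down(0.4, 0.04,
                          [True, True, True, True, False, True, False, True])
print(round(ed50, 2))   # -> 0.26
```

The oscillation around a fixed level is what makes the design sample-efficient: most observations land near the ED50 itself.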

Relevance:

30.00%

Publisher:

Abstract:

Eight premature infants ventilated for hyaline membrane disease and enrolled in the OSIRIS surfactant trial were studied. Lung mechanics, gas exchange [PaCO2, arterial/alveolar PO2 ratio (a/A ratio)], and ventilator settings were determined 20 minutes before and 20 minutes after the end of Exosurf instillation, and subsequently at 12-24 hour intervals. Respiratory system compliance (Crs) and resistance (Rrs) were measured by means of the single-breath occlusion method. After surfactant instillation there were no significant immediate changes in PaCO2 (36 vs. 37 mmHg), a/A ratio (0.23 vs. 0.20), Crs (0.32 vs. 0.31 mL/cm H2O/kg), or Rrs (0.11 vs. 0.16 cm H2O/mL/s) (pooled data of 18 measurement pairs). During the clinical course, the mean a/A ratio improved significantly at each interval, from 0.17 (time 0) to 0.29 (time 12-13 hours), to 0.39 (time 24-36 hours), and to 0.60 (time 48-61 hours), even though mean airway pressure was reduced substantially. Mean Crs increased significantly from 0.28 mL/cm H2O/kg (time 0) to 0.38 (time 12-13 hours), to 0.37 (time 24-36 hours), and to 0.52 (time 48-61 hours), whereas mean Rrs increased from 0.10 cm H2O/mL/s (time 0) to 0.11 (time 12-13 hours), to 0.13 (time 24-36 hours) and to (time 48-61 hours) with no overall significance. A highly significant correlation was found between Crs and the a/A ratio (r = 0.698, P < 0.001). We conclude that, unlike the instillation of (modified) natural surfactant preparations, Exosurf does not induce immediate changes in oxygenation. However, after 12 and 24 hours of treatment, oxygenation and Crs improve significantly.
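The single-breath occlusion method rests on two relations: Crs = V/P at the airway occlusion, and τ = Rrs·Crs from the passive expiratory time constant. A minimal sketch with hypothetical numbers (not the infants' data above):

```python
def single_breath_occlusion(v_exh, p_ao, tau):
    """Respiratory mechanics from the single-breath occlusion method.

    v_exh : passively exhaled volume (mL)
    p_ao  : airway-opening plateau pressure during occlusion (cm H2O)
    tau   : passive expiratory time constant (s), e.g. from the
            slope of the flow-volume curve
    Crs = V / P; and since tau = Rrs * Crs, Rrs = tau / Crs.
    """
    crs = v_exh / p_ao        # compliance, mL/cm H2O
    rrs = tau / crs           # resistance, cm H2O * s / mL
    return crs, rrs

# Hypothetical occlusion: 20 mL exhaled against 10 cm H2O, tau = 0.3 s
crs, rrs = single_breath_occlusion(20.0, 10.0, 0.3)
print(crs, rrs)   # -> 2.0 0.15
```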

Relevance:

30.00%

Publisher:

Abstract:

A combinatorial protocol (CP) is introduced here and interfaced with multiple linear regression (MLR) for variable selection. The efficiency of CP-MLR is primarily based on restricting the entry of correlated variables at the model development stage. It has been used for the analysis of the Selwood et al. data set [16], and the resulting models are compared with those reported from the GFA [8] and MUSEUM [9] approaches. For this data set, CP-MLR identified three highly independent models (27, 28 and 31) with Q² values in the range 0.518-0.632; these models are also divergent and unique. Even though the present study does not share any models with the GFA [8] and MUSEUM [9] results, several descriptors are common to all of these studies, including the present one. A simulation is also carried out on the same data set to explain model formation in CP-MLR. The results demonstrate that the proposed method should be able to offer solutions for data sets with 50 to 60 descriptors in a reasonable time frame. By carefully selecting the inter-parameter correlation cutoff values in CP-MLR, one can identify divergent models and handle data sets larger than the present one without excessive computer time.
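The core CP-MLR idea, blocking entry of descriptors that are correlated with those already in the model, can be sketched as a correlation-restricted forward selection; this is a generic reconstruction on synthetic data, not the published algorithm:

```python
import numpy as np

def cp_mlr_select(X, y, r_cut=0.5, k_max=3):
    """Correlation-restricted forward MLR selection sketch.

    A descriptor may enter only if its absolute correlation with
    every descriptor already in the model is below r_cut, mimicking
    CP-MLR's restriction on the entry of correlated variables.
    Returns the selected column indices.
    """
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    selected = []
    while len(selected) < k_max:
        best, best_r2 = None, -1.0
        for j in range(p):
            if j in selected:
                continue
            if any(abs(R[j, s]) >= r_cut for s in selected):
                continue                  # blocked: too correlated
            cols = selected + [j]
            A = np.column_stack([X[:, cols], np.ones(n)])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
            if r2 > best_r2:
                best, best_r2 = j, r2
        if best is None:
            break
        selected.append(best)
    return selected

# Synthetic data: descriptor 1 is a near-duplicate of descriptor 0,
# and y depends on descriptors 0 and 2:
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=40)
y = X[:, 0] + 2.0 * X[:, 2] + 0.1 * rng.normal(size=40)
sel = cp_mlr_select(X, y)
print(sel)   # descriptors 0 and 1 can never co-occur in a model
```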

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE To clinically evaluate the treatment of Miller Class I and II multiple adjacent gingival recessions using the modified coronally advanced tunnel technique combined with a newly developed bioresorbable collagen matrix of porcine origin. METHOD AND MATERIALS Eight healthy patients, each exhibiting at least three Miller Class I and II multiple adjacent gingival recessions (a total of 42 recessions), were consecutively treated by means of the modified coronally advanced tunnel technique and collagen matrix. The following clinical parameters were assessed at baseline and 12 months postoperatively: full mouth plaque score (FMPS), full mouth bleeding score (FMBS), probing depth (PD), recession depth (RD), recession width (RW), keratinized tissue thickness (KTT), and keratinized tissue width (KTW). The primary outcome variable was complete root coverage. RESULTS Neither allergic reactions nor soft tissue irritations or matrix exfoliations occurred. Postoperative pain and discomfort were reported to be low, and patient acceptance was generally high. At 12 months, complete root coverage was obtained in 2 of the 8 patients and in 30 of the 42 recessions (71%). CONCLUSION Within their limitations, the present results indicate that treatment of Miller Class I and II multiple adjacent gingival recessions by means of the modified coronally advanced tunnel technique and collagen matrix may result in statistically and clinically significant complete root coverage. Further studies are warranted to evaluate the performance of the collagen matrix compared with connective tissue grafts and other soft tissue grafts.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE Texture analysis is an alternative method to quantitatively assess MR images. In this study, we introduce dynamic texture parameter analysis (DTPA), a novel technique to investigate the temporal evolution of texture parameters using dynamic susceptibility contrast enhanced (DSCE) imaging. Here, we aim to introduce the method and its application to enhancing lesions (EL), non-enhancing lesions (NEL) and normal appearing white matter (NAWM) in multiple sclerosis (MS). METHODS We investigated 18 patients with MS or clinically isolated syndrome (CIS), according to the 2010 McDonald criteria, using DSCE imaging at different field strengths (1.5 and 3 Tesla). Tissues of interest (TOIs) were defined within 27 EL, 29 NEL and 37 NAWM areas after normalization, and eight histogram-based texture parameter maps (TPMs) were computed. TPMs quantify the heterogeneity of the TOI. For every TOI, the average, variance, skewness, kurtosis and variance-of-the-variance statistical parameters were calculated. These TOI parameters were further analyzed using one-way ANOVA followed by Wilcoxon rank sum tests corrected for multiple comparisons. RESULTS Tissue- and time-dependent differences were observed in the dynamics of the computed texture parameters. Sixteen parameters discriminated between EL, NEL and NAWM (pAVG = 0.0005). Significant differences in the DTPA texture maps were found during inflow (52 parameters), outflow (40 parameters) and reperfusion (62 parameters). The strongest discriminators among the TPMs were the variance-related parameters, while the skewness and kurtosis TPMs were in general less sensitive to differences between the tissues. CONCLUSION DTPA of DSCE image time series revealed characteristic time responses for ELs, NELs and NAWM. This may be further used for a refined quantitative grading of MS lesions during their evolution from the acute to the chronic state.
DTPA discriminates lesions beyond features of enhancement or T2 hypersignal, on a numeric scale that allows a more subtle grading of MS lesions.
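The histogram-based texture parameters named in the abstract (average, variance, skewness, kurtosis) can be computed per DSCE time point roughly as follows; the array layout and the excess-kurtosis convention are assumptions of this sketch:

```python
import numpy as np

def texture_stats(toi_timeseries):
    """Dynamic texture parameter sketch: histogram statistics of a
    tissue of interest (TOI) at each DSCE time point.

    toi_timeseries : (n_times, n_voxels) intensity array.
    Returns per-time-point average, variance, skewness and
    (excess) kurtosis, the parameter classes named in the abstract.
    """
    x = np.asarray(toi_timeseries, dtype=float)
    mu = x.mean(axis=1, keepdims=True)
    d = x - mu
    var = (d ** 2).mean(axis=1)
    sd = np.sqrt(var)
    skew = (d ** 3).mean(axis=1) / sd ** 3
    kurt = (d ** 4).mean(axis=1) / var ** 2 - 3.0
    return {"average": mu.ravel(), "variance": var,
            "skewness": skew, "kurtosis": kurt}

# Toy TOI: a symmetric-intensity time point, then one with a bright outlier
ts = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
               [2.0, 2.0, 2.0, 2.0, 12.0]])
stats = texture_stats(ts)
print(stats["skewness"])   # symmetric -> 0; outlier -> positive skew
```

Tracking how these statistics change from inflow through reperfusion is the "dynamic" part of the analysis.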

Relevance:

30.00%

Publisher:

Abstract:

A two-pronged approach for the automatic quantitation of multiple sclerosis (MS) lesions on magnetic resonance (MR) images has been developed. This method includes the design and use of a pulse sequence for improved lesion-to-tissue contrast (LTC) and seeks to identify and minimize the sources of false lesion classifications in segmented images. The new pulse sequence, referred to as AFFIRMATIVE (Attenuation of Fluid by Fast Inversion Recovery with MAgnetization Transfer Imaging with Variable Echoes), improves the LTC, relative to spin-echo images, by combining Fluid-Attenuated Inversion Recovery (FLAIR) and Magnetization Transfer Contrast (MTC). In addition to acquiring fast FLAIR/MTC images, the AFFIRMATIVE sequence simultaneously acquires fast spin-echo (FSE) images for spatial registration of images, which is necessary for accurate lesion quantitation. Flow has been found to be a primary source of false lesion classifications. Therefore, an imaging protocol and reconstruction methods are developed to generate "flow images" which depict both coherent (vascular) and incoherent (CSF) flow. An automatic technique is designed for the removal of extra-meningeal tissues, since these are known to be sources of false lesion classifications. A retrospective, three-dimensional (3D) registration algorithm is implemented to correct for patient movement which may have occurred between AFFIRMATIVE and flow imaging scans. Following application of these pre-processing steps, images are segmented into white matter, gray matter, cerebrospinal fluid, and MS lesions based on AFFIRMATIVE and flow images using an automatic algorithm. All algorithms are seamlessly integrated into a single MR image analysis software package. Lesion quantitation has been performed on images from 15 patient volunteers. The total processing time is less than two hours per patient on a SPARCstation 20. 
The automated nature of this approach should provide an objective means of monitoring the progression, stabilization, and/or regression of MS lesions in large-scale, multi-center clinical trials.

Relevance:

30.00%

Publisher:

Abstract:

Dike swarms consisting of tens to thousands of subparallel dikes are commonly observed at Earth's surface, raising the possibility of simultaneous propagation of two or more dikes at various stages of a swarm's development. The behavior of multiple propagating dikes differs from that of a single dike owing to the interacting stress fields associated with each dike. We analyze an array of parallel, periodically spaced dikes that grow simultaneously from an overpressured source into a semi-infinite, linear elastic host rock. To simplify the analysis, we assume steady-state (constant velocity) magma flow and dike propagation. We use a perturbation method to analyze the coupled, nonlinear problem of multiple dike propagation and magma transport. The stress intensity factor at the dike tips and the opening displacements of the dike surfaces are calculated. The numerical results show that dike spacing has a profound effect on the behavior of dike propagation. The stress intensity factors at the tips of parallel dikes decrease with decreasing dike spacing and are significantly smaller than that of a single dike of the same length. The reduced stress intensity factor indicates that, compared to a single dike, propagation of parallel dikes is more likely to be arrested under otherwise identical conditions. It also implies that the fracture toughness of the host rock in a high confining pressure environment may not be as high as inferred from the propagation of a single dike. Our numerical results suggest fracture toughness values on the order of 100 MPa√m. The opening displacements of parallel dikes are smaller than those of a single dike, which results in higher magma pressure gradients in parallel dikes and a lower magma transport flux.
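For orientation, the single-dike reference value against which the parallel-dike reduction is measured is the classical pressurized-crack result K_I = Δp·√(πa); the overpressure and half-length below are illustrative, not values from the paper:

```python
import math

def k_single_dike(dp, a):
    """Mode-I stress intensity at the tip of a single 2-D crack
    under uniform internal overpressure dp with half-length a:
    K_I = dp * sqrt(pi * a). Interaction with neighbouring
    parallel dikes reduces K below this single-dike value.
    """
    return dp * math.sqrt(math.pi * a)

# e.g. 10 MPa overpressure, 300 m half-length:
k = k_single_dike(10e6, 300.0)
print(round(k / 1e6, 1), "MPa*sqrt(m)")   # ~307 MPa*sqrt(m)
```

A dike propagates only while K_I exceeds the host-rock fracture toughness, so any spacing-induced reduction of K_I relative to this value directly promotes arrest.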

Relevance:

30.00%

Publisher:

Abstract:

The mismatching of alveolar ventilation and perfusion (VA/Q) is the major determinant of impaired gas exchange. The gold standard for measuring VA/Q distributions is based on measurements of the elimination and retention of infused inert gases. Conventional multiple inert gas elimination technique (MIGET) uses gas chromatography (GC) to measure the inert gas partial pressures, which requires tonometry of blood samples with a gas that can then be injected into the chromatograph. The method is laborious and requires meticulous care. A new technique based on micropore membrane inlet mass spectrometry (MMIMS) facilitates the handling of blood and gas samples and provides nearly real-time analysis. In this study we compared MIGET by GC and MMIMS in 10 piglets: 1) 3 with healthy lungs; 2) 4 with oleic acid injury; and 3) 3 with isolated left lower lobe ventilation. The different protocols ensured a large range of normal and abnormal VA/Q distributions. Eight inert gases (SF6, krypton, ethane, cyclopropane, desflurane, enflurane, diethyl ether, and acetone) were infused; six of these gases were measured with MMIMS, and six were measured with GC. We found close agreement of retention and excretion of the gases and the constructed VA/Q distributions between GC and MMIMS, and predicted PaO2 from both methods compared well with measured PaO2. VA/Q by GC produced more widely dispersed modes than MMIMS, explained in part by differences in the algorithms used to calculate VA/Q distributions. In conclusion, MMIMS enables faster measurement of VA/Q, is less demanding than GC, and produces comparable results.
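MIGET rests on the steady-state retention relation R = λ/(λ + V̇A/Q̇) for each lung compartment, which both the GC and MMIMS variants invert to recover the V̇A/Q̇ distribution. A sketch with illustrative partition coefficients (the λ values below are approximate, not the study's calibration):

```python
def retention(lam, va_q):
    """Steady-state inert-gas retention of one lung compartment:
    R = lam / (lam + VA/Q), where lam is the blood-gas partition
    coefficient. MIGET inverts this relation over many compartments
    to recover the VA/Q distribution from measured retentions.
    """
    return lam / (lam + va_q)

# Partition coefficients spanning ~5 orders of magnitude are what
# makes the six-gas set informative (illustrative values):
gases = {"SF6": 0.005, "ethane": 0.1, "cyclopropane": 0.55,
         "enflurane": 2.0, "diethyl ether": 12.0, "acetone": 300.0}
for name, lam in gases.items():
    print(name, round(retention(lam, 1.0), 3))
```

Low-solubility gases (SF6) are retained only by shunt-like units, while high-solubility gases (acetone) are retained almost completely, so the measured retention pattern encodes the whole V̇A/Q̇ distribution.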

Relevance:

30.00%

Publisher:

Abstract:

In this study, the development of a new sensitive method for the analysis of alpha-dicarbonyls glyoxal (G) and methylglyoxal (MG) in environmental ice and snow is presented. Stir bar sorptive extraction with in situ derivatization and liquid desorption (SBSE-LD) was used for sample extraction, enrichment, and derivatization. Measurements were carried out using high-performance liquid chromatography coupled to electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS). As part of the method development, SBSE-LD parameters such as extraction time, derivatization reagent, desorption time and solvent, and the effect of NaCl addition on the SBSE efficiency as well as measurement parameters of HPLC-ESI-MS/MS were evaluated. Calibration was performed in the range of 1–60 ng/mL using spiked ultrapure water samples, thus incorporating the complete SBSE and derivatization process. 4-Fluorobenzaldehyde was applied as internal standard. Inter-batch precision was <12 % RSD. Recoveries were determined by means of spiked snow samples and were 78.9 ± 5.6 % for G and 82.7 ± 7.5 % for MG, respectively. Instrumental detection limits of 0.242 and 0.213 ng/mL for G and MG were achieved using the multiple reaction monitoring mode. Relative detection limits referred to a sample volume of 15 mL were 0.016 ng/mL for G and 0.014 ng/mL for MG. The optimized method was applied for the analysis of snow samples from Mount Hohenpeissenberg (close to the Meteorological Observatory Hohenpeissenberg, Germany) and samples from an ice core from Upper Grenzgletscher (Monte Rosa massif, Switzerland). Resulting concentrations were 0.085–16.3 ng/mL for G and 0.126–3.6 ng/mL for MG. Concentrations of G and MG in snow were 1–2 orders of magnitude higher than in ice core samples. The described method represents a simple, green, and sensitive analytical approach to measure G and MG in aqueous environmental samples.
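The calibration and detection-limit arithmetic described above can be sketched as an ordinary least-squares fit of internal-standard-normalised response versus spiked concentration; the calibration points and blank standard deviation below are hypothetical, chosen only to mirror the 1-60 ng/mL range:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b          # intercept, slope

def lod_from_blank(blank_sd, slope, k=3.0):
    """Detection limit as k * SD(blank) / slope (k = 3 for LOD)."""
    return k * blank_sd / slope

# Hypothetical spiked-water calibration (conc in ng/mL, response as
# peak-area ratio to the 4-fluorobenzaldehyde internal standard):
conc = [1.0, 5.0, 10.0, 20.0, 40.0, 60.0]
resp = [0.9, 5.1, 10.2, 19.8, 40.3, 59.9]
a, b = fit_line(conc, resp)
print(round(b, 2), round(lod_from_blank(0.08, b), 2))
```

Because the spiked standards pass through the full SBSE and derivatization workflow, the fitted slope already folds in the extraction efficiency, as the abstract notes.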

Relevance:

30.00%

Publisher:

Abstract:

Air was sampled from the porous firn layer at the NEEM site in Northern Greenland. We use an ensemble of ten reference tracers of known atmospheric history to characterise the transport properties of the site. By analysing uncertainties in both the data and the reference gas atmospheric histories, we can objectively assign weights to each of the gases used for the depth-diffusivity reconstruction. We define an objective root mean square criterion that is minimised in the model tuning procedure. Each tracer constrains the firn profile differently through its unique atmospheric history and free-air diffusivity, making our multiple-tracer characterisation method a clear improvement over the commonly used single-tracer tuning. Six firn air transport models are tuned to the NEEM site; all models successfully reproduce the data within a 1σ Gaussian distribution. A comparison between two replicate boreholes drilled 64 m apart shows differences in measured mixing ratio profiles that exceed the experimental error. We find evidence that diffusivity does not vanish completely in the lock-in zone, as is commonly assumed. The ice age-gas age difference (Δage) at the firn-ice transition is calculated to be 182 +3/−9 yr. We further present the first intercomparison study of firn air models, where we introduce diagnostic scenarios designed to probe specific aspects of the model physics. Our results show that there are major differences in the way the models handle advective transport. Furthermore, diffusive fractionation of isotopes in the firn is poorly constrained by the models, which has consequences for attempts to reconstruct the isotopic composition of trace gases back in time using firn air and ice core records.
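The objective root-mean-square tuning criterion can be sketched as an uncertainty-weighted RMS misfit over the reference tracers; the exact weighting used by the authors may differ from this generic form:

```python
def weighted_rms(model, data, sigma):
    """Uncertainty-weighted RMS model-data mismatch: each tracer
    measurement is weighted by its combined (measurement +
    atmospheric-history) uncertainty sigma, so well-constrained
    tracers dominate the diffusivity tuning.
    """
    terms = [((m - d) / s) ** 2 for m, d, s in zip(model, data, sigma)]
    return (sum(terms) / len(terms)) ** 0.5

# A model matching every point to within 1 sigma scores <= 1:
score = weighted_rms([1.0, 2.0, 3.0], [1.1, 1.9, 3.0], [0.2, 0.2, 0.1])
print(round(score, 3))   # -> 0.408
```

Minimising this score over the depth-diffusivity profile is what "tuning a firn model to the site" means operationally.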

Relevance:

30.00%

Publisher:

Abstract:

Bone marrow ablation, i.e., the complete sterilization of the active bone marrow, followed by bone marrow transplantation (BMT), is a common treatment for hematological malignancies. The use of targeted bone-seeking radiopharmaceuticals to selectively deliver radiation to the adjacent bone marrow cavities while sparing normal tissues is a promising technique. Current radiopharmaceutical treatment planning methods do not properly compensate for the patient-specific, variable distribution of radioactive material within the skeleton. To improve the current method of internal dosimetry, novel methods for measuring the radiopharmaceutical distribution within the skeleton were developed. 99mTc-MDP was shown to be an adequate surrogate for measuring 166Ho-DOTMP skeletal uptake and biodistribution, allowing these measures to be obtained faster, more safely, and with higher spatial resolution. This translates directly into better measurements of the radiation dose distribution within the bone marrow. The resulting bone marrow dose-volume histograms allow prediction of the patient disease response where conventional organ-scale dosimetry failed. They indicate that complete remission is only achieved when greater than 90% of the bone marrow receives at least 30 Gy. Comprehensive treatment planning requires combining target and non-target organ dosimetry. Organs in the urinary tract were of special concern. The kidney dose is primarily dependent upon the mean transit time of 166Ho-DOTMP through the kidney; deconvolution analysis of renograms predicted a mean transit time of 2.6 minutes for 166Ho-DOTMP. The radiation dose to the urinary bladder wall is dependent upon numerous factors, including patient hydration and void schedule. For beta-emitting isotopes such as 166Ho, reduction of the bladder wall dose is best accomplished through good patient hydration and ensuring a partially full bladder at the time of injection.
Encouraging the patient to void frequently, or catheterizing the patient without irrigation, will not significantly reduce the bladder wall dose. The results from this work will produce the most advanced treatment planning methodology currently available for bone marrow ablation therapy using radioisotopes. Treatments can be tailored specifically for each patient, including the addition of concomitant total body irradiation for patients with unfavorable dose distributions, to deliver a desired patient disease response while minimizing the dose or toxicity to non-target organs.
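The remission criterion quoted above (complete remission only when more than 90% of the marrow receives at least 30 Gy) is a dose-volume-histogram coverage statement; a minimal sketch with hypothetical per-voxel marrow doses:

```python
def dvh_coverage(doses, threshold):
    """Fraction of bone-marrow voxels receiving at least
    `threshold` Gy, the coverage quantity on which the abstract's
    remission criterion is based.
    """
    return sum(1 for d in doses if d >= threshold) / len(doses)

# Hypothetical per-voxel marrow doses (Gy):
marrow = [28.0, 31.0, 35.0, 40.0, 33.0, 29.5, 36.0, 45.0, 32.0, 38.0]
frac = dvh_coverage(marrow, 30.0)
print(frac, frac > 0.9)   # -> 0.8 False (criterion not met)
```

This is exactly where voxel-scale dosimetry beats organ-scale dosimetry: the mean dose of this distribution exceeds 30 Gy, yet 20% of the marrow is underdosed.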

Relevance:

30.00%

Publisher:

Abstract:

The PROPELLER (Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction) magnetic resonance imaging (MRI) technique has inherent advantages over other fast imaging methods, including robust motion correction, reduced image distortion, and resistance to off-resonance effects. These features make PROPELLER highly desirable for T2*-sensitive imaging, high-resolution diffusion imaging, and many other applications. However, PROPELLER has been predominantly implemented as a fast spin-echo (FSE) technique, which is insensitive to T2* contrast and requires time-inefficient signal averaging to achieve adequate signal-to-noise ratio (SNR) for many applications. These issues presently constrain the potential clinical utility of FSE-based PROPELLER. In this research, our aim was to extend and enhance the potential applications of PROPELLER MRI by developing a novel multiple gradient echo PROPELLER (MGREP) technique that can overcome the aforementioned limitations. The MGREP pulse sequence was designed to acquire multiple gradient-echo images simultaneously, without any increase in total scan time or RF energy deposition relative to FSE-based PROPELLER. A new parameter was also introduced for direct user control over gradient echo spacing, to allow variable sensitivity to T2* contrast. In parallel with pulse sequence development, an improved algorithm for motion correction was developed and evaluated against the established method through extensive simulations. The potential advantages of MGREP over FSE-based PROPELLER were illustrated via three specific applications: (1) quantitative T2* measurement, (2) time-efficient signal averaging, and (3) high-resolution diffusion imaging. Relative to the FSE-PROPELLER method, the MGREP sequence was found to yield quantitative T2* values, increase SNR by ~40% without any increase in acquisition time or RF energy deposition, and noticeably improve image quality in high-resolution diffusion maps.
In addition, the new motion algorithm was found to considerably improve motion-artifact reduction. Overall, this work demonstrated a number of enhancements and extensions to existing PROPELLER techniques. The new technical capabilities of PROPELLER imaging developed in this thesis research are expected to serve as the foundation for further expanding the scope of PROPELLER applications.
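The ~40% SNR figure is consistent with idealised √N averaging of echoes (√2 ≈ 1.41 for two echoes per shot); this is back-of-envelope arithmetic, not the sequence's actual SNR model, since T2* decay across the echo train reduces the real gain:

```python
import math

def snr_gain(n_echoes):
    """Idealised SNR gain from averaging n equal-SNR, independent
    echoes: sqrt(n). T2* decay across the echo train makes the
    achievable gain somewhat smaller in practice.
    """
    return math.sqrt(n_echoes)

print(round((snr_gain(2) - 1) * 100))   # -> 41 (% over a single echo)
```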

Relevance:

30.00%

Publisher:

Abstract:

In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously poses a multiple-testing problem and will give false-positive results. Although this problem can be effectively dealt with through several approaches, such as Bonferroni correction, permutation testing and false discovery rates, patterns of joint effects of several genes, each with a weak effect, might not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset among big data sets where the number of feature SNPs far exceeds the number of observations. In this study, we take two steps to achieve the goal. First, we selected 1000 SNPs through an effective filter method; then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck (sIB) method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis (LDA) in terms of classification performance. Finally, we performed chi-square tests to look at the relationship between each SNP and disease from another point of view.
In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of a small subset (one SNP, two SNPs, or a 3-SNP subset based on the best 100 composite 2-SNPs) can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, owing to overfitting from observing more complex subset states. Our results also indicate that HMSS, as a criterion to evaluate the classification ability of a function, can be used on imbalanced data without modifying the original dataset, in contrast to classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and its ability to detect the target status is superior to that of traditional LDA in this study. From our results we can see that the best test probability-HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be not less than 0.4. On the other hand, the highest test accuracy of sIB for diagnosing a disease among cases can reach 0.748644, 0.789916, 0.705701 and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.
A further genome-wide association study through chi-square testing shows that no significant SNPs are detected at the cut-off level 9.09451E-08 in the Framingham heart study of CVD. Study results in WTCCC can only detect two significant SNPs that are associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease through the chi-square test at the cut-off value 1.11E-07. Although our classification methods can achieve high accuracy in the study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or an efficient computing system, neither of which can currently be accomplished in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
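The HMSS criterion is simply the harmonic mean of sensitivity and specificity; the sketch below also shows why it is more informative than plain accuracy on imbalanced case/control data:

```python
def hmss(tp, fn, tn, fp):
    """Harmonic mean of sensitivity and specificity (HMSS), the
    imbalance-robust criterion used to score SNP subsets:
    HMSS = 2 * Se * Sp / (Se + Sp).
    """
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return 2 * se * sp / (se + sp) if se + sp else 0.0

def accuracy(tp, fn, tn, fp):
    return (tp + tn) / (tp + fn + tn + fp)

# On imbalanced data, a trivial "everyone is a control" classifier
# gets high accuracy but zero HMSS, since it never detects a case:
print(accuracy(0, 10, 90, 0), hmss(0, 10, 90, 0))   # -> 0.9 0.0
```

Because both error rates must be low to score well, HMSS can be applied to imbalanced data directly, without resampling the original dataset.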