937 results for rank-based procedure


Relevance:

30.00%

Publisher:

Abstract:

High density oligonucleotide expression arrays are a widely used tool for the measurement of gene expression on a large scale. Affymetrix GeneChip arrays appear to dominate this market. These arrays use short oligonucleotides to probe for genes in an RNA sample. Due to optical noise, non-specific hybridization, probe-specific effects, and measurement error, ad hoc measures of expression that summarize probe intensities can lead to imprecise and inaccurate results. Various researchers have demonstrated that expression measures based on simple statistical models can provide great improvements over the ad hoc procedure offered by Affymetrix. Recently, physical models based on molecular hybridization theory have been proposed as useful tools for prediction of, for example, non-specific hybridization. These physical models show great potential in terms of improving existing expression measures. In this paper we demonstrate that the system producing the measured intensities is too complex to be fully described with these relatively simple physical models, and we propose empirically motivated stochastic models that complement the above-mentioned molecular hybridization theory to provide a comprehensive description of the data. We discuss how the proposed model can be used to obtain improved measures of expression useful to data analysts.
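The stochastic models themselves are not specified in this abstract; as a minimal, hypothetical sketch of model-based probe-set summarization (in the spirit of the median-polish fit used by RMA, not the authors' model), one might write:

```python
import numpy as np

def median_polish(log_intensity, n_iter=10):
    """Fit log(intensity) ~ chip effect + probe effect by median polish.

    Rows are probes, columns are chips/arrays; the chip effects serve
    as the expression summary for each array."""
    resid = log_intensity.copy()
    chip = np.zeros(resid.shape[1])
    probe = np.zeros(resid.shape[0])
    for _ in range(n_iter):
        row_med = np.median(resid, axis=1)      # probe effects
        probe += row_med
        resid -= row_med[:, None]
        col_med = np.median(resid, axis=0)      # chip (expression) effects
        chip += col_med
        resid -= col_med[None, :]
    return chip, probe, resid

# toy probe set: 4 probes x 3 arrays with known expression differences
rng = np.random.default_rng(0)
true_expr = np.array([5.0, 6.0, 7.0])
probe_eff = np.array([-0.5, 0.0, 0.2, 0.3])
y = true_expr[None, :] + probe_eff[:, None] + 0.01 * rng.standard_normal((4, 3))
expr, _, _ = median_polish(y)
```

The recovered chip effects are only defined up to an additive constant, so differences between arrays, not absolute levels, are the meaningful quantity.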

Relevance:

30.00%

Publisher:

Abstract:

For the past sixty years, waveguide slot radiator arrays have played a critical role in microwave radar and communication systems. They feature a well-characterized antenna element capable of direct integration into a low-loss feed structure with highly developed and inexpensive manufacturing processes. Waveguide slot radiators comprise some of the highest-performance antenna arrays ever constructed in terms of side-lobe level, efficiency, and related metrics. A wealth of information is available in the open literature regarding design procedures for linearly polarized waveguide slots. By contrast, despite their presence in some of the earliest published reports, little has been presented to date on array designs for circularly polarized (CP) waveguide slots. Moreover, that which has been presented features a classic traveling-wave, efficiency-reducing beam tilt. This work proposes a unique CP waveguide slot architecture which mitigates these problems, together with a thorough design procedure employing widely available, modern computational tools. The proposed array topology features simultaneous dual-CP operation with grating-lobe-free, broadside radiation, high aperture efficiency, and good return loss. A traditional X-slot CP element is employed with the inclusion of a slow-wave-structure passive phase shifter to ensure broadside radiation without the need for performance-limiting dielectric loading. It is anticipated this technology will be advantageous for upcoming polarimetric radar and Ka-band SatCom systems. The presented design methodology represents a philosophical shift away from traditional waveguide slot radiator design practices. Rather than providing design curves and/or analytical expressions for equivalent circuit models, simple first-order design rules, generated via parametric studies, are presented with the understanding that device optimization and design will be carried out computationally.
A unit-cell, S-parameter based approach provides a sufficient reduction of complexity to permit efficient, accurate device design with attention to realistic, application-specific mechanical tolerances. A transparent, start-to-finish example of the design procedure for a linear sub-array at X-band is presented. Both unit-cell and array performance are calculated via finite element method simulations. Results are confirmed via good agreement with finite-difference time-domain calculations. Array performance exhibiting grating-lobe-free, broadside-scanned, dual-CP radiation with better than 20 dB return loss and over 75% aperture efficiency is presented.
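The unit-cell, S-parameter based approach can be illustrated generically: convert each two-port unit cell's S-matrix to a wave transfer (T) matrix, multiply the T-matrices along the chain, and convert back. The matched phase-shift cell below is a toy example, not the thesis' slot geometry:

```python
import numpy as np

def s_to_t(S):
    """2-port S-matrix -> wave transfer (T) matrix for cascading."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    det = S11 * S22 - S12 * S21
    return (1.0 / S21) * np.array([[-det, S11], [-S22, 1.0]])

def t_to_s(T):
    """Inverse mapping: T-matrix back to an S-matrix."""
    T11, T12, T21, T22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    det = T11 * T22 - T12 * T21
    return np.array([[T12 / T22, det / T22], [1.0 / T22, -T21 / T22]])

def cascade(cells):
    """Overall S-matrix of two-port cells connected in series."""
    T = np.eye(2, dtype=complex)
    for S in cells:
        T = T @ s_to_t(S)
    return t_to_s(T)

# toy unit cell: matched section with 30 degrees of insertion phase
theta = np.deg2rad(30.0)
cell = np.array([[0.0, np.exp(-1j * theta)],
                 [np.exp(-1j * theta), 0.0]], dtype=complex)
array_S = cascade([cell] * 4)   # four identical unit cells
```

Four matched 30-degree cells give a matched two-port with 120 degrees of total insertion phase, which is easy to verify against the cascaded result.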

Relevance:

30.00%

Publisher:

Abstract:

In the past, protease-substrate finding proved to be rather haphazard and was executed by in vitro cleavage assays using singly selected targets. In the present study, we report the first protease proteomic approach applied to meprin, an astacin-like metalloendopeptidase, to determine physiological substrates in a cell-based system of Madin-Darby canine kidney epithelial cells. A simple 2D IEF/SDS/PAGE-based image analysis procedure was designed to find candidate substrates in conditioned media of Madin-Darby canine kidney cells expressing meprin in zymogen or in active form. The method enabled the discovery of hitherto unknown meprin substrates, with shortened (non-trypsin-generated) N- and C-terminally truncated cleavage products identified in peptide fragments upon LC-MS/MS analysis. Of 22 (17 nonredundant) candidate substrates identified, the proteolytic processing of vinculin, lysyl oxidase, collagen type V and annexin A1 was analysed by means of immunoblotting validation experiments. The classification of substrates into functional groups may suggest new functions for meprins in the regulation of cell homeostasis, the extracellular environment, and innate immunity.

Relevance:

30.00%

Publisher:

Abstract:

Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data is unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites, and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions which are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using alternative physically-based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in alternative areas composed of watersheds with statistically different physical composition. 
In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data-rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent of that used for model development, provided differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
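As a toy illustration of the delineation-and-classification idea (not the dissertation's actual statistical procedure), one can cluster standardized basin characteristics and assign an ungauged site to the nearest region centroid; the characteristics and values below are hypothetical:

```python
import numpy as np

def two_means(X, n_iter=20):
    """Tiny 2-cluster k-means with deterministic initialization."""
    centers = np.stack([X[0], X[-1]]).astype(float)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# hypothetical basin characteristics: [slope, elevation (m), drainage index]
gauged = np.array([
    [0.020, 100.0, 1.0], [0.030, 120.0, 1.1], [0.025, 90.0, 0.9],
    [0.300, 900.0, 3.0], [0.280, 950.0, 3.2], [0.330, 880.0, 2.8],
])
mu, sd = gauged.mean(axis=0), gauged.std(axis=0)
centers, labels = two_means((gauged - mu) / sd)   # standardize, then cluster

def classify_ungauged(site):
    """Assign an ungauged site to the nearest region centroid."""
    z = (np.asarray(site) - mu) / sd
    return int(((centers - z) ** 2).sum(-1).argmin())

region = classify_ungauged([0.31, 920.0, 3.1])   # steep upland site
```

Standardizing before clustering matters here: elevation would otherwise dominate the Euclidean distances simply because of its units.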

Relevance:

30.00%

Publisher:

Abstract:

There has been a continuous evolutionary process in asphalt pavement design. In the beginning it was crude and based on past experience. Through research, empirical methods were developed based on materials response to specific loading at the AASHO Road Test. Today, pavement design has progressed to a mechanistic-empirical method. This methodology takes into account the mechanical properties of the individual layers and uses empirical relationships to relate them to performance. The mechanical tests that are used as part of this methodology include dynamic modulus and flow number, which have been shown to correlate with field pavement performance. This thesis was based on a portion of a research project being conducted at Michigan Technological University (MTU) for the Wisconsin Department of Transportation (WisDOT). The global scope of this project dealt with the development of a library of values for the mechanical properties of the asphalt pavement mixtures paved in Wisconsin. Additionally, a comparison of the current associated pavement design with that of the new AASHTO Design Guide was conducted. This thesis describes the development of the current pavement design methodology as well as the associated tests as part of a literature review. This report also details the materials that were sampled from field operations around the state of Wisconsin and their testing preparation and procedures. Testing was conducted on available round-robin mixtures and three Wisconsin mixtures, and the main results of the research were as follows: the test history of the Superpave SPT (fatigue and permanent deformation dynamic modulus) does not affect the mean response for either dynamic modulus or flow number, but does increase the variability in the test results of the flow number.
The method of specimen preparation, compacting to test geometry versus sawing/coring to test geometry, does not statistically appear to affect the intermediate and high temperature dynamic modulus and flow number test results. The 2002 AASHTO Design Guide simulations support the findings of the statistical analyses that the method of specimen preparation did not impact the performance of the HMA as a structural layer as predicted by the Design Guide software. The methodologies for determining the temperature-viscosity relationship as stipulated by Witczak are sensitive to the viscosity test temperatures employed. An increase in asphalt binder content of 0.3% was found to actually increase the dynamic modulus at the intermediate and high test temperatures as well as the flow number. This result was based on the testing that was conducted and was contradictory to previous research and the hypothesis that was put forth for this thesis. This result should be used with caution and requires further review. Based on the limited results presented herein, the asphalt binder grade appears to have a greater impact on performance in the Superpave SPT than aggregate angularity. Dynamic modulus and flow number were shown to increase with traffic level (requiring an increase in aggregate angularity) and with a decrease in air voids, confirming the hypotheses regarding these two factors. Accumulated micro-strain at flow number, as opposed to flow number itself, appeared to be a promising measure for comparing the quality of specimens within a specific mixture. At the current time the Design Guide and its associated software need to be further improved prior to implementation by owner/agencies.
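A comparison like the specimen-preparation finding can be sketched with a Welch two-sample t-test using SciPy; the modulus replicates below are invented for illustration:

```python
import numpy as np
from scipy import stats

# hypothetical dynamic-modulus replicates (MPa) for the two specimen
# preparation methods; values are illustrative only, not thesis data
compacted = np.array([9800.0, 10150.0, 9950.0, 10300.0, 10050.0])
sawed_cored = np.array([10000.0, 9900.0, 10200.0, 10100.0, 9850.0])

# Welch's t-test (unequal variances) on the group means
t_stat, p_value = stats.ttest_ind(compacted, sawed_cored, equal_var=False)
no_difference = p_value > 0.05   # consistent with the thesis finding
```

With means this close relative to the replicate scatter, the test fails to reject equality, mirroring the "no statistical effect of specimen preparation" conclusion.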

Relevance:

30.00%

Publisher:

Abstract:

This thesis develops an effective modeling and simulation procedure for a specific thermal energy storage system commonly used and recommended for various applications (such as an auxiliary energy storage system for a solar-heating-based Rankine cycle power plant). This thermal energy storage system transfers heat from a hot fluid (termed the heat transfer fluid, HTF) flowing in a tube to the surrounding phase change material (PCM). Through an unsteady melting or freezing process, the PCM absorbs or releases thermal energy in the form of latent heat. Both scientific and engineering information is obtained by the proposed first-principles-based modeling and simulation procedure. On the scientific side, the approach accurately tracks the moving melt front (modeled as a sharp liquid-solid interface) and provides all necessary information about the time-varying heat-flow rates, temperature profiles, stored thermal energy, etc. On the engineering side, the proposed approach is unique in its ability to accurately solve, both individually and collectively, all the conjugate unsteady heat transfer problems for each of the components of the thermal storage system. This yields critical system-level information on the various time-varying effectiveness and efficiency parameters for the thermal storage system.

Relevance:

30.00%

Publisher:

Abstract:

Airway access is needed for a number of experimental animal models, and the majority of animal research is based on mouse models. Anatomical structures in mice are small, and the narrow glottic opening permits intubation only with a delicate technique. We therefore developed a microscopic endotracheal intubation method with a wire guide technique in mice anaesthetized with halothane in oxygen. The mouse is hung perpendicularly with its incisors on a thread fixed on a vertical plate. The tongue is held with a pair of forceps between the left hand's thumb and forefinger and slightly pulled, while the neck and thorax are positioned using the third and fourth fingers. In this way, the neck can be slightly stretched, which allows optimal visualization of the larynx and the vocal cords. To ensure a safe intubation, a fine wire guide is placed under vision between the vocal cords and advanced about 5 mm into the trachea. An intravenous 22G x 1 in. plastic or Teflon catheter is guided over this wire. In a series of 41 mice weighing between 21 and 38 g, the success rate for the first intubation attempt was >95%. Certainty of the judgement procedure was 100%, and in a further series the success rate was higher with the described method than with a transillumination method. The technique is safe, less invasive than tracheostomy and suitable for controlled ventilation and pulmonary substance application.

Relevance:

30.00%

Publisher:

Abstract:

Fuzzy community detection is to identify fuzzy communities in a network: groups of vertices such that the membership of a vertex in a community lies in [0,1] and the memberships of each vertex across all communities sum to 1. Fuzzy communities are pervasive in social networks, but little work has been done on fuzzy community detection. Recently, a one-step extension of Newman's Modularity, the most popular quality function for disjoint community detection, resulted in the Generalized Modularity (GM), which demonstrates good performance in finding well-known fuzzy communities. Thus, GM is chosen as the quality function in our research. We first propose a generalized fuzzy t-norm modularity to investigate the effect of different fuzzy intersection operators on fuzzy community detection, since the introduction of a fuzzy intersection operation is made feasible by GM. The experimental results show that the Yager operator with a proper parameter value performs better than the product operator in revealing community structure. Then, we focus on how to find optimal fuzzy communities in a network by directly maximizing GM, which we call the Fuzzy Modularity Maximization (FMM) problem. The effort on the FMM problem results in the major contribution of this thesis: an efficient and effective GM-based fuzzy community detection method that automatically discovers a fuzzy partition of a network when one is appropriate, much better than the fuzzy partitions found by existing fuzzy community detection methods, and a crisp partition when that is appropriate, competitive with the partitions produced by the best disjoint community detection methods to date. We address the FMM problem by iteratively solving a sub-problem called One-Step Modularity Maximization (OSMM).
We present two approaches for solving this iterative procedure: a tree-based global optimizer called Find Best Leaf Node (FBLN) and a heuristic-based local optimizer. The OSMM problem reduces to a simplified quadratic knapsack problem that can be solved in linear time; thus, a solution of OSMM can be found in linear time. Since the OSMM algorithm is called within FBLN recursively and the structure of the search tree is non-deterministic, the FMM/FBLN algorithm runs in a time complexity of at least O(n²). We therefore also propose several highly efficient and very effective heuristic algorithms, namely the FMM/H algorithms. We compared our proposed FMM/H algorithms with two state-of-the-art community detection methods, modified MULTICUT Spectral Fuzzy c-Means (MSFCM) and a Genetic Algorithm with a Local Search strategy (GALS), on 10 real-world data sets. The experimental results suggest that the H2 variant of FMM/H is the best-performing version. The H2 algorithm is very competitive with GALS in producing maximum modularity partitions and performs much better than MSFCM. On all 10 data sets, H2 is also 2-3 orders of magnitude faster than GALS. Furthermore, by adopting a slightly modified version of the H2 algorithm as a mutation operator, we designed a genetic algorithm for fuzzy community detection, namely GAFCD, in which elite selection and early termination are applied. The crossover operator is designed to make GAFCD converge fast and to enhance GAFCD's ability to escape local minima. Experimental results on all the data sets show that GAFCD uncovers better community structure than GALS.
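With the product t-norm, the generalized modularity can be written compactly as trace(U^T B U)/2m for a membership matrix U and modularity matrix B; a minimal sketch on a two-clique toy graph (the optimizers themselves are not reproduced here):

```python
import numpy as np

def generalized_modularity(A, U):
    """Fuzzy (generalized) modularity with the product t-norm.

    A: symmetric adjacency matrix; U: n x c membership matrix whose
    rows sum to 1 (a crisp partition is the 0/1 special case)."""
    k = A.sum(axis=1)
    m2 = A.sum()                       # equals 2m for an undirected graph
    B = A - np.outer(k, k) / m2        # modularity matrix
    return np.trace(U.T @ B @ U) / m2

# two 3-cliques joined by a single bridge edge (2-3)
A = np.zeros((6, 6))
for grp in ([0, 1, 2], [3, 4, 5]):
    for i in grp:
        for j in grp:
            if i != j:
                A[i, j] = 1.0
A[2, 3] = A[3, 2] = 1.0

crisp = np.repeat(np.eye(2), 3, axis=0)   # hard two-community partition
fuzzy = crisp.copy()
fuzzy[2] = fuzzy[3] = [0.5, 0.5]          # bridge nodes shared 50/50
q_crisp = generalized_modularity(A, crisp)
q_fuzzy = generalized_modularity(A, fuzzy)
```

On this graph the crisp split of the two cliques scores higher than blurring the bridge nodes, which is the expected behavior when the underlying communities are actually disjoint.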

Relevance:

30.00%

Publisher:

Abstract:

With the insatiable curiosity of human beings to explore the universe and our solar system, it is essential to benefit from larger propulsion capabilities to execute efficient transfers and carry more scientific equipment. In the field of space trajectory optimization, the fundamental advances in using low-thrust propulsion and exploiting multi-body dynamics have played a pivotal role in designing efficient space mission trajectories. The former provides larger cumulative momentum change in comparison with conventional chemical propulsion, whereas the latter results in almost ballistic trajectories with negligible amounts of propellant. However, the problem of space trajectory design translates into an optimal control problem which is, in general, time-consuming and very difficult to solve. Therefore, the goal of this thesis is to address the above problem by developing a methodology to simplify and facilitate the process of finding initial low-thrust trajectories in both two-body and multi-body environments. This initial solution not only provides mission designers with a better understanding of the problem and solution but also serves as a good initial guess for high-fidelity optimal control solvers and increases their convergence rate. Almost all high-fidelity solvers benefit from an initial guess that already satisfies the equations of motion and some of the most important constraints. Despite the nonlinear nature of the problem, we seek a robust technique for a wide range of typical low-thrust transfers with reduced computational intensity. Another important aspect of our methodology is the representation of low-thrust trajectories by Fourier series, with which the number of design variables is reduced significantly. Emphasis is placed on simplifying the equations of motion to the extent possible and on avoiding approximation of the controls. These choices contribute to speeding up the solution-finding procedure.
Several example applications of two- and three-dimensional two-body low-thrust transfers are considered. In addition, in multi-body dynamics, and in particular the restricted three-body problem, several Earth-to-Moon low-thrust transfers are investigated.
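The exact shape-based formulation is not given in the abstract; generically, representing a trajectory coordinate by a finite Fourier series turns the path into a small set of coefficient design variables, e.g.:

```python
import numpy as np

def fourier_radius(coeffs, t, T):
    """Evaluate r(t) = a0/2 + sum_k [a_k cos(2*pi*k*t/T) + b_k sin(2*pi*k*t/T)].

    coeffs = (a0, a1, b1, a2, b2, ...); in a shape-based method these
    coefficients would be the optimizer's design variables."""
    t = np.asarray(t, float)
    r = np.full_like(t, 0.5 * coeffs[0])
    for k in range(1, (len(coeffs) - 1) // 2 + 1):
        w = 2.0 * np.pi * k * t / T
        r += coeffs[2 * k - 1] * np.cos(w) + coeffs[2 * k] * np.sin(w)
    return r

# toy shape: radius rising smoothly from 1.0 to 1.5 over half a period,
# a stand-in for a spiral-out transfer leg (values are illustrative)
T = 2.0
t = np.linspace(0.0, T / 2, 200)
r = fourier_radius([2.5, -0.25, 0.0], t, T)
```

A single cosine term already encodes a smooth monotone radius history over the half period; boundary conditions on position and velocity would constrain some coefficients, leaving the rest free for optimization.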

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND AND OBJECTIVES: Nerve blocks using local anesthetics are widely used. High volumes are usually injected, which may predispose patients to associated adverse events. Introduction of ultrasound guidance facilitates the reduction of volume, but the minimal effective volume is unknown. In this study, we estimated the 50% effective dose (ED50) and 95% effective dose (ED95) volumes of 1% mepivacaine relative to the cross-sectional area of the nerve for an adequate sensory block. METHODS: To reduce the number of healthy volunteers, we used a volume reduction protocol based on the up-and-down procedure according to the Dixon average method. The ulnar nerve was scanned at the proximal forearm, and the cross-sectional area was measured by ultrasound. In the first volunteer, a volume of 0.4 mL/mm² of nerve cross-sectional area was injected under ultrasound guidance in close proximity to and around the nerve using a multiple injection technique. The volume in the next volunteer was reduced by 0.04 mL/mm² in case of complete blockade and augmented by the same amount in case of incomplete sensory blockade within 20 min. After 3 up-and-down cycles, ED50 and ED95 were estimated. Volunteers and physicians performing the block were blinded to the volume used. RESULTS: A total of 17 volunteers were investigated. The ED50 volume was 0.08 mL/mm² (SD, 0.01 mL/mm²), and the ED95 volume was 0.11 mL/mm² (SD, 0.03 mL/mm²). The mean cross-sectional area of the nerves was 6.2 mm² (SD, 1.0 mm²). CONCLUSIONS: Based on the ultrasound-measured cross-sectional area and using ultrasound guidance, a mean volume of 0.7 mL represents the ED95 dose of 1% mepivacaine to block the ulnar nerve at the proximal forearm.
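The up-and-down procedure can be illustrated by simulation; the logistic success model and its slope below are assumptions made for the sketch, not the study's data:

```python
import numpy as np

def up_down(true_ed50, start, step, n, seed=3):
    """Simulate a Dixon up-and-down dose-finding sequence.

    Block success is modeled with a steep logistic curve around the
    assumed true ED50; each subject's dose moves one step down after
    a success and one step up after a failure."""
    rng = np.random.default_rng(seed)
    dose, doses, successes = start, [], []
    for _ in range(n):
        p = 1.0 / (1.0 + np.exp(-(dose - true_ed50) / 0.01))
        ok = rng.random() < p
        doses.append(dose)
        successes.append(ok)
        dose += -step if ok else step   # reduce volume after a success
    return np.array(doses), np.array(successes)

# doses in mL/mm^2, mirroring the study's 0.4 start and 0.04 step
doses, successes = up_down(true_ed50=0.08, start=0.40, step=0.04, n=200)
ed50_hat = doses[20:].mean()   # crude stand-in for the Dixon average estimator
```

After a burn-in, the dose sequence oscillates around the 50% point, which is why averaging the visited doses recovers the ED50.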

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: (1) To assess spinal cord blood flow (SCBF) during surgical treatment of disk extrusion in dogs and (2) to investigate associations between SCBF, clinical signs, presurgical MRI images, and 24-hour surgical outcome. STUDY DESIGN: Cohort study. ANIMALS: Chondrodystrophic dogs with thoracolumbar disk extrusion (n=12). METHODS: Diagnosis was based on clinical signs and MRI findings, and confirmed at surgery. Regional SCBF was measured intraoperatively by laser-Doppler flowmetry before and immediately after surgical spinal cord decompression, and again after 15 minutes of lavage of the lesion. Care was taken to ensure a standardized surgical procedure to minimize factors that could influence measurement readings. RESULTS: A significant increase in intraoperative SCBF was found in all dogs (Wilcoxon's signed-rank test; P=.05) immediately after spinal cord decompression and after 15 minutes. Changes in SCBF were not associated with duration of clinical signs; initial or 24-hour neurologic status; or degree of spinal cord compression assessed by MRI. CONCLUSION: SCBF increases immediately after spinal cord decompression in dogs with disk herniation; however, increased SCBF was not associated with a diminished 24-hour neurologic status. CLINICAL RELEVANCE: An increase in SCBF appears to be neither associated with the degree of spinal cord compression nor of a magnitude sufficient to outweigh the benefit of surgical decompression by producing clinically relevant changes in 24-hour outcome.
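A Wilcoxon signed-rank comparison of paired before/after readings, the test named in the RESULTS, can be sketched with SciPy on invented data:

```python
import numpy as np
from scipy import stats

# illustrative paired laser-Doppler readings (perfusion units) for 12 dogs;
# the numbers are invented, not the study's measurements
before = np.array([42.0, 38, 51, 45, 39, 47, 44, 40, 49, 43, 46, 41])
after = before + np.array([6.0, 4, 9, 5, 7, 3, 8, 10, 2, 11, 12, 13])

# paired nonparametric test on the within-subject differences
stat, p = stats.wilcoxon(before, after)
significant_increase = bool(p < 0.05) and bool(np.all(after > before))
```

Because every paired difference is positive, the signed-rank statistic is at its extreme and the two-sided p-value is very small, matching the "significant increase in all dogs" pattern.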

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: The aim was to study the impact of the defect size of endodontically treated incisors, compared to dental implants as abutments, on the survival of zirconia two-unit anterior cantilever fixed partial dentures (2U-FPDs) during 10-year simulation. MATERIALS AND METHODS: Human maxillary central incisors were endodontically treated and divided into three groups (n = 24): I, access cavities rebuilt with composite core; II, teeth decoronated and restored with composite; and III, as II but supported by fiber posts. In group IV, implants with individual zirconia abutments were used. Specimens were restored with zirconia 2U-FPDs and exposed to two sequences of thermal cycling and mechanical loading (TCML). Statistics: Kaplan-Meier; log-rank tests. RESULTS: During TCML, two tooth fractures and two debondings with chipping were found in group I. Only chipping occurred in groups II (2×), IV (2×), and III (1×). No significantly different survival was found for the different abutments (p = 0.085) or FPDs (p = 0.526). Load capability differed significantly between groups I (176 N) and III (670 N), and between III and IV (324 N) (p < 0.024). CONCLUSION: Within the limitations of an in vitro study, it can be concluded that zirconia-framework 2U-FPDs on decoronated teeth with/without posts showed in vitro reliability comparable to restorations on implants. The results indicated that restorations on teeth with only an access cavity perform worse in survival and linear loading. CLINICAL RELEVANCE: From a load-capability point of view, even severe defects do not per se justify replacement of the tooth by a dental implant.
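The Kaplan-Meier estimate used for the survival comparison can be computed directly from failure and censoring times; a minimal sketch on invented loading-cycle counts:

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival curve.

    time: follow-up times; event: 1 = failure observed, 0 = censored
    (e.g. a specimen surviving the full TCML sequence)."""
    time, event = np.asarray(time), np.asarray(event, bool)
    times, surv = [], []
    s = 1.0
    for ti in np.unique(time):
        d = np.sum((time == ti) & event)   # failures at ti
        n = np.sum(time >= ti)             # specimens still at risk
        if d > 0:
            s *= 1.0 - d / n               # product-limit update
            times.append(ti)
            surv.append(s)
    return np.array(times), np.array(surv)

# invented failure/censoring data (loading cycles), illustration only
t, s = kaplan_meier(time=[5, 8, 8, 12, 15, 20, 20, 20],
                    event=[1, 1, 0, 1, 1, 0, 0, 0])
```

Censored specimens leave the risk set without contributing a failure, which is exactly why the product-limit estimator differs from a naive survival fraction.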

Relevance:

30.00%

Publisher:

Abstract:

This article addresses the issue of kriging-based optimization of stochastic simulators. Many of these simulators depend on factors that tune the level of precision of the response, the gain in accuracy coming at the price of computational time. The contribution of this work is two-fold: first, we propose a quantile-based criterion for the sequential design of experiments, in the fashion of the classical expected improvement criterion, which allows an elegant treatment of heterogeneous response precisions. Second, we present a procedure for the allocation of the computational time given to each measurement, allowing a better distribution of the computational effort and increased efficiency. Finally, the optimization method is applied to an original application in nuclear criticality safety. This article has supplementary material available online. The proposed criterion is available in the R package DiceOptim.
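The quantile-based criterion itself is not reproduced in the abstract; the classical expected improvement criterion that it generalizes can be sketched as follows (for minimization, given a Gaussian-process mean and standard deviation at candidate points):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """Classical EI for minimization at points with GP mean mu, sd sigma."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (y_best - mu) / sigma
        ei = (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # at zero predictive uncertainty, improvement is deterministic
    return np.where(sigma > 0, ei, np.maximum(y_best - mu, 0.0))

mu = np.array([0.0, 0.5, -0.2])      # GP posterior means (toy values)
sigma = np.array([1.0, 0.0, 0.3])    # GP posterior standard deviations
ei = expected_improvement(mu, sigma, y_best=0.1)
```

EI trades off predicted value against uncertainty: the very uncertain first point outranks the slightly better but confident third point, and a known-worse point with no uncertainty scores zero.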

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Drinking eight glasses of fluid or water each day is widely believed to improve health, but evidence is sparse and conflicting. We aimed to investigate the association between fluid consumption and long-term mortality and kidney function. METHODS: We conducted a longitudinal analysis within a prospective, population-based cohort study of 3858 men and women aged 49 years or older residing in Australia. Daily fluid intake from food and beverages, not including water, was measured using a food frequency questionnaire. We fitted multivariable-adjusted Cox proportional hazards models for all-cause and cardiovascular mortality and used a bootstrapping procedure for estimated glomerular filtration rate (eGFR). RESULTS: Upper and lower quartiles of daily fluid intake corresponded to >3 L and <2 L, respectively. During a median follow-up of 13.1 years (total 43 093 years at risk), 1127 deaths (26.1 per 1000 years at risk) including 580 cardiovascular deaths (13.5 per 1000 years at risk) occurred. Daily fluid intake (per 250 mL increase) was not associated with all-cause [adjusted hazard ratio (HR) 0.99 (95% CI 0.98-1.01)] or cardiovascular mortality [HR 0.98 (95% CI 0.95-1.01)]. Overall, eGFR declined by 2.2 mL/min per 1.73 m² (SD 10.9) in the 1207 (31%) participants who had repeat creatinine measurements, and this was not associated with fluid intake [adjusted regression coefficient 0.06 mL/min/1.73 m² per 250 mL increase (95% CI -0.03 to 0.14)]. CONCLUSIONS: Fluid intake from food and beverages excluding water is not associated with improved kidney function or reduced mortality.
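A percentile-bootstrap confidence interval for an eGFR-versus-intake regression slope (a simplified, unadjusted stand-in for the study's analysis, run on synthetic data) might look like:

```python
import numpy as np

def bootstrap_slope_ci(x, y, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for a simple linear-regression slope."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        # resample subjects with replacement, refit the slope each time
        slopes[b] = np.polyfit(x[idx[b]], y[idx[b]], 1)[0]
    return np.percentile(slopes, [2.5, 97.5])

# synthetic cohort: intake in 250 mL units vs eGFR change, with no true
# association built in (values are invented, not the study's data)
rng = np.random.default_rng(42)
intake = rng.uniform(6.0, 14.0, 300)
egfr_change = -2.2 + 0.0 * intake + rng.normal(0.0, 10.9, 300)
lo, hi = bootstrap_slope_ci(intake, egfr_change)
```

With no built-in association, the bootstrap interval straddles a slope near zero, the same qualitative picture as the study's near-null adjusted coefficient.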

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: This study sought to determine the vascular anatomical eligibility for catheter-based renal artery denervation (RDN) in hypertensive patients. BACKGROUND: Arterial hypertension is the leading cardiovascular risk factor for stroke and mortality globally. Despite substantial advances in drug-based treatment, many patients do not achieve target blood pressure levels. To increase the number of controlled patients, novel procedure- and device-based strategies have been developed. RDN is among the most promising novel techniques. However, there are few data on vascular anatomical eligibility. METHODS: We retrospectively analyzed 941 consecutive hypertensive patients undergoing coronary angiography and selective renal artery angiography between January 1, 2010, and May 31, 2012. Additional renal arteries were divided into 2 groups: hilar (accessory) and polar (aberrant) arteries. Anatomical eligibility for RDN was defined according to the current guidelines: absence of renal artery stenosis, renal artery diameter ≥4 mm, renal artery length ≥20 mm, and only 1 principal renal artery. RESULTS: A total of 934 hypertensive patients were evaluable. The prevalence of renal artery stenosis was 10% (n = 90). Of the remaining 844 patients without renal artery stenosis, 727 (86%) had nonresistant hypertension and 117 (14%) had resistant hypertension; 62 (53%) of the resistant hypertensive and 381 (52%) of the nonresistant hypertensive patients were anatomically eligible for sympathetic RDN. CONCLUSIONS: The vascular anatomical eligibility criteria of the current guidelines are a major limiting factor for the utilization of RDN as a therapeutic option. Development of new devices and/or techniques may significantly increase the number of candidates for these promising therapeutic options.
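The guideline screen described in the METHODS can be expressed as a simple predicate; the field names and the toy cohort below are hypothetical:

```python
# Sketch of the anatomical eligibility criteria listed in the abstract:
# no renal artery stenosis, diameter >= 4 mm, length >= 20 mm, and a
# single principal renal artery. Record fields are invented for this sketch.
def eligible_for_rdn(patient):
    """Anatomical eligibility for renal denervation per the listed criteria."""
    return (not patient["renal_artery_stenosis"]
            and patient["artery_diameter_mm"] >= 4
            and patient["artery_length_mm"] >= 20
            and patient["n_principal_arteries"] == 1)

cohort = [
    {"renal_artery_stenosis": False, "artery_diameter_mm": 5.2,
     "artery_length_mm": 32, "n_principal_arteries": 1},   # eligible
    {"renal_artery_stenosis": True, "artery_diameter_mm": 5.0,
     "artery_length_mm": 30, "n_principal_arteries": 1},   # stenosis
    {"renal_artery_stenosis": False, "artery_diameter_mm": 3.1,
     "artery_length_mm": 25, "n_principal_arteries": 2},   # small + dual
]
n_eligible = sum(eligible_for_rdn(p) for p in cohort)
```

Because every criterion must hold simultaneously, each additional anatomical requirement can only shrink the eligible fraction, which is the study's central point about the guidelines.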