791 results for Stream computing
Abstract:
Since the turn of the century, tributaries to the Missouri River in western Iowa have entrenched their channels to as much as six times their original depth. This channel degradation is accompanied by widening as the channel side slopes become unstable and landslides occur. The deepening and widening of these streams have endangered about 25% of the highway bridges in 13 counties [Lohnes et al., 1980]. Grade stabilization structures have been recommended as the most effective remedial measure for stream degradation [Brice et al., 1978]. In western Iowa, within the last seven years, reinforced concrete grade stabilization structures have cost between $300,000 and $1,200,000. Recognizing that the high cost of these structures may be prohibitive in many situations, the Iowa Department of Transportation (Iowa DOT) sponsored a study at Iowa State University (ISU) to find low-cost alternative structures. This was Phase I of the stream degradation study. Analytical and laboratory work led to the conclusion that alternative construction materials such as gabions and soil-cement might result in more economical structures [Lohnes et al., 1980]. The ISU study also recommended that six experimental structures be built and their performance evaluated. Phase II involved the design of the demonstration structures, and Phase III included monitoring and evaluating their performance.
Abstract:
Since the beginning of channel straightening at the turn of the century, the streams of western Iowa have degraded to 1.5 to 5 times their original depth. This vertical degradation is often accompanied by increases in channel width to 2 to 4 times the original width. The deepening and widening of these streams have jeopardized the structural safety of many bridges by undercutting footings or pile caps, exposing considerable lengths of piling, and removing soil beneath and adjacent to abutments. Various types of flume and drop structures have been introduced in an effort to partially or totally stabilize these channels, protecting or replacing bridge structures. Although there has always been a need for economical grade stabilization structures to stop stream channel degradation and protect highway bridges and culverts, the problem is especially critical at the present time due to rapidly increasing construction costs and decreasing revenues. Benefits derived from stabilization extend beyond the transportation sector to the agricultural sector, and increased public interest and attention are needed.
Abstract:
This study looks at how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, it is not possible to find an optimum value for the number of parallel jobs. A better solution is based on memory utilisation, but finding an optimum memory threshold is not straightforward. Therefore, a fuzzy logic-based algorithm was developed that can dynamically adapt the memory threshold based on the overall load. In this way, it is possible to keep memory consumption stable under different workloads while achieving significantly higher throughput and energy efficiency than traditional approaches using a fixed number of jobs or a fixed memory threshold.
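The adaptive-threshold idea in this abstract can be illustrated with a simple load-based controller. This is a minimal sketch only, not the paper's fuzzy logic algorithm: the function name, the 0.6/0.85 load bands, and the 0.5 GB step size are all assumptions made for illustration.

```python
def adapt_threshold(threshold_gb, mem_used_gb, mem_total_gb,
                    step_gb=0.5, low=0.6, high=0.85):
    """Toy controller: raise the job-admission memory threshold when the
    host has headroom, lower it under memory pressure. The bands and step
    are illustrative, not taken from the paper."""
    load = mem_used_gb / mem_total_gb
    if load > high:                                   # memory pressure
        return max(step_gb, threshold_gb - step_gb)   # admit fewer jobs
    if load < low:                                    # spare headroom
        return min(mem_total_gb, threshold_gb + step_gb)  # admit more jobs
    return threshold_gb                               # inside band: hold
```

A fuzzy variant would replace the hard `low`/`high` cutoffs with overlapping membership functions and blend the two adjustments, which avoids oscillation near the band edges.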
Abstract:
Motivation: Genome-wide association studies have become widely used tools to study effects of genetic variants on complex diseases. While it is of great interest to extend existing analysis methods by considering interaction effects between pairs of loci, the large number of possible tests presents a significant computational challenge. The number of computations is further multiplied in the study of gene expression quantitative trait mapping, in which tests are performed for thousands of gene phenotypes simultaneously. Results: We present FastEpistasis, an efficient parallel solution extending the PLINK epistasis module, designed to test for epistasis effects when analyzing continuous phenotypes. Our results show that the algorithm scales with the number of processors and offers a reduction in computation time when several phenotypes are analyzed simultaneously. FastEpistasis is capable of testing the association of a continuous trait with all single nucleotide polymorphism (SNP) pairs from 500 000 SNPs (125 billion tests in total) in a population of 5000 individuals, in 29, 4, or 0.5 days using 8, 64, or 512 processors, respectively.
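The test count quoted above follows directly from the number of unordered SNP pairs, and the three runtimes imply near-linear scaling; both are easy to verify:

```python
# Number of unordered SNP pairs, C(n, 2), for the 500,000 SNPs above.
n_snps = 500_000
n_pairs = n_snps * (n_snps - 1) // 2
print(n_pairs)  # 124999750000, i.e. ~125 billion tests

# Processor-days for each quoted configuration: near-constant cost
# (232 vs 256 vs 256) is what "scales with the number of processors" means.
for procs, days in [(8, 29), (64, 4), (512, 0.5)]:
    print(procs, procs * days)
```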
Abstract:
Today, organizations in the industrial sector face the challenge of optimizing their production systems to improve price, quality, and service level and to adapt to customer demands (production excellence). This analysis is based on the optimization of a production line for automotive sound-deadening felts through the elimination of existing losses (operations that add no value to the final product). The methodology used is the Value Stream Map (VSM). The VSM is a technique developed within the Lean Manufacturing production-management model; highly visual and easy to understand, it makes it possible to visualize and understand the current state of a process. It spans the entire organization, and its objective is to support the organization in redesigning its production environments to reach a better future state that delivers results in a short period of time. The main objective of the study is to apply the VSM tool as a method for eliminating the wastes (muda) that prevent the achievement of a Lean chain, in the specific case of a production system for sound-deadening felts. The first part of the project introduces the reader to the theory of Lean thinking (its principles and its objectives) as a theoretical framework. It details the procedure and the requirements for correctly drawing up the current-state VSM, for its analysis, and for representing the future state. The second part of the project presents the stages that make up the production line under study and develops the Value Stream Map, which reveals the inefficiencies in the flows that constitute the production line. Finally, the flows are analyzed, the losses in the chain are identified, and, based on these, projects and actions are designed and proposed to establish lines of action toward a better future state.
The study has demonstrated the validity of the VSM as a tool for achieving improvements in the productivity, competitiveness, and profitability of the organization's various processes on the sound-deadening felt production line.
Abstract:
The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into a single consensus alignment. This allows the user to run all the methods of choice simultaneously without having to arbitrarily pick one of them. The MSA is delivered along with a local estimation of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302: 205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34: 1692-1699)]. Given a set of sequences (DNA or proteins) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is a freeware open source package distributed under a GPL license, and it is available either as a standalone package or as a web service from www.tcoffee.org.
Abstract:
Natural selection drives local adaptation, potentially even at small temporal and spatial scales. As a result, adaptive genetic and phenotypic divergence can occur among populations living in different habitats. We investigated patterns of differentiation between contrasting lake and stream habitats in the cyprinid fish European minnow (Phoxinus phoxinus) at both the morphological and genomic levels using geometric morphometrics and AFLP markers, respectively. We also used a spatial correlative approach to identify AFLP loci associated with environmental variables representing potential selective forces responsible for adaptation to divergent habitats. Our results identified different morphologies between lakes and streams, with lake fish presenting a deeper body and caudal peduncle compared to stream fish. Body shape variation conformed to a priori predictions concerning biomechanics and swimming performance in lakes vs. streams. Moreover, morphological differentiation was found to be associated with several environmental variables, which could impose selection on body and caudal peduncle shape. We found adaptive genetic divergence between these contrasting habitats in the form of 'outlier' loci (2.9%) whose genetic divergence exceeded neutral expectations. We also detected additional loci (6.6%) not associated with habitat type (lake vs. stream), but contributing to genetic divergence between populations. Specific environmental variables related to trophic dynamics, landscape topography and geography were associated with several neutral and outlier loci. These results provide new insights into the morphological divergence and genetic basis of adaptation to differentiated habitats.
Abstract:
Summary of stream water quality data collected in 2013.
Abstract:
Summary of stream water quality data collected from 2000 through 2013.
Abstract:
Summary of stream water quality data collected in 2014.
Abstract:
Summary of stream water quality data collected from 2000 through 2014.
Abstract:
We consider the numerical treatment of the optical flow problem by evaluating the performance of the trust region method versus the line search method. To the best of our knowledge, this is the first time the trust region method has been studied for variational optical flow computation. Four different optical flow models are used to test the performance of the proposed algorithm, combining linear and nonlinear data terms with quadratic and TV regularization. We show that trust region often performs better than line search, especially in the presence of non-linearity and non-convexity in the model.
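For readers unfamiliar with the distinction, a trust-region iteration minimizes a local quadratic model inside a radius that grows or shrinks according to how well the model predicted the actual decrease, whereas line search fixes a direction first and then picks a step length. The following is a minimal 1-D sketch of the trust-region idea, purely illustrative and not the paper's variational optical-flow solver:

```python
def trust_region_min(f, grad, hess, x, delta=1.0, tol=1e-8, max_iter=200):
    """Minimal 1-D trust-region Newton iteration (illustrative sketch)."""
    for _ in range(max_iter):
        g, h = grad(x), hess(x)
        if abs(g) < tol:
            break
        # Minimize the quadratic model m(p) = f(x) + g*p + h*p**2/2
        # subject to the trust-region constraint |p| <= delta.
        if h > 0 and abs(g / h) <= delta:
            p = -g / h                       # interior Newton step
        else:
            p = -delta if g > 0 else delta   # step to the boundary
        predicted = -(g * p + 0.5 * h * p * p)
        actual = f(x) - f(x + p)
        rho = actual / predicted if predicted > 0 else -1.0
        if rho > 0.25:
            x += p                           # model agreed well: accept
            if rho > 0.75 and abs(p) == delta:
                delta *= 2.0                 # very good: expand the region
        else:
            delta *= 0.25                    # model untrustworthy: shrink
    return x
```

The ratio `rho` of actual to predicted reduction is what distinguishes the method: a poor ratio shrinks the region and rejects the step, which is exactly what helps on the non-convex models mentioned above.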
Abstract:
The goal of this study was to investigate the impact of computing parameters and of the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement, namely the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI and the structured noise, was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (along the phantom radius) than in the z-direction. A 25 × 25 × 40 mm³ VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This showed that the VOI size and location play a major role in the determination of NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
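The core computation whose parameters the study tunes can be sketched as an ensemble-averaged periodogram over VOIs. This is a generic NPS estimator, assuming NumPy is available; the function name is hypothetical, and simple mean subtraction stands in for the first-order detrending used in the study:

```python
import numpy as np

def nps_3d(vois, b=(0.391, 0.391, 0.5)):
    """Ensemble-averaged 3-D noise power spectrum from a list of equally
    sized noise VOIs. b = voxel sampling distances (bx, by, bz) in mm.
    Mean subtraction below is a simplification of first-order detrending."""
    bx, by, bz = b
    acc = None
    for v in vois:
        v = v - v.mean()                    # crude detrend (simplified)
        nx, ny, nz = v.shape
        # Periodogram scaled by voxel volume over sample count, so the
        # NPS integrates to the noise variance (Parseval's theorem).
        spec = np.abs(np.fft.fftn(v)) ** 2 * (bx * by * bz) / (nx * ny * nz)
        acc = spec if acc is None else acc + spec
    return acc / len(vois)
```

With this normalization, summing the NPS over all frequency bins (bin volume 1 / (Nx·bx·Ny·by·Nz·bz)) recovers the variance of the detrended noise, which is a convenient sanity check for any chosen Lx,y,z and bx,y,z.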
Abstract:
The purpose of this manual is to provide guidelines for low water stream crossings (LWSC). Rigid criteria for determining the applicability of a LWSC to a given site are not established, nor is a 'cookbook' procedure for designing a LWSC presented. Because conditions vary from county to county and from site to site within a county, judgment must be applied to the suggestions contained in this manual. A LWSC is a stream crossing that will be flooded periodically and closed to traffic. Carstens (1981) has defined a LWSC as "a ford, vented ford (one having some number of culvert pipes), low water bridge, or other structure that is designed so that its hydraulic capacity will be insufficient one or more times during a year of normal rainfall." In this manual, LWSC are subdivided into three main types: unvented fords, vented fords, and low water bridges. Within the channel banks, an unvented ford can have its road profile coincident with the stream bed or can have its profile raised some height above the stream bed.