953 results for Peak-to-average Ratio (par)
Abstract:
Sediment traps were deployed inside the anoxic inner basin of Effingham Inlet and at the oxygenated mouth of the inlet from May 1999 to September 2000 in a pilot study to determine the annual depositional cycle and impact of the 1999-2000 La Niña event within a western Canadian inlet facing the open Pacific Ocean. Total mass flux, geochemical parameters (carbon, nitrogen, opal, major and minor element contents, and stable isotope ratios) and diatom assemblages were determined and compared with meteorological and oceanographic data. Deposition was seasonal, with coarser grained terrestrial components and benthic diatoms settling in the autumn and winter, coincident with the rainy season. Marine sedimentary components and abundant pelagic diatoms were coincident with coastal upwelling in the spring and summer. Despite the seasonal differences in deposition, the typical temperate-zone Thalassiosira-Skeletonema-Chaetoceros bloom succession was muted. A July 1999 total mass flux peak and an increase in biogenous components coincided with a rare bottom-water oxygen renewal event in the inlet. At the same time, there were cooler-than-average sea surface temperatures (SSTs) just outside the inlet and unusually high abundances of a previously undescribed cool-water marine diatom (Fragilariopsis pacifica sp. nov.) within the inlet. Each of these occurrences likely reflects a response to the strong La Niña that developed in the year following the strongest El Niño event ever recorded, that of 1997-1998. By the autumn of 1999, SSTs had returned to average and F. pacifica had all but disappeared from the remaining trap record, indicating that oceanographic conditions had returned to normal. Oxygenation events were not witnessed in the inlet in the years before or after 1999, suggesting that a rare oceanographic and climatic event was captured by this sediment trap time series. The data from this record can therefore be used as a benchmark for identifying anomalous environmental conditions on this coast.
Abstract:
The capacity of the East Asian seaweed Gracilaria vermiculophylla ("Ogonori") for production of prostaglandin E2 from arachidonic acid occasionally causes food poisoning after ingestion. During the last two decades the alga has been introduced to Europe and North America. Non-native populations have been shown to be generally less palatable to marine herbivores than native populations. We hypothesized that the difference in palatability among populations could be due to differences in the algal content of prostaglandins. We therefore compared the capacity for wound-activated production of prostaglandins and other eicosatetraenoid oxylipins among five native populations in East Asia and seven non-native populations in Europe and NW Mexico, using a targeted metabolomics approach. In two independent experiments, non-native populations exhibited a significant tendency to produce more eicosatetraenoids than native populations after acclimation to identical conditions and subsequent artificial wounding. Fourteen out of 15 eicosatetraenoids that were detected in experiment I and all 19 eicosatetraenoids that were detected in experiment II reached higher mean concentrations in non-native than in native specimens. The datasets generated in both experiments are contained in http://doi.pangaea.de/10.1594/PANGAEA.855008. Wounding of non-native specimens resulted on average in 390% more 15-keto-PGE2, 90% more PGE2, 37% more PGA2 and 96% more 7,8-di-hydroxy eicosatetraenoic acid than wounding of native specimens. The dataset underlying this statement is contained in http://doi.pangaea.de/10.1594/PANGAEA.854847. Not only PGE2, but also PGA2 and dihydroxylated eicosatetraenoic acid are known to deter various biological enemies of G. vermiculophylla that cause tissue or cell wounding, and in the present study the latter two compounds also repelled the mesograzer Littorina brevicula. The dataset underlying this statement is contained in http://doi.pangaea.de/10.1594/PANGAEA.854922. Non-native populations of G. vermiculophylla are thus better defended against herbivory than native populations. This increased capacity for activated chemical defense may have contributed to their invasion success, and at the same time it poses an elevated risk for human food safety.
Abstract:
This paper presents a scientific and technical description of the modelling framework and the main results of modelling the long-term average sediment delivery at hillslope to medium-scale catchments over the entire Murray Darling Basin (MDB). A theoretical development that relates long-term averaged sediment delivery to the statistics of rainfall and catchment parameters is presented. The derived flood frequency approach was adapted to investigate the problem of regionalization of the sediment delivery ratio (SDR) across the Basin. SDR, a measure of catchment response to the upland erosion rate, was modelled by two lumped linear stores arranged in series: hillslope transport to the nearest streams and flow routing in the channel network. The theory shows that the ratio of catchment sediment residence time (SRT) to average effective rainfall duration is the most important control on the sediment delivery processes. In this study, catchment SRTs were estimated using the travel time for overland flow multiplied by an enlargement factor that is a function of particle size. Rainfall intensity and effective duration statistics were regionalized using long-term measurements from 195 pluviograph sites within and around the Basin. Finally, the model was implemented across the MDB using spatially distributed soil, vegetation, topographic and land use properties in a Geographic Information System (GIS) environment. The results predict strong variations in SDR, from close to 0 in floodplains to 70% in the eastern uplands of the Basin. (c) 2005 Elsevier Ltd. All rights reserved.
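The abstract does not reproduce the closed-form expressions, but the structure it describes (two lumped linear stores in series, controlled by the ratio of sediment residence time to effective rainfall duration) can be sketched as follows. The first-order attenuation 1/(1 + SRT/t_r) and all parameter values below are illustrative assumptions, not the flood-frequency-derived formulas of the paper.

# Illustrative sketch only: SDR from two lumped linear stores in series
# (a hillslope store followed by a channel-network store), assuming a
# simple first-order attenuation 1 / (1 + SRT / t_r). The expressions
# actually derived in the paper may differ.

def store_delivery_ratio(srt_hours, rain_duration_hours):
    # Fraction of mobilised sediment passed through one linear store.
    return 1.0 / (1.0 + srt_hours / rain_duration_hours)

def catchment_sdr(srt_hillslope, srt_channel, rain_duration):
    # Two stores in series: hillslope transport, then channel routing.
    return (store_delivery_ratio(srt_hillslope, rain_duration)
            * store_delivery_ratio(srt_channel, rain_duration))

# Hypothetical example: residence times that are long relative to the
# effective rainfall duration give a very low SDR, as in floodplains.
print(catchment_sdr(srt_hillslope=50.0, srt_channel=200.0, rain_duration=2.0))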
Abstract:
Purpose: Tissue Doppler strain rate imaging (SRI) has been validated and applied in various clinical settings, but the clinical use of this modality is still limited by time-consuming postprocessing, an unfavorable signal-to-noise ratio and the strong angle dependency of image acquisition. 2D Strain (2DS) measures strain parameters through automated tissue tracking (Lagrangian strain) rather than regression of tissue velocities. We sought to compare the accuracy of this technique with SRI and to evaluate whether it overcomes the above limitations. Methods: We assessed 26 patients (13 female, age 60±5 yrs) at low risk of CAD and with normal DSE at both baseline and peak stress. End-systolic strain (ESS), peak systolic strain rate (SR), and timing parameters were measured by two independent observers using SRI and 2D Strain. Myocardial segments were excluded from the analyses if the insonation angle exceeded 30 degrees or if the segments were not visualized; 417 segments were evaluated. Results: Normal ranges for the TVI and CEB approaches were comparable for SR (-0.99 ± 0.39 vs -0.88 ± 0.36, p=NS), ESS (-15.1 ± 6.5 vs -14.9 ± 6.3, p=NS), time to end of systole (174 ± 47 vs 174 ± 53, p=NS) and time to peak SR (TTP; 340 ± 34 vs 375 ± 57). The best correlations between the techniques were for time to end systole (rest r=0.6, p
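As background (standard definitions from the deformation-imaging literature, not results of this study): 2DS tracks a segment of initial length L0 to its instantaneous length L and reports the Lagrangian strain ε = (L - L0)/L0, whereas tissue Doppler estimates strain rate from the spatial velocity gradient, SR ≈ (v1 - v2)/d, where v1 and v2 are myocardial velocities sampled a distance d apart along the ultrasound beam. This difference is why speckle-tracking-based 2DS is largely angle-independent, while the velocity-gradient approach degrades when the beam-to-wall angle is large.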
Abstract:
This study characterizes the visually evoked magnetic response (VEMR) to pattern onset/offset stimuli, using a single channel BTi magnetometer. The influence of stimulus parameters and recording protocols on the VEMR is studied, with inferences drawn about the nature of cortical processing, its origins and optimal recording strategies. Fundamental characteristics are examined, such as the behaviour of successive averaged and unaveraged responses; the effects of environmental shielding; averaging; inter- and intrasubject variability and equipment specificity. The effects of varying check size, field size, contrast and refractive error on latency, amplitude and topographic distribution are also presented. Latency and amplitude trends are consistent with previous VEP findings and known anatomical properties of the visual system. Topographic results are consistent with the activity of sources organised according to the cruciform model of striate cortex. A striate origin for the VEMR is also suggested by the results to quarter, octant and annulus field stimuli. Similarities in the behaviour and origins of the sources contributing to the CIIm and CIIIm onset peaks are presented for a number of stimulus conditions. This would be consistent with differing processing events in the same or similar neuronal populations. Focal field stimuli produce less predictable responses than full or half fields, attributable to a reduced signal-to-noise ratio and an increased sensitivity to variations in cortical morphology. Problems with waveform peak identification are encountered for full field stimuli; these can only be resolved by careful choice of stimulus parameters, by comparison with half field responses or by reference to the topographic distribution of each waveform peak. An anatomical study of occipital lobe morphology revealed large inter- and intrasubject variation in calcarine fissure shape and striate cortex distribution. An appreciation of such variability is important for VEMR interpretation, due to the technique's sensitivity to source depth and orientation, and it is used to explain the experimental results obtained.
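One quantitative point underlying the averaging results (a general property of evoked-response averaging, not a finding specific to this work): averaging N stimulus-locked epochs leaves the phase-locked evoked component unchanged while the amplitude of uncorrelated background noise falls as 1/sqrt(N), so SNR_averaged ≈ sqrt(N) × SNR_single. Averaging 100 epochs therefore buys roughly a tenfold SNR improvement, which is why unaveraged VEMRs are only interpretable under favourable shielding and recording conditions.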
Abstract:
We study the statistics of optical data transmission in a noisy nonlinear fiber channel with weak dispersion management and zero average dispersion. Applying analytical expressions for the output probability density functions, both for a nonlinear channel and for a linear channel with additive and multiplicative noise, we calculate in closed form a lower-bound estimate of the Shannon capacity for an arbitrary signal-to-noise ratio.
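The bound itself is not reproduced in the abstract. As a point of reference only (this is the textbook result for a linear additive-white-Gaussian-noise channel, not the paper's nonlinear-channel bound), the Shannon capacity of the linear channel is C = log2(1 + SNR) bits per symbol per dimension; the paper's contribution is a closed-form lower bound that plays the analogous role for the nonlinear, weakly dispersion-managed fiber channel with zero average dispersion.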
Abstract:
Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for signal-dependent noise (AAS, BM3Dc, HHM, TLS) and for independent additive noise (AV, BM3D, K-SVD) was presented. Simulated test images degraded by various levels of Poisson quantum noise and real clinical fluoroscopic images were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. The performance of the algorithms was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising of both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively. © 2012 Elsevier Ltd. All rights reserved.
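The quality metrics listed above are standard; the short numpy sketch below shows how MSE, PSNR and SNR are typically computed for 8-bit images (SSIM involves local luminance, contrast and structure terms and is usually taken from an existing implementation, so it is omitted here). The array names and the 8-bit peak value are illustrative assumptions, not taken from the paper.

import numpy as np

def mse(reference, estimate):
    # Mean square error between a noise-free reference and a denoised estimate.
    ref = reference.astype(np.float64)
    est = estimate.astype(np.float64)
    return np.mean((ref - est) ** 2)

def psnr(reference, estimate, peak=255.0):
    # Peak signal-to-noise ratio in dB; peak = 255 for 8-bit images.
    return 10.0 * np.log10(peak ** 2 / mse(reference, estimate))

def snr(reference, estimate):
    # Signal-to-noise ratio in dB: mean signal power over residual error power.
    signal_power = np.mean(reference.astype(np.float64) ** 2)
    return 10.0 * np.log10(signal_power / mse(reference, estimate))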
Abstract:
Although the consensus among academic economists, regarded for many decades as universally valid, about the necessarily negative employment effect of the minimum wage broke down in the first half of the 1990s, every economist still predicts that a minimum wage set too high will reduce employment. In our study we examine and evaluate Hungarian minimum wage regulation from the standpoint of this effect. / === / Although the long-held view of an unambiguously negative employment effect of a binding minimum wage was challenged by empirical findings in the early 1990s, it is unanimously predicted that if the minimum wage is set too high it will bring about adverse employment effects. Accordingly, our study starts from an evaluation of the magnitude of the Hungarian minimum wage, i.e., of how it relates to minimum wage rates elsewhere and of how it has developed through time. Next we inspect the main features that characterize the Hungarian system of minimum wage regulation. Theoretical views on the potential employment effect of minimum wage regulation are then surveyed and contrasted with empirical findings. The study concludes with policy recommendations. To sum up the main strand of arguments, we try to demonstrate that even though Hungary's minimum wage, if assessed by its ratio to average and/or median full-time earnings, does not appear particularly high by international standards, it might rightly be regarded as unreasonably high in light of Hungary's excessively low relative rate of employment among the least schooled. This diagnosis becomes particularly evident once one takes into account that, in sharp contrast to established rules elsewhere, a significantly higher wage floor is in effect for those with lower secondary schooling. Abolition of this legally guaranteed premium over the minimum wage, as well as more moderation in minimum wage adjustments, is thus highly recommended.
Abstract:
This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model used in an MTO operation, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and are mostly addressed with heuristic solutions. The first phase of the study is focused on a formal definition of the problem. Mathematical programming techniques are applied to model this problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is applied to solve the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation. The computational results show that solving problems of industrial size with a commercial solver is not practical. The second phase of this study is focused on developing an effective solution approach to the large-scale version of this problem. The proposed solution approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems. Using computer simulation as the tool, an experiment is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted most profit rule performs best. The shifting bottleneck and earliest operation finish time rules both perform best for scheduling. For lot sizing, the proposed minimum cost increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high. The proposed minimum cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio. The proposed heuristic is applied to an industrial case to further evaluate its performance. The result shows that it improves total profit by an average of 16.62%. This research contributes to the production planning research community a complete mathematical definition of the problem and an effective solution approach to solving problems of industrial scale.
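The abstract names the "weighted most profit" order-selection rule without defining it. Purely as an illustration of how a rule-based selection step might sit inside the iterative order-selection / lot-sizing / scheduling loop, the sketch below ranks candidate orders by profit per unit of bottleneck-machine time and accepts them greedily until capacity is exhausted; the field names and the profit-per-capacity weighting are assumptions, not the dissertation's definitions.

# Hypothetical illustration of a rule-based order-selection step.
# "Weighted most profit" is interpreted here as profit per unit of
# bottleneck-machine time, which may differ from the rule actually proposed.

def select_orders(orders, bottleneck_capacity):
    # orders: list of dicts with 'id', 'profit', 'bottleneck_time', 'due_date'.
    ranked = sorted(orders, key=lambda o: o["profit"] / o["bottleneck_time"],
                    reverse=True)
    accepted, used = [], 0.0
    for order in ranked:
        if used + order["bottleneck_time"] <= bottleneck_capacity:
            accepted.append(order["id"])
            used += order["bottleneck_time"]
    return accepted

candidates = [
    {"id": "A", "profit": 900.0, "bottleneck_time": 10.0, "due_date": 5},
    {"id": "B", "profit": 500.0, "bottleneck_time": 4.0, "due_date": 3},
    {"id": "C", "profit": 300.0, "bottleneck_time": 8.0, "due_date": 7},
]
print(select_orders(candidates, bottleneck_capacity=15.0))  # ['B', 'A']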
Abstract:
At four sites in the central equatorial Pacific Ocean the flux of extraterrestrial ³He, determined using the excess ²³⁰Th profiling method, is 8 × 10⁻¹³ cm³ STP/cm²/ka. This supply rate is constant to within 30%. At these same sites, however, the burial rate of ³He, determined using chronostratigraphic accumulation rates, varies by more than a factor of 3. The lowest burial rates, which occur north of the equator at 1°N, 139°W, are lower than the global average rate of supply of extraterrestrial ³He by 20% and indicate that sediment winnowing may have occurred. The highest burial rates, which are recorded at the equator and at 2°S, are higher than the rate of supply of extraterrestrial ³He by 100%, and these provide evidence for sediment focusing. By analyzing several proxies measured in core PC72 sediments spanning the past 450 kyr we demonstrate that periods of maximum burial rates of ²³⁰Th, ³He, ¹⁰Be, Ti, and barite, with a maximum peak-to-trough amplitude of a factor of 6, take place systematically during glacial time. However, the ratio of any one proxy to another is constant to within 30% over the entire length of the records. Given that each proxy represents a different source (²³⁴U decay in seawater, interplanetary dust, upper atmosphere, continental dust, or upper ocean), our preferred interpretation for the covariation is that the climate-related changes in burial rates are driven by changes in sediment focusing.
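The winnowing/focusing argument can be summarized as a single ratio (standard practice in constant-flux-proxy studies; the numbers below simply restate the abstract's percentages): focusing factor = burial flux from chronostratigraphy / supply flux from the constant-flux proxy (excess ²³⁰Th or extraterrestrial ³He). A burial rate 20% below the supply rate corresponds to a factor of about 0.8, indicating winnowing, while a burial rate 100% above the supply rate corresponds to a factor of about 2, indicating focusing.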
Abstract:
Measurements of the stable isotopic composition (δD(H2) or δD) of atmospheric molecular hydrogen (H2) are a useful addition to mixing ratio (X(H2)) measurements for understanding the atmospheric H2 cycle. The δD datasets published so far consist mostly of observations at background locations. We complement these with observations from the Cabauw tall tower at the CESAR site, situated in a densely populated region of the Netherlands. Our measurements show a large anthropogenic influence on the local H2 cycle, with frequently occurring pollution events that are characterized by X(H2) values that reach up to 1 ppm and low δD values. An isotopic source signature analysis yields an apparent source signature below -400 per mil, which is much more D-depleted than the fossil fuel combustion source signature commonly used in H2 budget studies. Two diurnal cycles that were sampled at a suburban site near London also show a more D-depleted source signature (-340 per mil), though not as extremely depleted as at Cabauw. The source signature of the Northwest European vehicle fleet may have shifted to somewhat lower values due to changes in vehicle technology and driving conditions. Even so, the surprisingly depleted apparent source signature at Cabauw requires additional explanation; microbial H2 production seems the most likely cause. The Cabauw tower site also allowed us to sample vertical profiles. We found no decrease in X(H2) at the lower sampling levels (20 and 60 m) with respect to the higher sampling levels (120 and 200 m). There was a significant shift to lower median δD values at the lower levels. This confirms the limited role of soil uptake around Cabauw, and again points to microbial H2 production during an extended growing season, as well as to possible differences in the average fossil fuel combustion source signature between the different footprint areas of the sampling levels. So, although knowledge of the background cycle of H2 has improved over the last decade, surprising features come to light when a non-background location is studied, revealing remaining gaps in our understanding.
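The abstract does not state how the apparent source signature was derived. One common approach for two-component mixing is a Keeling-style regression, in which observed δD is regressed against 1/X(H2) and the intercept at 1/X(H2) -> 0 estimates the source signature. The sketch below, with made-up sample values, illustrates that approach only; it is not necessarily the method used in the study.

import numpy as np

# Hypothetical pollution-event samples: H2 mixing ratio (ppb) and dD (per mil).
x_h2 = np.array([530.0, 600.0, 700.0, 850.0, 1000.0])   # ppb
delta_d = np.array([130.0, 80.0, 20.0, -40.0, -90.0])    # per mil

# Keeling plot: delta_obs = delta_source + b * (1 / X_obs).
# The intercept at 1/X -> 0 is the apparent source signature.
slope, intercept = np.polyfit(1.0 / x_h2, delta_d, 1)
print(f"apparent source signature ~ {intercept:.0f} per mil")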
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Conventional concrete (CC) has numerous problems, such as corrosion of the reinforcing steel and the low strength of concrete structures. Consequently, most structures made with CC require frequent maintenance. Ultra-high-performance fibre-reinforced concrete (UHPFRC) can be designed to eliminate some of the characteristic weaknesses of CC. UHPFRC is defined worldwide as a concrete with superior mechanical properties, ductility and durability. Classical UHPFRC contains between 800 kg/m³ and 1000 kg/m³ of cement, 25 to 35% by mass (wt%) of silica fume (SF), 0 to 40 wt% of quartz powder (QP) and 110-140 wt% of quartz sand (QS) (mass percentages are based on the total cement mass of the mixes). UHPFRC contains steel fibres to improve its ductility and its tensile strength. The large quantities of cement used to produce UHPFRC not only raise production costs and the consumption of natural resources such as limestone, clay, coal and electrical energy, but also increase environmental damage because of the substantial production of greenhouse gases, including carbon dioxide (CO₂). Moreover, the particle-size distribution of the cement leaves microscopic voids that can be filled with finer materials such as SF. However, a large quantity of SF is needed to fill these voids with SF alone (25 to 30 wt% of the cement), which leads to high costs since SF is a limited resource. SF also significantly reduces the workability of UHPFRC because of its high Blaine specific surface area. The use of QP and QS is likewise costly and consumes significant natural resources. QP and QS are moreover regarded as obstacles to the large-scale use of UHPFRC in the concrete market, because they fail to satisfy environmental requirements. Indeed, an Environment Canada report states that quartz causes immediate and long-term environmental damage because of its biological effect. UHPFRC is generally sold on the market as a prepackaged product, which limits design modifications by the user. It is normally transported over long distances, unlike the constituents of CC. This also contributes to greenhouse gas generation and leads to a higher cost of the final product. Consequently, there is a need to develop other locally available materials with similar functions to partially or totally replace the silica fume, quartz sand or quartz powder, and thus to reduce the cement content of UHPFRC, while achieving comparable or better properties. Large quantities of waste glass cannot be recycled because of their fragility, their colour, or high recycling costs. Most waste glass goes to landfill, which is undesirable since it is a non-biodegradable and therefore less environmentally friendly material. In recent years, studies have been carried out on using waste glass as an alternative supplementary cementitious material or as ultrafine aggregate in concrete, depending on its particle-size distribution and chemical composition.
This thesis presents a new type of ultra-high-performance waste-glass eco-concrete (UHPGC) developed at the Université de Sherbrooke. The concretes were designed using waste glass of various particle sizes and particle-packing optimization of the granular and cementitious matrices. UHPGC can be designed with reduced quantities of cement (400 to 800 kg/m³), SF (50 to 220 kg/m³), QP (0 to 400 kg/m³) and QS (0-1200 kg/m³), while incorporating various glass-waste products: glass sand (GS) (0-1200 kg/m³) with a mean diameter (d50) of 275 µm, a large quantity of glass powder (GP) (200-700 kg/m³) with a d50 of 11 µm, and a moderate content of fine glass powder (FGP) (50-200 kg/m³) with a d50 of 3.8 µm. UHPGC also contains steel fibres (to increase tensile strength and improve ductility), superplasticizer (10-60 kg/m³) and a water-to-binder ratio (W/B) as low as that of UHPFRC. Replacing cement and SF particles with smooth, non-absorbent glass particles improves the rheology of UHPGC. In addition, using FGP in place of SF reduces the net total specific surface area of the SF-FGP blend. Since the net specific surface area of the particles decreases, less water is needed to lubricate the particle surfaces, which yields a higher slump for the same W/B. Moreover, the use of waste glass in concrete lowers the cumulative heat of hydration, which helps minimize potential shrinkage cracking. Depending on the UHPGC composition and the curing temperature, this type of concrete can reach compressive strengths of 130 to 230 MPa, flexural strengths above 20 MPa, tensile strengths above 10 MPa and a modulus of elasticity above 40 GPa. The mechanical performance of UHPGC is enhanced by the reactivity of the amorphous glass, the particle-size optimization and the densification of the mixes. The glass-waste products in UHPGC behave pozzolanically and react with the portlandite generated by cement hydration. This is not the case, however, for the quartz sand or the quartz powder in classical UHPFRC, which react only at the elevated temperature of 400 °C. The addition of waste glass improves the densification of the interface between particles. Waste-glass particles have high stiffness, which increases the modulus of elasticity of the concrete. UHPGC also has very good durability. Its capillary porosity is very low, and the material is extremely resistant to chloride-ion penetration (≈ 8 coulombs). Its abrasion resistance (volume-loss index) is below 1.3. UHPGC undergoes practically no deterioration under freeze-thaw cycling, even after 1000 cycles. After laboratory evaluation of UHPGC, scale-up was carried out with an industrial concrete mixer, followed by field validation through the construction of two footbridges. The superior mechanical properties of UHPGC made it possible to design the footbridges with cross-sections reduced by about 60% compared with sections made of CC. UHPGC offers several economic and environmental advantages.
It reduces the production cost and the carbon footprint of structures built with classical ultra-high-performance fibre-reinforced concrete (UHPFRC) by using locally available materials. It reduces the CO₂ emissions associated with cement clinker production (50% cement replacement) and makes efficient use of natural resources. In addition, producing UHPGC reduces the quantities of waste glass stockpiled or landfilled that cause environmental problems, and could save millions of dollars that would otherwise be spent treating this waste. Finally, it offers construction companies an alternative for producing UHPFRC at lower cost.
Abstract:
Background: Intussusception is the invagination of a part of the intestine into itself and is the most common cause of intestinal obstruction in infants and children between 6 months and 3 years of age. Objectives: The objective of this study was to determine the recurrence rate and predisposing factors of recurrent intussusception. Patients and Methods: The medical records of children aged less than 13 years with confirmed intussusception who underwent reduction at a tertiary academic care center in northern Iran (Mazandaran), from 2001 to 2013, were reviewed. Data were extracted and the recurrence rate was determined. The two groups were compared by chi-square, Fisher's exact, Mann-Whitney and t-tests. The diagnosed cases of intussusception comprised 237 children. Results: The average age of the patients was 19.57 ± 19.43 months, with a peak between 3 and 30 months. The male-to-female ratio was 1.65 and increased with age. The recurrence rate was 16% (38 cases). Eighty-seven patients (36.7%) underwent surgery; these were mainly children under one year old. In 71% (40) of the episodes, recurrence occurred 1 to 7 times within 6 months. Recurrence occurred in 29 (23.5%) of the children whose first reduction was achieved with barium enema (BE) and in 5 (5.7%) of those who had an operative reduction in the first episode (P < 0.001). Pathological leading points (PLPs) were observed in 5 cases: 2.6% in the recurrence group versus 2% in the non-recurrence group (P = 0.91). Three patients had an intestinal polyp, and the remainder had lymphoma or a Meckel's diverticulum. Age (P = 0.77) and sex (P = 0.38) showed no difference between the two groups. PLPs were observed in 1.4% of children aged 3 months to 5 years, versus 13.3% in older children (P = 0.02). Conclusions: The recurrence of intussusception was related to the method of treatment in the first episode and was 5-fold higher after BE reduction than after operative reduction. Recurrent intussusceptions were not associated with PLPs; most were idiopathic.