881 results for Compressed workweek
Abstract:
The cooling process in conventional rotomolding is relatively long due to the poor thermal conductivity of plastics. Although rapid external cooling is possible, the lack of internal cooling is a major limitation. Various internal cooling methodologies have been studied to reduce the cycle time, including compressed air, cryogenic liquid nitrogen, chilled water coils, and cryogenic liquid carbon dioxide, all of which have limitations. This article demonstrates water spray cooling of polymers as a viable and effective method for internal cooling in rotomolding. To this end, hydraulic, pneumatic, and ultrasonic nozzles were applied and evaluated using a specially constructed test rig to assess their efficiency. The effects of nozzle type and different parametric settings on water droplet size, velocity, and mass flow rate were analyzed, and their influence on the cooling rate, surface quality, and morphology of polymer exposed to spray cooling was characterized. The pneumatic nozzle provided the highest average cooling rate, while the hydraulic nozzle gave the lowest. The ultrasonic nozzle, with medium droplet size traveling at low velocity, produced a satisfactory surface finish. Water spray cooling produced smaller spherulites compared with ambient cooling, whilst increasing the cooling rate decreased the percentage crystallinity. © 2011 Society of Plastics Engineers.
Abstract:
While the benefits of renewable energy are well known and are used to influence government policy, a number of problems arise from having significant quantities of renewable energy on an electricity grid. The most notable stems from its intermittent nature, which is often out of phase with end-user demand. This requires either efficient energy storage systems (e.g., battery technology, compressed air storage) or demand-side management units that can utilise power quickly for manufacturing operations. Herein, a system performing the conversion of synthetic biogas to synthesis gas using wind power and an induction heating system is shown. This approach demonstrates the feasibility of such techniques for stabilising the electricity grid while also providing a robust means of energy storage. The exemplar is also applicable to the production of hydrogen from the steam reforming of natural gas.
Abstract:
The intricate spatial and energy distribution of magnetic fields, self-generated during high-power laser irradiation (at Iλ² ∼ 10¹³–10¹⁴ W cm⁻² µm²) of a solid target, and of the heat-carrying electron currents, is studied in inertial confinement fusion (ICF) relevant conditions. This is done by comparing proton radiography measurements of the fields to an improved magnetohydrodynamic description that fully takes into account the nonlocality of the heat transport. We show that, in these conditions, magnetic fields are rapidly advected radially along the target surface and compressed over long time scales into the dense parts of the target. As a consequence, the electrons are weakly magnetized in most parts of the plasma flow, and we observe a reemergence of nonlocality, which is a crucial effect for a correct description of the energetics of ICF experiments.
Abstract:
We have resolved the solid-liquid phase transition of carbon at pressures around 150 GPa. High-pressure samples of different temperatures were created by laser-driven shock compression of graphite, varying the initial density from 1.30 g/cm³ to 2.25 g/cm³. In this way, temperatures from 5700 K to 14,500 K could be achieved at relatively constant pressure according to hydrodynamic simulations. By measuring the elastic X-ray scattering intensity of vanadium K-alpha radiation at 4.95 keV at a scattering angle of 126°, which is very sensitive to the solid-liquid transition, we can determine whether the sample had transitioned to the fluid phase. We find that samples of initial density 1.30 g/cm³ and 1.85 g/cm³ are liquid in the compressed states, whereas samples close to the ideal graphite crystal density of 2.25 g/cm³ remain solid, probably in a diamond-like state.
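As a quick consistency check on the probe geometry (our own sketch, not taken from the abstract; the energy-to-wavelength conversion is standard), the elastic momentum transfer k = (4π/λ) sin(θ/2) implied by 4.95 keV photons scattered at 126° can be computed:

```python
import math

# Photon wavelength from energy: lambda [Angstrom] = hc / E, with hc ~ 12.398 keV*Angstrom
E_keV = 4.95              # vanadium K-alpha energy used in the experiment
lam = 12.398 / E_keV      # wavelength in Angstrom

# Elastic momentum transfer k = (4*pi / lambda) * sin(theta / 2)
theta = math.radians(126.0)
k = 4.0 * math.pi * math.sin(theta / 2.0) / lam

print(f"lambda = {lam:.3f} A, k = {k:.2f} 1/A")
```

This places the probed momentum transfer at roughly 4.5 Å⁻¹, fixed by the choice of photon energy and scattering angle; the abstract itself only states that this configuration is very sensitive to the solid-liquid transition.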
Abstract:
Titanium dioxide coatings have potential applications including photocatalysts for solar-assisted hydrogen production, solar water disinfection, and self-cleaning windows. Herein, we report the use of suspension plasma spraying (SPS) for the deposition of conformal titanium dioxide coatings. The process utilises a nanoparticle slurry of TiO2 (ca. 6 and 12 nm, respectively) in water, which is fed into a high-temperature plasma jet (ca. 7000-20 000 K). This facilitated the deposition of adherent coatings of nanostructured titanium dioxide with a predominantly anatase crystal structure. In this study, suspensions of nano-titanium dioxide, made via continuous hydrothermal flow synthesis (CHFS), were used directly as a feedstock for the SPS process. Coatings were produced by varying the feedstock crystallite size, spray distance and plasma conditions. The coatings produced exhibited ca. 90-100% anatase phase content, with the remainder being rutile (demonstrated by XRD). Phase distribution was homogeneous throughout the coatings, as determined by micro-Raman spectroscopy. The coatings had a granular surface with a high specific surface area and consisted of densely packed agglomerates interspersed with some melted material. All of the coatings were shown to be photoactive by means of a sacrificial hydrogen evolution test under UV radiation and compared favourably with reported values for CVD coatings and compressed discs of P25.
Abstract:
The channel-based model of duration perception postulates the existence of neural mechanisms that respond selectively to a narrow range of stimulus durations centred on their preferred duration (Heron et al., Proceedings of the Royal Society B, 279, 690–698). In principle, the channel-based model could explain recent reports of adaptation-induced visual duration compression effects (Johnston et al., Current Biology, 16, 472–479; Curran and Benton, Cognition, 122, 252–257); from this perspective, duration compression is a consequence of the adapting stimuli being presented for a longer duration than the test stimuli. In the current experiment, observers adapted to a sequence of moving random dot patterns at the same retinal position, each 340 ms in duration and separated by a variable (500–1000 ms) interval. Following adaptation, observers judged the duration of a 600 ms test stimulus at the same location. The test stimulus moved in the same, or opposite, direction as the adaptor. Contrary to the channel-based model's prediction, test stimulus duration appeared compressed, rather than expanded, when it moved in the same direction as the adaptor. That test stimulus duration was not distorted when moving in the opposite direction further suggests that visual timing mechanisms are influenced by additional neural processing associated with the stimulus being timed.
Abstract:
Purpose: To assess the bacterial contamination risk in cataract surgery associated with mechanical compression of the lid margin immediately after sterilization of the ocular surface.
Setting: Department of Cataract, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.
Design: Prospective randomized controlled double-masked trial.
Methods: Patients with age-related cataract were randomly assigned to 1 of 2 groups. In Group A (153 eyes), the lid margin was compressed and scrubbed for 360 degrees 5 times with a dry sterile cotton-tipped applicator immediately after ocular sterilization and before povidone-iodine irrigation of the conjunctival sac. Group B (153 eyes) had identical sterilization but no lid scrubbing. Samples from the lid margin, liquid in the collecting bag, and aqueous humor were collected for bacterial culture. Primary outcome measures included the rate of positive bacterial culture for the above samples. The species of bacteria isolated were recorded.
Results: Group A and Group B each comprised 153 eyes. The positive rate of lid margin cultures was 54.24%. The positive rate of cultures for liquid in the collecting bag was significantly higher in Group A (23.53%) than in Group B (9.80%) (P = .001). The bacterial species cultured from the collecting bag in Group B were the same as those from the lid margin in Group A. The positive culture rate of aqueous humor in both groups was 0%.
Conclusion: Mechanical compression of the lid margin immediately before and during cataract surgery increased the risk for bacterial contamination of the surgical field, perhaps due to secretions from the lid margin glands.
Financial Disclosure: No author has a financial or proprietary interest in any material or method mentioned.
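The reported group difference (23.53% vs 9.80%, P = .001) can be reproduced with a standard two-proportion chi-square test. The sketch below reconstructs the counts from the reported percentages (36/153 and 15/153 positive cultures — our inference, not stated in the abstract) and evaluates the 1-degree-of-freedom p-value with the complementary error function:

```python
import math

# Counts inferred from the reported percentages (36/153 = 23.53%, 15/153 = 9.80%)
pos_a, n_a = 36, 153
pos_b, n_b = 15, 153

# 2x2 chi-square test of homogeneity (no continuity correction)
pooled = (pos_a + pos_b) / (n_a + n_b)
observed = [pos_a, n_a - pos_a, pos_b, n_b - pos_b]
expected = [n_a * pooled, n_a * (1 - pooled), n_b * pooled, n_b * (1 - pooled)]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# For 1 degree of freedom, the chi-square survival function is erfc(sqrt(chi2 / 2))
p = math.erfc(math.sqrt(chi2 / 2.0))
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

The resulting p-value of about 0.0013 is consistent with the P = .001 quoted in the Results.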
Abstract:
Collimated transport of a fast electron beam through solid-density matter is one of the key issues behind the success of the fast ignition scheme, by means of which the required amount of ignition energy can be delivered to the hot-spot region of the compressed fuel. Here we report on a hot-electron-beam collimation scheme in solids that exploits the strong magnetic fields generated by an electrical resistivity gradient according to Faraday's law. This was accomplished by fabricating the targets such that the electron beam is directed to flow in a metal embedded in a metal of much lower resistivity and atomic number. Experimental results, obtained from two experiments examining the scheme for petawatt-laser-driven hot electron beams with various target configurations, showed guided transport of the hot electron beam over hundreds of microns inside solid-density plasma.
Abstract:
The aim of this work was to carry out an optimization study of the exposure parameters of the GE Senographe DS digital mammography unit installed at the Instituto Português de Oncologia de Coimbra (IPOC), based on a contrast-detail analysis, and to evaluate the impact of the result on the clinical detectability of breast structures. The unit under study has an automatic exposure control system, the AOP (Automatic Optimization of Parameters) system, which was optimized by the manufacturer on the basis of the contrast-to-noise ratio. The AOP system is used in clinical practice in almost all mammography examinations performed at IPOC. Given the types of structure to be visualized in mammography, namely low-contrast structures such as masses and sub-millimetre structures such as microcalcifications, a contrast-detail analysis could constitute a more suitable approach for optimizing the exposure parameters, since it allows a joint assessment of contrast, spatial resolution, and image noise. In a first stage, the clinical practice on the unit under study was characterized in terms of "typical" compressed breast thickness, technical exposure parameters, and the image processing options applied by the AOP system (target/filter combination, tube voltage kVp, and tube current-time product mAs). In a second stage, an image quality versus dose optimization study was carried out from the standpoint of physical parameters. To this end, a contrast-detail analysis was performed on the CDMAM breast phantom, using a figure of merit defined from the IQFinv (inverted image quality figure) and the mean glandular dose.
The results pointed to a difference between the optimum point resulting from the optimization study and the point associated with the exposure parameters chosen by the AOP system, notably in the case of the small breast. Since image quality from the clinical perspective is a more complex concept, in which the judgement of a good-quality image must take inter-observer differences into account, the last part of this work studied the clinical impact of the proposed image quality optimization. Using images of the anthropomorphic phantom TOR MAM simulating a small breast, six radiologists with more than 5 years of experience in mammography evaluated the "optimized" images obtained with the technical exposure parameters resulting from the optimization study and the image resulting from the AOP system's choice. Statistical analysis of the radiologists' assessments indicates that the "optimized" image of the small breast provides better visualization of microcalcifications without loss of image quality in the detection of fibres and masses, compared with the "standard" image. This work introduced a new definition of the figure of merit for image quality versus dose optimization in mammography. It also established a consistent methodology that can easily be applied to any other mammography unit, contributing to optimization in digital mammography, one of the most relevant areas in patient radiological protection.
Abstract:
This thesis presents algorithms that estimate the spectrum of a non-periodic signal from a finite number of samples, taking advantage of the signal's sparsity in the frequency domain. This is a problem in which, owing to leakage effects, the success of traditional algorithms is limited. To overcome the problem, the proposed algorithms transform the DFT basis into a frame with a larger number of columns, by inserting a small number of columns between some of the original ones. These algorithms are based on the theory of compressed sensing, which makes it possible to acquire and represent sparse and compressible signals at a sampling rate well below the Nyquist rate. The proposed algorithms perform well in comparison with existing algorithms, standing out in the estimation of the spectrum of signals composed of sinusoids with very close frequencies and different amplitudes in the presence of noise, a particularly difficult situation in which the other algorithms fail.
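The core idea — enlarging the DFT basis into a frame with extra columns at fractional bin positions and exploiting sparsity — can be illustrated with a minimal sketch. This is our own construction, not the thesis's algorithms: an oversampled Fourier dictionary plus greedy sparse recovery (orthogonal matching pursuit) resolves two closely spaced off-grid sinusoids that a plain DFT would smear by leakage.

```python
import numpy as np

def dft_frame(n, oversample=4):
    """Overcomplete Fourier dictionary: unit-norm atoms on a grid of
    fractional bin positions, 'oversample' times denser than the DFT."""
    freqs = np.arange(0, n, 1.0 / oversample)
    t = np.arange(n)
    atoms = np.exp(2j * np.pi * np.outer(t, freqs) / n) / np.sqrt(n)
    return atoms, freqs

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms, refit by least squares."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        corr = np.abs(A.conj().T @ residual)
        corr[support] = 0.0                      # never pick the same atom twice
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return support, coef

n = 64
A, freqs = dft_frame(n)
t = np.arange(n)
# Two sinusoids only 2.5 DFT bins apart, both off the integer-bin grid
y = 1.0 * np.exp(2j * np.pi * 10.25 * t / n) + 0.3 * np.exp(2j * np.pi * 12.75 * t / n)
support, coef = omp(A, y, 2)
print(sorted(freqs[s] for s in support))         # recovered grid frequencies
```

With the 4x-oversampled frame, both fractional frequencies lie exactly on the dictionary grid and the two-atom fit reproduces the signal; a plain 64-point DFT would spread both components over many bins.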
Abstract:
Diffusion coefficients (D12) are fundamental properties in research and industry, but the scarcity of experimental data and the lack of equations that estimate them accurately and reliably in compressed or condensed phases are important limitations. The main objectives of this work are: i) the compilation of a large database of D12 values for gaseous, liquid, and supercritical systems; ii) the development and validation of new models for diffusion coefficients at infinite dilution, applicable over wide ranges of temperature and density, for systems containing components that differ widely in polarity, size, and symmetry; iii) the assembly and testing of an experimental set-up to measure diffusion coefficients in liquids and supercritical fluids. Regarding modelling, a new expression for the diffusion coefficients of hard spheres at infinite dilution was developed and validated using molecular dynamics data (average absolute relative deviation, AARD = 4.44%). Binary diffusion coefficients of real systems were also studied. For this purpose, an extensive database of diffusivities of real systems in gases and dense solvents was compiled (622 binary systems, totalling 9407 experimental points and 358 molecules) and used to validate the new models developed in this thesis. A set of new models was proposed for the calculation of diffusion coefficients at infinite dilution using different approaches: i) two molecularly based models with one system-specific parameter, applicable to gaseous, liquid, and supercritical systems, in which the solvent is restricted to nonpolar or weakly polar (global AARDs in the range 4.26-4.40%); ii) two molecularly based two-parameter models, applicable in all physical states, for any solute diluted in any solvent (nonpolar, weakly polar, and polar).
Both models yield global errors between 2.74% and 3.65%; iii) a one-parameter correlation specific to diffusion coefficients in supercritical carbon dioxide (SC-CO2) and liquid water (AARD = 3.56%); iv) nine empirical and semi-empirical two-parameter correlations, depending only on temperature and/or solvent density and/or solvent viscosity. These last models are very simple and give excellent results (AARDs between 2.78% and 4.44%) for liquid and supercritical systems; and v) two predictive equations for solute diffusivities in SC-CO2, both with global errors below 6.80%. Overall, it should be emphasized that the new models cover the wide variety of systems and molecules generally encountered. The results obtained are consistently better than those achieved with the models and approaches found in the literature. For the one- and two-parameter correlations, it was shown that these parameters can be fitted with a very small data set and then used to predict D12 values far from the original set of points. A new experimental set-up for measuring binary diffusion coefficients by chromatographic techniques was assembled and tested. The equipment, the experimental procedure, and the analytical calculations required to obtain D12 values by the chromatographic peak broadening method were evaluated by measuring the diffusivities of toluene and acetone in SC-CO2. Diffusion coefficients of eucalyptol in SC-CO2 were then measured in the ranges 202-252 bar and 313.15-333.15 K. The experimental results were analysed with correlations and predictive models for D12.
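As an illustration of how such two-parameter correlations are fitted and scored, the sketch below uses entirely synthetic data and a hypothetical Stokes-Einstein-type form D12 = a·T/η^b (not one of the thesis's actual models): a and b are fitted by linear least squares in log space and the result is scored with the AARD metric used throughout the thesis.

```python
import numpy as np

# Synthetic illustrative data: temperatures (K) and made-up solvent
# viscosities (Pa*s); D12 generated from a known hypothetical correlation
# plus 2% scatter standing in for experimental error.
rng = np.random.default_rng(1)
T = np.linspace(305.0, 335.0, 12)
eta = np.logspace(-5.0, -4.0, 12)
a_true, b_true = 2.0e-12, 0.8
D_exp = a_true * T / eta**b_true
D_exp = D_exp * (1 + 0.02 * rng.standard_normal(T.size))

# ln D - ln T = ln a + b * (-ln eta)  ->  linear in (ln a, b)
X = np.column_stack([np.ones_like(T), -np.log(eta)])
theta, *_ = np.linalg.lstsq(X, np.log(D_exp) - np.log(T), rcond=None)
a_fit, b_fit = np.exp(theta[0]), theta[1]

# Average absolute relative deviation, the error metric quoted in the abstract
D_calc = a_fit * T / eta**b_fit
aard = 100 * np.mean(np.abs(D_calc - D_exp) / D_exp)
print(f"a = {a_fit:.3e}, b = {b_fit:.3f}, AARD = {aard:.2f}%")
```

The point of the sketch is the workflow (fit few parameters on a small set, report AARD), which mirrors how the thesis's one- and two-parameter correlations are validated; the functional form and numbers here are illustrative only.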
Abstract:
Compressed sensing is a new paradigm in signal processing which states that, for certain matrices, sparse representations can be obtained by a simple l1-minimization. In this thesis we explore this paradigm for higher-dimensional signals. In particular, three cases are studied: signals taking values in a bicomplex algebra, quaternionic signals, and complex signals which are representable in a nonlinear Fourier basis, a so-called Takenaka-Malmquist system.
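The l1-minimization at the heart of compressed sensing extends naturally beyond real signals because the soft-thresholding step only shrinks magnitudes and preserves phases. A minimal sketch of the complex case (our own illustration with a random Gaussian matrix and accelerated iterative soft thresholding, not the thesis's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 48, 128, 4                       # measurements, ambient dimension, sparsity

# Complex Gaussian measurement matrix and a k-sparse complex signal
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x_true = np.zeros(n, dtype=complex)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
y = A @ x_true

def fista(A, y, lam=1e-3, iters=1000):
    """Accelerated iterative soft thresholding (FISTA) for the complex lasso
    min_x 0.5*||Ax - y||^2 + lam*||x||_1. The threshold shrinks magnitudes
    and keeps phases, so it carries over verbatim to complex-valued signals."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1], dtype=complex)
    s = 1.0
    for _ in range(iters):
        g = z - A.conj().T @ (A @ z - y) / L
        mag = np.abs(g)
        x_new = np.where(mag > lam / L, (1 - (lam / L) / np.maximum(mag, 1e-15)) * g, 0)
        s_new = (1 + (1 + 4 * s * s) ** 0.5) / 2
        z = x_new + ((s - 1) / s_new) * (x_new - x)
        x, s = x_new, s_new
    return x

x_hat = fista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error = {err:.2e}")
```

With 48 complex measurements of a 4-sparse vector in dimension 128, the l1 solution essentially coincides with the true signal; handling bicomplex or quaternionic values requires replacing this componentwise complex algebra with the appropriate one, which is where the thesis's contribution lies.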
Abstract:
Data compression is the computing technique that aims to reduce the size of information, in order to minimize the required storage space and to speed up data transmission over bandwidth-limited networks. Several compression techniques, such as LZ77 and its variants, suffer from a problem we call redundancy caused by the multiplicity of encodings. Multiplicity of encodings (ME) means that the source data can be encoded in different ways. In its simplest form, ME occurs when a compression technique has the possibility, during the encoding process, of coding a symbol in several ways. The bit-recycling compression technique was introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied to LZ77, and the experimental results obtained lead to better compression (a reduction of about 9% in the size of files compressed by Gzip, by exploiting ME). Dubé and Beaudoin pointed out that their technique may not minimize the redundancy caused by ME perfectly, because it is built on Huffman coding, which cannot handle codewords of fractional length; that is, it can only generate codewords of integral length. Moreover, Huffman-based bit recycling (HuBR) imposes additional constraints to avoid certain situations that degrade its performance. Unlike Huffman codes, arithmetic coding (AC) can handle codewords of fractional length. In addition, over recent decades arithmetic codes have attracted many researchers, since they are more powerful and more flexible than Huffman codes.
Consequently, this work aims to adapt bit recycling to arithmetic codes in order to improve coding efficiency and flexibility. We have addressed this problem through our four (published) contributions, which are presented in this thesis and can be summarized as follows. First, we propose a new technique for adapting Huffman-based bit recycling (HuBR) to arithmetic coding, named arithmetic-coding-based bit recycling (ACBR). It describes the framework and principles of adapting HuBR to ACBR. We also present the theoretical analysis needed to estimate the redundancy that can be removed by HuBR and ACBR in applications that suffer from ME. This analysis shows that ACBR achieves perfect recycling in all cases, whereas HuBR does so only in very specific cases. Second, the problem with the aforementioned ACBR technique is that it requires arbitrary-precision arithmetic, which demands unlimited (or infinite) resources. To make it practical, we propose a new finite-precision version. The technique thus becomes efficient and applicable on computers with conventional fixed-size registers, and can easily be interfaced with applications that suffer from ME. Third, we propose the use of HuBR and ACBR as a means of reducing redundancy in order to obtain a variable-to-fixed binary code. We have shown, theoretically and experimentally, that both techniques yield a significant improvement (less redundancy). In this regard, ACBR outperforms HuBR and covers a wider class of binary sources that can benefit from a plurally parsable dictionary. We also show that ACBR is more flexible than HuBR in practice.
Fourth, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm. To compare the performance of HuBR and ACBR, the corresponding theoretical results for both are presented. The results show that the two techniques achieve almost the same redundancy reduction on the balanced codes generated by Knuth's algorithm.
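The integral-length limitation of Huffman codes that motivates the move to arithmetic coding is easy to quantify: Huffman assigns each symbol a whole number of bits, while an arithmetic coder approaches the source entropy, which charges -log2(p) (generally fractional) bits per symbol. A minimal sketch with an illustrative distribution of our own choosing:

```python
import heapq
import math

def huffman_lengths(probs):
    """Return the Huffman codeword length for each symbol probability."""
    # Heap entry: (probability, tie-breaker, indices of symbols under this node)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    tie = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:            # each merge adds one bit to all symbols below it
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, tie, s1 + s2))
        tie += 1
    return lengths

probs = [0.90, 0.05, 0.03, 0.02]
lengths = huffman_lengths(probs)
avg_huffman = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * math.log2(p) for p in probs)   # arithmetic coding's asymptotic rate
print(lengths, f"Huffman {avg_huffman:.3f} b/sym vs entropy {entropy:.3f} b/sym")
```

For this skewed source, Huffman must spend a full bit on the 0.90-probability symbol (which ideally costs only about 0.15 bits), averaging 1.15 bits/symbol against an entropy of roughly 0.62 bits/symbol; an arithmetic coder can approach the latter, which is the flexibility ACBR exploits.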
Abstract:
We introduce a quality-controlled observational atmospheric, snow, and soil data set from Snoqualmie Pass, Washington, U.S.A., to enable testing of hydrometeorological and snow process representations within a rain-snow transitional climate where existing observations are sparse and limited. Continuous meteorological forcing (including air temperature, total precipitation, wind speed, specific humidity, air pressure, and short- and longwave irradiance) is provided at hourly intervals for a 24-year historical period (water years 1989-2012) and at half-hourly intervals for a more recent period (water years 2013-2015), separated based on the availability of observations. Additional observations include 40 years of snow-board new-snow accumulation, multiple measurements of total snow depth, and manual snow pits, while more recent years include sub-daily surface temperature, snowpack drainage, soil moisture and temperature profiles, and eddy-covariance-derived turbulent heat flux. This data set is ideal for testing hypotheses about energy balance and soil and snow processes in the rain-snow transition zone. Plots of live data can be found here: http://depts.washington.edu/mtnhydr/cgi/plot.cgi
Abstract:
Face recognition from images or video footage requires a certain level of recorded image quality. This paper derives acceptable bitrates (relating to levels of compression and, consequently, quality) of footage with human faces, using an industry implementation of the H.264/MPEG-4 AVC standard and the Closed-Circuit Television (CCTV) recording systems on London buses. The London buses application is utilized as a case study for setting up a methodology and implementing suitable data analysis for face recognition from recorded footage that has been degraded by compression. The majority of CCTV recorders on buses use a proprietary format based on the H.264/MPEG-4 AVC video coding standard, exploiting both spatial and temporal redundancy. Low bitrates are favored in the CCTV industry to save storage and transmission bandwidth, but they compromise the usefulness of the recorded imagery. In this context, usefulness is determined by the presence of enough facial information remaining in the compressed image to allow a specialist to recognize a person. The investigation includes four steps: (1) development of a video dataset representative of typical CCTV bus scenarios; (2) selection and grouping of video scenes based on local (facial) and global (entire scene) content properties; (3) psychophysical investigations to identify the key scenes most affected by compression, using an industry implementation of H.264/MPEG-4 AVC; and (4) testing of CCTV recording systems on buses with the key scenes, with further psychophysical investigations. The results showed a dependency upon scene content properties: very dark scenes and scenes with high levels of spatial-temporal busyness were the most challenging to compress, requiring higher bitrates to maintain useful information.