Resumo:
Hydrogen has been called the fuel of the future, and as its non-renewable counterparts become scarce the economic viability of hydrogen gains traction. The potential of hydrogen is marked by its high mass-specific energy density and wide applicability as a fuel in fuel cell vehicles and homes. However, hydrogen’s volume must be reduced via pressurization or liquefaction in order to make it more transportable and volume efficient. Currently, the vast majority of industrially produced hydrogen comes from steam reforming of natural gas. This practice yields low-pressure gas, which must then be compressed at considerable cost, and uses fossil fuels as a feedstock, leaving behind harmful CO and CO2 gases as by-products. The second method used by industry to produce hydrogen gas is low-pressure electrolysis. In comparison, the electrolysis of water at low pressure can produce pure hydrogen and oxygen gas with no harmful by-products using only water as a feedstock, but the product gas will still need to be compressed before use. Multiple theoretical works agree that high-pressure electrolysis could reduce the energy losses due to product gas compression. However, these works openly admit that their projected gains are purely theoretical and ignore the practical limitations and resistances of a real-life high-pressure system. The goal of this work is to experimentally confirm the proposed thermodynamic gains of ultra-high-pressure electrolysis in alkaline solution and to characterize the behavior of a real-life high-pressure system.
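To make the projected gain concrete: in the ideal isothermal limit, the mechanical work of compressing one mole of hydrogen, W = RT ln(P2/P1), equals the extra reversible electrical work implied by the Nernst voltage penalty of evolving the gas directly at pressure. The sketch below uses assumed pressures of 1 bar and 700 bar and is purely illustrative; it is not taken from this work.

import math

R = 8.314          # gas constant, J/(mol K)
T = 298.15         # temperature, K
F = 96485          # Faraday constant, C/mol
P1, P2 = 1e5, 7e7  # Pa: 1 bar feed gas vs. 700 bar delivery (assumed values)

# Ideal isothermal compression work per mole of H2 (mechanical route)
w_mech = R * T * math.log(P2 / P1)            # J/mol

# Extra reversible cell voltage needed to evolve H2 directly at P2 (Nernst term)
dV = (R * T) / (2 * F) * math.log(P2 / P1)    # V per cell
w_elec = 2 * F * dV                           # J/mol; identical in the ideal limit

print(f"Ideal isothermal compression work: {w_mech / 1000:.1f} kJ/mol H2")
print(f"Nernst voltage penalty: {dV * 1000:.1f} mV ({w_elec / 1000:.1f} kJ/mol H2)")

Neither route is ideal in practice, which is precisely why the real losses of a high-pressure cell need to be measured experimentally.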
Resumo:
Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis work utilizes blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, and assesses the feasibility of extracting functional connectivity networks using different methods, as well as the dynamic variability within some of those methods. Furthermore, this work looks into producing valid networks using a sparsely sampled subset of the original data.
In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations, in exploring how the resting-state components interact in space and time.
Correlation is perhaps the simplest technique. Using this technique, resting-state patterns are identified based on how similar each voxel’s time profile is to that of a seed region. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject’s scan session as well as from 16 subjects.
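A minimal seed-based correlation sketch of the kind described above is given below; the array shapes, the seed mask and the toy data are illustrative assumptions rather than the actual pipeline of this thesis.

import numpy as np

def seed_correlation(data, seed_mask):
    """data: (n_timepoints, n_voxels); seed_mask: boolean array (n_voxels,)."""
    seed_ts = data[:, seed_mask].mean(axis=1)                      # seed time course
    d = data - data.mean(axis=0)
    s = seed_ts - seed_ts.mean()
    r = (d * s[:, None]).sum(axis=0) / (
        np.sqrt((d ** 2).sum(axis=0)) * np.sqrt((s ** 2).sum()))  # Pearson r per voxel
    return r

data = np.random.randn(240, 5000)            # toy data: 240 volumes x 5000 voxels
mask = np.zeros(5000, dtype=bool)
mask[:50] = True                             # hypothetical seed region
print(seed_correlation(data, mask).shape)    # one correlation value per voxel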
Independent component analysis, the second technique, has established software implementations. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.
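A compact sketch of spatial ICA using scikit-learn's FastICA is shown below; it stands in for the established fMRI ICA software referred to above, and the data dimensions are toy values. Voxels are treated as samples so that the extracted sources are spatially independent maps.

import numpy as np
from sklearn.decomposition import FastICA

X = np.random.randn(5000, 240)           # toy data: 5000 voxels x 240 volumes
ica = FastICA(n_components=20, max_iter=500, random_state=0)
spatial_maps = ica.fit_transform(X)      # (5000, 20): spatially independent maps
time_courses = ica.mixing_               # (240, 20): associated time courses
print(spatial_maps.shape, time_courses.shape)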
Using principal component analysis, the dimensionality of the data is reduced to find the directions in which the variance of the data is greatest. This method utilizes the same basic matrix mathematics as ICA, with a few important differences that will be outlined later in this text. Using this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
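For comparison, a minimal PCA sketch via the singular value decomposition is shown below (again with toy dimensions, not the thesis data): the right singular vectors of the mean-centred data are the spatial principal components, ordered by explained variance.

import numpy as np

def pca_components(X, n_components=5):
    """X: (n_timepoints, n_voxels); returns spatial components and explained variance."""
    Xc = X - X.mean(axis=0, keepdims=True)             # remove each voxel's mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Vt[:n_components], explained[:n_components]

X = np.random.randn(240, 5000)                         # toy data
maps, var = pca_components(X)
print(maps.shape, var)                                 # (5, 5000) spatial maps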
To begin to investigate the dynamics of functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to study how the correlation coefficients evolve over time for different window sizes. This technique makes it apparent that the correlation level with the seed region is not static across the scan length.
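The sliding-window idea can be sketched as below; the window length and step are assumed parameters for illustration, not the values used in this work.

import numpy as np

def sliding_window_corr(seed_ts, voxel_ts, window=40, step=5):
    r = []
    for start in range(0, len(seed_ts) - window + 1, step):
        r.append(np.corrcoef(seed_ts[start:start + window],
                             voxel_ts[start:start + window])[0, 1])
    return np.array(r)

t = np.arange(240)
seed = np.sin(t / 10.0) + 0.5 * np.random.randn(240)     # toy seed time course
voxel = np.sin(t / 10.0 + 0.3) + 0.5 * np.random.randn(240)
print(sliding_window_corr(seed, voxel))                   # correlation varies across windows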
The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the continuous time series. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. In addition to boosted computational efficiency, the benefit of using a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
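The point-process idea can be sketched as follows: keep only the time points at which the seed signal exceeds a threshold (here an assumed +1 standard deviation) and average the corresponding volumes.

import numpy as np

def point_process_map(data, seed_ts, threshold_sd=1.0):
    """data: (n_timepoints, n_voxels); seed_ts: (n_timepoints,)."""
    z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    events = z > threshold_sd                  # brief high-amplitude events only
    return data[events].mean(axis=0), int(events.sum())

data = np.random.randn(240, 5000)              # toy data
seed = data[:, :50].mean(axis=1)               # hypothetical seed region
event_map, n_events = point_process_map(data, seed)
print(event_map.shape, n_events)               # a map built from far fewer frames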
This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique that is currently used by many scientists studying functional connectivity patterns. The PCA technique is not optimal for the level of noise and the distribution of the data sets. The correlation technique is simple and obtains good results; however, a seed region is needed and the method assumes that the DMN regions are correlated throughout the entire scan. When the more dynamic aspects of correlation were examined, changing patterns of correlation were evident. The last, point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.
Resumo:
A substantial amount of information on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing data production, however, pushes data analytic platforms to their limit. This thesis proposes techniques for more efficient textual big data analysis suitable for the Hadoop analytic platform. This research explores the direct processing of compressed textual data. The focus is on developing novel compression methods with a number of desirable properties to support text-based big data analysis in distributed environments. The novel contributions of this work include the following. Firstly, a Content-aware Partial Compression (CaPC) scheme is developed. CaPC makes a distinction between informational and functional content, in which only the informational content is compressed. Thus, the compressed data is made transparent to existing software libraries, which often rely on functional content to work. Secondly, a context-free bit-oriented compression scheme (Approximated Huffman Compression) based on the Huffman algorithm is developed. This uses a hybrid data structure that allows pattern searching in compressed data in linear time. Thirdly, several modern compression schemes have been extended so that the compressed data can be safely split with respect to logical data records in distributed file systems. Furthermore, an innovative two-layer compression architecture is used, in which each compression layer is appropriate for the corresponding stage of data processing. Peripheral libraries are developed that seamlessly link the proposed compression schemes to existing analytic platforms and computational frameworks, and also make the use of the compressed data transparent to developers. The compression schemes have been evaluated for a number of standard MapReduce analysis tasks using a collection of real-world datasets. In comparison with existing solutions, they have shown substantial improvement in performance and significant reduction in system resource requirements.
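As a toy illustration of the content-aware idea (a simplification, not the thesis's CaPC implementation): functional content such as delimiters and field separators passes through untouched, so existing parsers keep working, while frequent informational tokens are replaced by short codes.

import re
from collections import Counter

def build_codebook(lines):
    # Map the most frequent informational tokens to short codes.
    counts = Counter(tok for line in lines for tok in re.findall(r"\w+", line))
    return {tok: f"\x01{i:x}\x02" for i, (tok, _) in enumerate(counts.most_common(200))}

def capc_encode(line, codebook):
    # Replace only word tokens; whitespace, '=', '-', etc. pass through unchanged.
    return re.sub(r"\w+", lambda m: codebook.get(m.group(0), m.group(0)), line)

def capc_decode(line, codebook):
    rev = {v: k for k, v in codebook.items()}
    return re.sub(r"\x01[0-9a-f]+\x02", lambda m: rev[m.group(0)], line)

lines = ["2015-06-01 INFO user=alice action=login",
         "2015-06-01 INFO user=bob action=logout"]
cb = build_codebook(lines)
encoded = [capc_encode(l, cb) for l in lines]
assert [capc_decode(l, cb) for l in encoded] == lines   # lossless round trip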
Resumo:
The area west of the Antarctic Peninsula is a key region for studying and understanding the history of glaciation in the southern high latitudes during the Neogene with respect to variations of the western Antarctic continental ice sheet, variable sea-ice cover, induced eustatic sea level change, as well as consequences for the global climatic system (Barker, Camerlenghi, Acton, et al., 1999). Sites 1095, 1096, and 1101 were drilled on sediment drifts forming the continental rise to examine the nature and composition of sediments deposited under the influence of the Antarctic Peninsula ice sheet, which has repeatedly advanced to the shelf edge and subsequently released glacially eroded material on the continental shelf and slope (Barker et al., 1999). Mass gravity processes on the slope are responsible for downslope sediment transport by turbidity currents within a channel system between the drifts. Furthermore, bottom currents redistribute the sediments, which leads to final build up of drift bodies (Rebesco et al., 1998). The high-resolution sedimentary sequences on the continental rise can be used to document the variability of continental glaciation and, therefore, allow us to assess the main factors that control the sediment transport and the depositional processes during glaciation periods and their relationship to glacio-eustatic sea level changes. Site 1095 lies in 3840 m of water in a distal position on the northwestern lower flank of Drift 7, whereas Site 1096 lies in 3152 m of water in a more proximal position within Drift 7. Site 1101 is located at 3509 m water depth on the northwestern flank of Drift 4. All three sites have high sedimentation rates. The oldest sediments were recovered at Site 1095 (late Miocene; 9.7 Ma), whereas sediments of Pliocene age were recovered at Site 1096 (4.7 Ma) and at Site 1101 (3.5 Ma). The purpose of this work is to provide a data set of bulk sediment parameters such as CaCO3, total organic carbon (TOC), and coarse-fraction mass percentage (>63 µm) measured on the sediments collected from the continental rise of the western Antarctic Peninsula (Holes 1095A, 1095B, 1096A, 1096B, 1096C, and 1101A). This information can be used to understand the complex depositional processes and their implication for variations in the climatic system of the western Pacific Antarctic margin since 9.7 Ma (late Miocene). Coarse-fraction particles (125-500 µm) from the late Pliocene and Pleistocene (4.0 Ma to recent) sediments recovered from Hole 1095A were microscopically analyzed to gather more detailed information about their variability and composition through time. These data can yield information about changes in potential source regions of the glacially eroded material that has been transported during repeated periods of ice-sheet movements on the shelf.
Resumo:
This paper describes the design, tuning, and extensive field testing of an admittance-based Autonomous Loading Controller (ALC) for robotic excavation. Several iterations of the ALC were tuned and tested in fragmented rock piles—similar to those found in operating mines—by using both a robotic 1-tonne capacity Kubota R520S diesel-hydraulic surface loader and a 14-tonne capacity Atlas Copco ST14 underground load-haul-dump (LHD) machine. On the R520S loader, the ALC increased payload by 18 % with greater consistency, although with more energy expended and longer dig times when compared with digging at maximum actuator velocity. On the ST14 LHD, the ALC took 61 % less time to load 39 % more payload when compared to a single manual operator. The manual operator made 28 dig attempts using three different digging strategies, and had one failed dig. The tuned ALC made 26 dig attempts at 10 and 11 MN target force levels. All 10 digs at the 11 MN target succeeded, while 6 of the 16 digs at the 10 MN target failed. The results presented in this paper suggest that the admittance-based ALC is more productive and consistent than manual operators, but that care should be taken when detecting entry into the muck pile.
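A minimal sketch of a first-order admittance law of the kind underlying such a controller is shown below; the damping, force target and velocity limit are assumed values for illustration, not the tuned parameters of the ALC described in the paper. The bucket is slowed as the reaction force approaches the target, trading raw actuator speed for a controlled, consistent dig.

def admittance_velocity(f_measured, f_target, v_max, damping):
    """Map the force error to a commanded actuator velocity (first-order admittance)."""
    v_cmd = (f_target - f_measured) / damping   # v = force error / damping
    return max(0.0, min(v_max, v_cmd))          # saturate at the actuator limit

# Example: 10 MN target force, assumed damping of 250 MN*s/m, 0.08 m/s velocity limit
for f in (0.0e6, 4.0e6, 8.0e6, 9.9e6):
    print(f"{f / 1e6:4.1f} MN -> {admittance_velocity(f, 10.0e6, 0.08, 250e6):.3f} m/s")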
Resumo:
A recently developed novel biomass fuel pellet, the Q’ Pellet, offers significant improvements over conventional white pellets, with characteristics comparable to those of coal. The Q’ Pellet was initially created at bench scale using a proprietary die and punch design, in which the biomass was torrefied in-situ and then compressed. To bring the benefits of the Q’ Pellet to a commercial level, it must be capable of being produced in a continuous process at a competitive cost. A prototype machine was previously constructed in a first effort to assess continuous processing of the Q’ Pellet. The prototype torrefied biomass in a separate, ex-situ reactor and transported it into a rotary compression stage. Upon evaluation, parts of the prototype were found to be unsuccessful and required a redesign of the material transport method as well as the compression mechanism. A process was developed in which material was torrefied ex-situ and extruded in a pre-compression stage. The extruded biomass overcame multiple handling issues that had been experienced with un-densified biomass, facilitating efficient material transport. Biomass was extruded directly into a novel re-designed pelletizing die, which incorporated a removable cap, ejection pin and a die spring to accommodate a repeatable continuous process. Although after several uses the die required manual intervention due to minor design and manufacturing quality limitations, the system clearly demonstrated the capability of producing the Q’ Pellet in a continuous process. Q’ Pellets produced by the pre-compression method and pelletized in the re-designed die had an average dry basis gross calorific value of 22.04 MJ/kg, pellet durability index of 99.86% and dried to 6.2% of its initial mass following 24 hours submerged in water. This compares well with literature results of 21.29 MJ/kg, 100% pellet durability index and <5% mass increase in a water submersion test. These results indicate that the methods developed herein are capable of producing Q’ Pellets in a continuous process with fuel properties competitive with coal.
Resumo:
This study investigates the effect of foam core density and skin type on the behaviour of sandwich panels as structural beams tested in four-point bending and axially compressed columns of varying slenderness and skin thickness. Bio-composite unidirectional flax fibre-reinforced polymer (FFRP) is compared to conventional glass-FRP (GFRP) as the skin material used in conjunction with three polyisocyanurate (PIR) foam cores with densities of 32, 64 and 96 kg/m3. Eighteen 1000 mm long flexural specimens were fabricated and tested to failure comparing the effects of foam core density between three-layer FFRP skinned and single-layer GFRP skinned panels. A total of 132 columns with slenderness ratios (kLe/r) ranging from 22 to 62 were fabricated with single-layer GFRP skins, and one-, three-, and five-layer FFRP skins for each of the three foam core densities. The columns were tested to failure in concentric axial compression using pinned-end conditions to compare the effects of each material type and panel height. All specimens had a foam core cross-section of 100x50 mm with 100 mm wide skins of equal thickness. In both flexural and axial loading, panels with skins composed of three FFRP layers showed equivalent strength to those with a single GFRP layer for all slenderness ratios and core densities examined. Doubling the core density from 32 to 64 kg/m3 and tripling the density to 96 kg/m3 led to flexural strength increases of 82 and 213%, respectively. Both FFRP and GFRP columns showed a similar variety of failure modes related to slenderness. Columns of low slenderness (22-25) failed largely due to localized single-skin buckling, while those of high slenderness (51-61) failed primarily by global buckling followed by secondary skin buckling. Columns with intermediate slenderness experienced both localized and global failure modes. High-density foam cores more commonly exhibited core shear failure. Doubling the core density of the columns resulted in peak axial load increases, across all slenderness ratios, of 73, 56, 72 and 71% for skins with one, three and five FFRP layers, and one GFRP layer, respectively. Tripling the core density resulted in respective peak load increases of 116, 130, 176 and 170%.
Resumo:
Multi-frequency eddy current measurements are employed in estimating pressure tube (PT) to calandria tube (CT) gap in CANDU fuel channels, a critical inspection activity required to ensure fitness for service of fuel channels. In this thesis, a comprehensive characterization of eddy current gap data is laid out, in order to extract further information on fuel channel condition, and to identify generalized applications for multi-frequency eddy current data. A surface profiling technique, generalizable to multiple probe and conductive material configurations has been developed. This technique has allowed for identification of various pressure tube artefacts, has been independently validated (using ultrasonic measurements), and has been deployed and commissioned at Ontario Power Generation. Dodd and Deeds solutions to the electromagnetic boundary value problem associated with the PT to CT gap probe configuration were experimentally validated for amplitude response to changes in gap. Using the validated Dodd and Deeds solutions, principal components analysis (PCA) has been employed to identify independence and redundancies in multi-frequency eddy current data. This has allowed for an enhanced visualization of factors affecting gap measurement. Results of the PCA of simulation data are consistent with the skin depth equation, and are validated against PCA of physical experiments. Finally, compressed data acquisition has been realized, allowing faster data acquisition for multi-frequency eddy current systems with hardware limitations, and is generalizable to other applications where real time acquisition of large data sets is prohibitive.
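The skin depth relation referred to above is δ = sqrt(2 / (ω μ σ)); the short sketch below evaluates it over a range of excitation frequencies. The conductivity value is a generic assumption for a non-magnetic zirconium alloy, not a figure taken from this thesis.

import math

MU_0 = 4e-7 * math.pi      # vacuum permeability, H/m
sigma = 1.9e6              # assumed electrical conductivity, S/m
mu_r = 1.0                 # relative permeability of a non-magnetic alloy

for f_khz in (4, 8, 16, 32, 64):
    omega = 2 * math.pi * f_khz * 1e3
    delta = math.sqrt(2.0 / (omega * mu_r * MU_0 * sigma))
    print(f"{f_khz:3d} kHz -> skin depth {delta * 1000:.2f} mm")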
Resumo:
In June 2015, the legal frameworks of the Asian Infrastructure Investment Bank were signed by its 57 founding members. Proposed and initiated by China, this multilateral development bank is considered to be an Asian counterpart to break the monopoly of the World Bank and the International Monetary Fund. In October 2015, China’s Central Bank announced a benchmark interest rate cut to combat the economic slowdown. The easing policy coincided with the European Central Bank’s announcement of doubts over the US Fed’s commitment to raise interest rates. Global stock markets responded positively to China’s move, with the exception of the indexes from Wall Street (Bland, 2015; Elliott, 2015). In the meantime, China’s ‘One Belt, One Road’ (or New Silk Road Economic Belt) became a topic of discourse in relation to its growing global economy, as China pledged $40 billion to trade and infrastructure projects (Bermingham, 2015). The foreign policy aims to reinforce the economic belt from western China through Central Asia towards Europe, as well as to construct maritime trading routes from coastal China through the South China Sea (Summers, 2015). In 2012, The Economist launched a new China section, to reveal the complexity of the ‘meteoric rise’ of China. John Micklethwait, who was then the chief editor of the magazine, said that China’s emergence as a global power justified giving it a section of its own (Roush, 2012). In July 2015, Hu Shuli, the former chief editor of Caijing, announced the launch of a think tank and financial data service division called Caixin Insight Group, which encompasses the new Caixin China Purchasing Managers Index (PMI). In cooperation with Markit Group, a principal global provider of PMI, the index soon became a widely cited economic indicator. One anecdote from November’s Caixin shows how much has changed: in a high-profile dialogue between Hu Shuli and Kevin Rudd, Hu insisted on asking questions in English; interestingly, the former Prime Minister of Australia insisted on replying in Chinese. These recent developments point to one thing: the economic ascent of China and its increasing influence on the power play between economics and politics in world markets. China has begun to take a more active role in rule-making and enforcement under neoliberal frameworks. However, due to the country’s size and the scale of its economy in comparison to other countries, China’s version of globalisation has unique characteristics. The ‘Capitalist-socialist’ paradox is vital to China’s market-oriented transformation. In order to comprehend how such unique features are articulated and understood, there are several questions worth investigating in the realms of media and communication studies, such as how China’s neoliberal restructuring is portrayed and perceived by different types of interested parties, and how these portrayals are de-contextualised and re-contextualised in global or Anglo-American narratives. Therefore, based on a combination of the themes of globalisation, financial media and China’s economic integration, this thesis attempts to explore how financial media construct the narratives of China’s economic globalisation through the deployment of comparative and multi-disciplinary approaches.
Two outstanding elite financial magazines, Britain’s The Economist, which has a global readership and influence, and Caijing, China’s leading financial magazine, are chosen as case studies to exemplify differing media discourses, representing, respectively, Anglo-American and Chinese socio-economic and political backgrounds, as well as their own journalistic cultures. This thesis tries to answer the questions of how and why China’s neoliberal restructuring is constructed from a globally-oriented perspective. The construction primarily involves people who are influential in business and policymaking. Hence, the analysis falls into the paradigm of elite-elite communication, which is an important but relatively less developed perspective in studying China and its globalisation. The comparison of the characteristics of narrative construction is the result of the textual analysis of articles published over a ten-year period (mid-1998 to mid-2008). The corpus of samples comes from the two media outlets’ coverage of three selected events: China becoming a member of the World Trade Organization, its outward direct investment, and the listing of stocks of Chinese companies in overseas exchanges, which are mutually exclusive in sample collection and collectively exhaustive in the inclusion of articles regarding China’s economic globalisation. The findings help to understand that, despite language, socio-economic and political differences, elite financial media with globally-oriented readerships share similar methods of and approaches to agenda setting, the evaluation of news prominence, the selection of frames, and the advocacy of deeply rooted neoliberal ideas. The comparison of their distinctive features reflects the different phases of building up the sense of identity in their readers as global elites, as well as the different economic interests that are aligned with the corresponding readerships. However, textual analysis is only relevant in terms of exploring how the narratives are constructed and the elements they include; textual analysis alone prevents us from seeing the obstacles and the constraints of the journalistic practices of construction. Therefore, this thesis provides a brief discussion of interviews with practitioners from the two media, in order to understand how similar or different narratives are manifested and perceived, how the concept of neoliberalism deviates from and is justified in the Chinese context, and how and for what purpose deviations arise from Western to Chinese contexts. The thesis also contributes to defining financial media in the domain of elite communication. The relevant and closely interlocking concepts of globalisation, elitism and neoliberalism are discussed, and are used as a theoretical bedrock in the analysis of texts and contexts. It is important to address the agenda-setting and ideological role of elite financial media, because of its narrative formula of infusing business facts with opinions, which is important in constructing the global elite identity as well as influencing neoliberal policy-making. On the other hand, ‘journalistic professionalism’ has been redefined, in that the elite identity is shared by the content producer, reader and the actors in the news stories emerging from the much-compressed news cycle.
The professionalism of elite financial media requires a dual definition: that of being professional in the understanding of business facts and statistics, and that of being professional in making sense of stories by deploying economic logic.
Resumo:
Scholars around the world are focusing on the study of the smart city phenomenon. Spanish scholarly output on this topic has grown exponentially in recent years. The new smart cities are founded on new visions of urban development that integrate multiple technological solutions tied to the world of information and communication, all of them current and at the service of the city's needs. The Spanish-language literature on this topic comes from fields as diverse as Architecture, Engineering, Political Science and Law, and Business Studies. The purpose of smart cities is to improve the lives of their citizens through the implementation of information and communication technologies that meet the needs of their inhabitants, so researchers in the field of Communication and Information Sciences have much to contribute. This work analyses a total of 120 texts and concludes that the smart city phenomenon will be one of the central axes of multidisciplinary research in our country in the coming years.
Resumo:
The expansion of a magnetized high-pressure plasma into a low-pressure ambient medium is examined with particle-in-cell simulations. The magnetic field points perpendicular to the plasma's expansion direction and binary collisions between particles are absent. The expanding plasma steepens into a quasi-electrostatic shock that is sustained by the lower-hybrid (LH) wave. The ambipolar electric field points in the expansion direction and, together with the background magnetic field, induces a fast E × B drift of the electrons. The drifting electrons modify the background magnetic field, resulting in its pile-up by the LH shock. The magnetic pressure gradient force accelerates the ambient ions ahead of the LH shock, reducing the relative velocity between the ambient plasma and the LH shock to about the phase speed of the shocked LH wave, transforming the LH shock into a nonlinear LH wave. The oscillations of the electrostatic potential have a larger amplitude and wavelength in the magnetized plasma than in an unmagnetized one with otherwise identical conditions. The energy loss to the drifting electrons leads to a noticeable slowdown of the LH shock compared to that in an unmagnetized plasma.
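For reference, the E × B drift invoked above is v_d = (E × B) / |B|², independent of particle charge and mass; the field magnitudes in the sketch below are arbitrary illustrative values, not those of the simulations.

import numpy as np

E = np.array([1.0e4, 0.0, 0.0])   # ambipolar electric field along the expansion, V/m
B = np.array([0.0, 0.0, 0.5])     # perpendicular background magnetic field, T

v_drift = np.cross(E, B) / np.dot(B, B)
print(v_drift)                    # [0., -20000., 0.] m/s, perpendicular to E and B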
Resumo:
Over the past few decades, work on infrared sensor applications has advanced considerably worldwide. A certain difficulty remains, however: objects are not always sufficiently clear, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, non-destructive testing, and related technologies. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the non-subsampled Contourlet transform (NSCT). Image fusion can be regarded as a continuation of the single infrared image enhancement model, in that it combines infrared and visible images into a single image in order to represent and enhance all the useful information and features of the source images, since a single image cannot contain all the relevant or available information owing to the restrictions of any single imaging sensor. We review and investigate the development of infrared image enhancement techniques; we then focus on single infrared image enhancement and propose a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. The infrared and visible image fusion techniques rely on an accurate registration of the source images acquired by different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, which leads to very precisely registered images and increased benefits for the fusion processing. For the fusion of infrared and visible images, a series of advanced and effective approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the subsequent proposed fusion approaches. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, leading to fusion results that are better than those obtained with general non-adaptive methods. An NSCT-based fusion approach is proposed that employs compressed sensing (CS) and total variation (TV) on sparsely sampled coefficients and performs an accurate reconstruction of the fused coefficients; it obtains much better fusion results through a pre-enhancement of the infrared image and by reducing the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients within the fusion process, which leads to better results obtained more quickly and in an efficient manner.
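As a rough, generic illustration of transform-domain fusion (a simplified stand-in using a single-level wavelet transform from PyWavelets, not the NSCT-based schemes proposed in the thesis): the approximation bands are averaged and the detail coefficients are fused with a maximum-absolute-value rule, assuming the source images are already registered.

import numpy as np
import pywt

def fuse_wavelet(ir, vis, wavelet="db2"):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(ir.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(vis.astype(float), wavelet)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # max-abs rule
    fused = (0.5 * (cA1 + cA2), (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)

ir = np.random.rand(128, 128)       # stand-in infrared image
vis = np.random.rand(128, 128)      # stand-in visible image (registered, same size)
print(fuse_wavelet(ir, vis).shape)  # (128, 128)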
Resumo:
Thesis (Ph.D.)--University of Washington, 2016-08