881 results for Compressed workweek


Relevance:

10.00%

Publisher:

Abstract:

Communication has become an essential function in our civilization. With the increasing demand for communication channels, it is now necessary to find ways to optimize the use of their bandwidth. One way to achieve this is by transforming the information before it is transmitted. This transformation can be performed by several techniques, one of the newest being the use of wavelets. Wavelet transformation refers to the act of breaking down a signal into components called details and trends by using small waveforms that have a zero average in the time domain. After this transformation, the data can be compressed by discarding the details and transmitting only the trends. At the receiving end, the trends are used to reconstruct the image. In this work, the wavelet used for the transformation of an image is selected from a library of available bases. The accuracy of the reconstruction, after the details are discarded, depends on the wavelets chosen from the wavelet basis library. The system developed in this thesis takes a 2-D image and decomposes it using a wavelet bank. A digital signal processor is used to achieve near real-time performance in this transformation task. A contribution of this thesis project is the development of a DSP-based test bed for the future development of new real-time wavelet transformation algorithms.
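As a rough illustration of the discard-the-details scheme described above, the sketch below performs a single-level 2-D decomposition, zeros the detail sub-bands, and reconstructs from the trend alone. It assumes the PyWavelets package and a Haar basis for simplicity; the thesis itself selects the wavelet from a basis library and runs on a DSP.

```python
# A minimal sketch of wavelet-based compression by discarding details.
# Assumes the PyWavelets (pywt) package; the Haar basis is an arbitrary choice.
import numpy as np
import pywt

def compress_reconstruct(image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    # Decompose into a trend (approximation) and three detail sub-bands.
    trend, (horiz, vert, diag) = pywt.dwt2(image, wavelet)
    # "Compression": keep only the trend; zero out all detail coefficients.
    zeros = np.zeros_like(horiz)
    # Reconstruct from the trend alone; accuracy depends on the chosen wavelet.
    return pywt.idwt2((trend, (zeros, zeros, zeros)), wavelet)

image = np.random.rand(64, 64)
approx = compress_reconstruct(image)
print("mean reconstruction error:", np.abs(image - approx).mean())
```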

Relevance:

10.00%

Publisher:

Abstract:

Correct distance perception is important for executing various interactive tasks such as navigation, selection, and manipulation. It is known, however, that there is, in general, a significant compression of perceived distance in virtual environments, mainly when using Head-Mounted Displays (HMDs). This perceived distance compression may cause various problems and even negatively affect the utility of applications that depend on correct distance judgments. The scientific community has so far not been able to determine the causes of distance perception compression in virtual environments. For this reason, the objective of this work was to investigate, through experiments with users, the influence of both the field of view (FoV) and the distance estimation methods on this perceived compression. To that end, an experimental comparison between the my3D device and an HMD was carried out with 32 participants, seeking information on the causes of the compressed perception. The results showed that the my3D has inferior capabilities compared to the HMD, resulting in worse estimates, on average, with both tested estimation methods. The causes are believed to be the incorrect stimulation of the user's peripheral vision, the smaller FoV, and the weaker sense of immersion, as described by the participants of the experiment.

Relevance:

10.00%

Publisher:

Abstract:

From an economic standpoint, powder metallurgy (P/M) is a technique widely used for the production of small parts. Through P/M and prior comminution of solid waste such as ferrous chips, it is possible to produce highly dense sintered parts of interest to the automotive, electronics, and aerospace industries. However, without prior comminution of the chips, producing bodies with a density equal to the theoretical density by conventional sintering techniques requires the use of additives or temperatures significantly higher than 1250°C. An alternative route to producing high-density sintered bodies from ferrous chips (≤ 850 µm) and solid-phase sintering is compaction under high pressure (HP). In this work, different compaction pressures were used to produce sintered bodies from SAE 1050 carbon steel chips. Specifically, the objective was to investigate the effect of high-pressure compaction on the densification behavior of the sintered samples. Samples of SAE 1050 carbon steel chips were uniaxially cold compacted at 500 and 2000 MPa, respectively. The green compacts obtained were sintered under a carbon atmosphere at 1100 and 1200°C for 90 minutes, with a heating rate of 20°C/min. The starting materials and the sintered bodies were characterized by optical microscopy, SEM, XRD, density measurements (geometric: mass/volume, and pycnometry), Vickers microhardness, and Rockwell hardness. The results showed that the compacts produced under 2000 MPa presented relative density values between 93% and 100% of the theoretical density and microhardness between 150 HV and 180 HV. In contrast, the compacts produced under 500 MPa showed a very heterogeneous microstructure, density values below 80% of the theoretical density, and a structure inadequate for carrying out hardness and microhardness measurements. The results indicate that high-pressure compaction of ferrous chips is a promising route for improving the sinterability of this type of material: in addition to promoting greater compaction of the starting material, the external stress acts together with the surface tension as the driving force for the sintering process. Additionally, extremely high pressures allow plastic deformation of the material, providing intimate and extended contact between the particles and eliminating cracks and pores. This tends to reduce the time and/or temperature required for good sintering, avoiding excessive grain growth without the use of additives. Moreover, higher pressures fracture the grains of brittle materials or of highly hardened ductile materials, providing a finer starting powder for sintering without the contamination risk present when prior powder comminution methods are used.
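For reference, the relative-density figures quoted above are simple ratios against the theoretical density of the steel; a hypothetical worked example (illustrative numbers, not the thesis's measurements):

```python
# Relative density = measured density / theoretical density.
# SAE 1050 carbon steel has a theoretical density of roughly 7.85 g/cm^3;
# the mass and volume below are assumed values for illustration.
mass_g = 7.30            # measured mass of the sintered body
volume_cm3 = 1.00        # measured geometric volume
rho_theoretical = 7.85   # g/cm^3

relative_density = 100 * (mass_g / volume_cm3) / rho_theoretical
print(f"relative density: {relative_density:.1f}% of theoretical")  # ~93%
```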

Relevance:

10.00%

Publisher:

Abstract:

Mining in Brazil plays a key role in economic and social development, contributing directly to improving the lives of the population. However, mining activity, even when done responsibly and with a proper waste management study to reduce its effects, may cause harmful damage to the environment. Other forms of pollution are also caused by mining activity: the visual pollution caused by open-air waste storage, and the noise pollution caused by the excessive noise of the machines, both in ore extraction and in processing. An alternative way to lessen the environmental impacts caused by mining is the use of the waste in the layers that compose highway pavements. Thus, this work sets out to give a proper disposal to the iron ore processing waste resulting from the mining activity of the mining group Mhag Services and Mining S/A, in the Bonito mine, located in Jucurutu, a town in the state of Rio Grande do Norte. The iron ore waste was stabilized with a granular soil from the city of Macaiba, also in the state of Rio Grande do Norte, which is being used in the duplication of the BR-304 in the stretch known as Reta Tabajara. The work was developed in three stages. The first consisted of chemical and mineralogical tests, physical characterization tests, and paving tests on the iron ore waste. The second stage corresponds to the same tests performed on the granular soil. The third stage comprises the above-mentioned tests for three different mixtures of iron ore waste and granular soil, namely: 15% iron ore waste with 85% granular soil, 25% with 75%, and 50% with 50%. The technical feasibility of using the iron ore beneficiation waste, compacted at intermediate and modified energies, was verified for use in base, sub-base, subgrade reinforcement, and subgrade layers. The incorporation of iron ore beneficiation residues in highways provides an alternative to the aggregates conventionally used in paving, besides preserving the environment.

Relevance:

10.00%

Publisher:

Abstract:

The Telehealth Brazil Networks Program, created in 2007 with the aim of strengthening primary care and the unified health system (SUS - Sistema Único de Saúde), uses information and communication technologies for distance-learning activities related to health. The use of technology enables interaction between health professionals and/or their patients, furthering the capabilities of Family Health Teams (FHT). The program is grounded in legislation that determines a number of technologies, protocols, and processes guiding the work of Telehealth nuclei in the provision of services to the population. Among these services is teleconsulting: a registered consultation held between workers, professionals, and managers of healthcare through bidirectional telecommunication instruments, in order to answer questions about clinical procedures, health actions, and the work process. With the expansion of the program in 2011, it became possible to detect problems and challenges that affect virtually all nuclei, at different scales in each region. Among these problems are the heterogeneity of platforms, especially for teleconsulting, and the low internet coverage in the municipalities, mainly in the interior cities of Brazil. From this perspective, the aim of this paper is to propose a distributed architecture that uses mobile computing to enable the sending of teleconsultations. This architecture works offline; when an internet connection is available, the data are synchronized with the server. The data travel compressed to reduce the need for high transmission rates. Any Telehealth nucleus can use this architecture through an external service, coupled via a communication interface.
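A minimal sketch of the offline-first, compress-then-sync behavior described above (the local schema, the endpoint, and the use of zlib are assumptions for illustration, not the paper's implementation):

```python
# Queue teleconsultations locally while offline; compress and push them to a
# hypothetical nucleus server endpoint once a connection is available.
import json
import sqlite3
import urllib.request
import zlib

db = sqlite3.connect("teleconsult_queue.db")
db.execute("CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, body TEXT)")

def enqueue(consult: dict) -> None:
    # Works offline: the record is stored until a connection is available.
    db.execute("INSERT INTO queue (body) VALUES (?)", (json.dumps(consult),))
    db.commit()

def sync(server_url: str) -> None:
    # Compress each record before sending to cope with low-bandwidth links.
    for row_id, body in db.execute("SELECT id, body FROM queue").fetchall():
        payload = zlib.compress(body.encode("utf-8"))
        req = urllib.request.Request(server_url, data=payload,
                                     headers={"Content-Encoding": "deflate"})
        urllib.request.urlopen(req)  # assumes the server accepts POSTed payloads
        db.execute("DELETE FROM queue WHERE id = ?", (row_id,))
    db.commit()
```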

Relevance:

10.00%

Publisher:

Abstract:

A sudden hydrocarbon influx from the formation into the wellbore poses a serious risk to the safety of the well. This sudden influx is termed a kick, which, if not controlled, may lead to a blowout. Therefore, early detection of the kick is crucial to minimize the possibility of a blowout. A kick detection system based exclusively on surface monitoring suffers, among other issues, from a high probability of delayed detection. Down-hole monitoring techniques have the potential to detect a kick at an early stage, and could be particularly beneficial when the influx occurs as a result of a lost-circulation scenario. In a lost-circulation scenario, when the down-hole pressure becomes lower than the formation pore pressure, the formation fluid may start to enter the wellbore. The lost volume of drilling fluid is compensated by the formation fluid flowing into the wellbore, making it difficult to identify the kick based on pit (mud tank) volume observations at the surface. This experimental study investigates the occurrence of a kick based on relative changes in the mass flow rate, pressure, density, and conductivity of the fluid down-hole. Moreover, the parameters that are most sensitive to formation fluid are identified, and a methodology to detect a kick without false alarms is reported. A pressure transmitter, a Coriolis flow and density meter, and a conductivity sensor are employed to observe deteriorating well conditions down-hole. These observations are used to assess the occurrence of a kick and the associated blowout risk. Monitoring multiple down-hole parameters has the potential to improve the accuracy of interpretation related to kick occurrence, reduce the number of false alarms, provide a broad picture of down-hole conditions, and shorten the kick detection period. A down-hole assembly of the laboratory-scale drilling rig model and a kick injection setup were designed, measuring instruments were acquired, a frame was fabricated, and the experimental set-up was assembled and tested. This set-up has the necessary features to evaluate kick events while implementing down-hole monitoring techniques. Various kick events are simulated on the drilling rig model. During the first set of experiments, compressed air (which represents the formation fluid) is injected with a constant pressure margin; in the second set, the compressed air is injected with a different pressure margin. The experiments are also repeated with a different pump (flow) rate. This thesis consists of three main parts. The first part gives the general introduction, motivation, and outline of the thesis, along with a brief description of influx: its causes, various leading and lagging indicators, and the several kick detection systems in use in the industry. The second part describes the design and construction of the laboratory-scale down-hole assembly of the drilling rig and the kick injection setup, which is used to implement the proposed methodology for early kick detection. The third part describes the methodology for early kick detection and presents and discusses experimental results showing how different influx events affect the mass flow rate, pressure, conductivity, and density of the fluid down-hole. The last chapter contains a summary of the study and directions for future research.
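A minimal sketch of the multi-parameter idea (the thresholds and voting rule are illustrative assumptions of mine; the thesis's actual detection methodology is described in its third part):

```python
# Flag a possible kick only when several down-hole parameters deviate
# together from a moving baseline, reducing single-sensor false alarms.
from collections import deque

class KickDetector:
    def __init__(self, window=50, rel_threshold=0.05, min_votes=3):
        self.rel_threshold = rel_threshold   # 5% relative change (assumed)
        self.min_votes = min_votes           # sensors that must agree (assumed)
        self.history = {k: deque(maxlen=window)
                        for k in ("flow", "pressure", "density", "conductivity")}

    def update(self, sample: dict) -> bool:
        votes = 0
        for key, value in sample.items():
            hist = self.history[key]
            if len(hist) == hist.maxlen:      # baseline window is full
                baseline = sum(hist) / len(hist)
                if baseline and abs(value - baseline) / abs(baseline) > self.rel_threshold:
                    votes += 1
            hist.append(value)
        return votes >= self.min_votes        # True -> possible kick
```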

Relevance:

10.00%

Publisher:

Abstract:

Recent evidence suggests that the Subtropical Convergence (STC) zone east of New Zealand shifted little from its modern position along Chatham Rise during the last glaciation, and that offshore surface waters north of the STC zone cooled only slightly. However, at nearshore core site P69 (2195 m depth), 115 km off the east coast of North Island and ca 300 km north of the modern STC zone, planktonic foraminiferal species, transfer function data and stable oxygen and carbon isotope records suggest that surface waters were colder by up to 6°C during the late last glacial period compared to the Holocene, and included a strong upwelling signature. Presently site P69 is bathed by south-flowing subtropical waters in the East Cape Current. The nearshore western end of Chatham Rise supports a major bathymetric depression, the Mernoo Saddle, through which some exchange between northern subtropical and southern subantarctic water presently occurs. It is proposed that as a result of much intensified current flows south of the Rise during the last glaciation, a consequence of more compressed subantarctic water masses, lowered sea level, and an expanded and stronger Westerly Wind system, there was accelerated leakage northwards of both Australasian Subantarctic Water and upwelled Antarctic Intermediate Water over Mernoo Saddle in a modified and intensified Southland Current. The expanded cold water masses displaced the south-flowing warm East Cape Current off southeastern North Island, and offshore divergence was accompanied by wind-assisted upwelling of nutrient-rich waters in the vicinity of P69. A comparable kind of inshore cold water jetting possibly characterised most glacial periods since the latest Miocene, and may account for the occasional occurrence of subantarctic marine fossils in onland late Cenozoic deposits north of the STC zone, rather than invoking wholesale major oscillations of the oceanic STC itself.

Relevance:

10.00%

Publisher:

Abstract:

We propose a new, simple approach to enhance the spectral compression process arising from nonlinear pulse propagation in an optical fiber. We numerically show that an additional sinusoidal temporal phase modulation of the pulse enables efficient reduction of the intensity level of the side lobes in the spectrum, which are produced by the mismatch between the initial linear negative chirp of the pulse and the nonlinear positive chirp induced by self-phase modulation. A remarkable increase in both the extent of spectral narrowing and the quality of the compressed spectrum is afforded by the proposed approach across a wide range of experimentally accessible parameters.
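A qualitative toy model of the mechanism (lumped nonlinear phase, no dispersion during propagation; all parameters are illustrative and would need tuning, not the paper's values):

```python
import numpy as np

t = np.linspace(-20, 20, 4096)                 # normalized time grid
field = np.exp(-t**2 / 2)                      # Gaussian envelope
chirped = field * np.exp(1j * 3.0 * t**2)      # initial linear chirp
spm = np.exp(1j * 3.0 * np.abs(field)**2)      # lumped SPM phase: cancels the
                                               # chirp near the pulse center, but
                                               # the mismatch in the wings
                                               # produces spectral side lobes
sin_mod = np.exp(1j * 0.3 * np.sin(0.5 * t))   # extra sinusoidal temporal phase

def spectrum(e):
    return np.abs(np.fft.fftshift(np.fft.fft(e)))**2

spec_plain = spectrum(chirped * spm)           # compressed spectrum, no modulation
spec_mod = spectrum(chirped * sin_mod * spm)   # with sinusoidal phase modulation
# Comparing the two spectra shows how the sinusoidal term reshapes the wings;
# its amplitude and frequency must be tuned to suppress the side lobes.
```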

Relevance:

10.00%

Publisher:

Abstract:

We numerically investigate a fiber laser that contains an active fiber along with a dispersion-decreasing fiber, both operating at normal dispersion. Large-bandwidth pulses are obtained that can be linearly compressed, resulting in ultra-short, high-energy pulse generation. ©2010 Crown.

Relevance:

10.00%

Publisher:

Abstract:

The goal of my Ph.D. thesis is to enhance the visualization of the peripheral retina using wide-field optical coherence tomography (OCT) in a clinical setting.

OCT has gained widespread adoption in clinical ophthalmology due to its ability to visualize diseases of the macula and central retina in three dimensions; however, clinical OCT has a limited field-of-view of 30°. There has been increasing interest in obtaining high-resolution images outside of this narrow field-of-view, because three-dimensional imaging of the peripheral retina may prove to be important in the early detection of neurodegenerative diseases, such as Alzheimer's and dementia, and in the monitoring of known ocular diseases, such as diabetic retinopathy, retinal vein occlusions, and choroidal masses.

Before attempting to build a wide-field OCT system, we need to better understand the peripheral optics of the human eye. Shack-Hartmann wavefront sensors are commonly used tools for measuring the optical imperfections of the eye, but their acquisition speed is limited by their underlying camera hardware. The first aim of my thesis research is to create a fast method of ocular wavefront sensing such that we can measure the wavefront aberrations at numerous points across a wide visual field. In order to address aim one, we will develop a sparse Zernike reconstruction technique (SPARZER) that will enable Shack-Hartmann wavefront sensors to use as little as 1/10th of the data that would normally be required for an accurate wavefront reading. If less data needs to be acquired, then we can increase the speed at which wavefronts can be recorded.
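A minimal sketch of the sparse-sampling idea (an ordinary least-squares fit over a few Zernike modes using roughly 10% of the samples; SPARZER's actual reconstruction and sampling scheme are the subject of the thesis and are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated measurement grid over the unit pupil.
x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
mask = (x**2 + y**2) <= 1.0
r = np.sqrt(x[mask]**2 + y[mask]**2)
theta = np.arctan2(y[mask], x[mask])

# A few low-order Zernike modes (defocus, two astigmatisms, x-coma).
modes = np.column_stack([
    2 * r**2 - 1,
    r**2 * np.cos(2 * theta),
    r**2 * np.sin(2 * theta),
    (3 * r**3 - 2 * r) * np.cos(theta),
])

true_coeffs = np.array([0.5, -0.2, 0.1, 0.05])   # assumed aberration
wavefront = modes @ true_coeffs

keep = rng.random(wavefront.size) < 0.10         # keep ~1/10th of the data
est, *_ = np.linalg.lstsq(modes[keep], wavefront[keep], rcond=None)
print("estimated coefficients:", est)
```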

For my second aim, we will create a sophisticated optical model that reproduces the measured aberrations of the human eye. If we know how the average eye's optics distort light, then we can engineer ophthalmic imaging systems that preemptively cancel inherent ocular aberrations. This invention will help the retinal imaging community to design systems that are capable of acquiring high resolution images across a wide visual field. The proposed model eye is also of interest to the field of vision science as it aids in the study of how anatomy affects visual performance in the peripheral retina.

Using the optical model from aim two, we will design and reduce to practice a clinical OCT system that is capable of imaging a large (80°) field-of-view with enhanced visualization of the peripheral retina. A key aspect of this third and final aim is to make the imaging system compatible with standard clinical practices. To this end, we will incorporate sensorless adaptive optics in order to correct the inter- and intra-patient variability in ophthalmic aberrations. Sensorless adaptive optics will improve both the brightness (signal) and clarity (resolution) of features in the peripheral retina without affecting the size of the imaging system.
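As a hedged illustration of how a sensorless adaptive-optics loop can work (a generic coordinate hill-climb over mirror mode coefficients against an image-sharpness metric, with the hardware call faked; the system's actual optimization strategy may differ):

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    # A common sensorless-AO image-quality metric: sum of squared intensities.
    return float(np.sum(image ** 2))

def acquire_image(coeffs: np.ndarray) -> np.ndarray:
    # Placeholder for hardware: a real system would apply 'coeffs' to the
    # deformable mirror and grab an OCT frame. Faked here so the loop runs:
    # image quality peaks at a hidden "true" aberration.
    target = np.array([0.3, -0.1, 0.05])
    quality = np.exp(-np.sum((coeffs - target) ** 2))
    return np.full((64, 64), quality)

coeffs = np.zeros(3)                    # low-order mirror mode coefficients
step = 0.05
for _ in range(40):                     # simple coordinate hill-climb
    for i in range(coeffs.size):
        for sign in (+1.0, -1.0):
            trial = coeffs.copy()
            trial[i] += sign * step
            if sharpness(acquire_image(trial)) > sharpness(acquire_image(coeffs)):
                coeffs = trial
print("estimated correction:", coeffs)
```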

The proposed work should not only be a noteworthy contribution to the ophthalmic and engineering communities, but it should also strengthen our existing collaborations with the Duke Eye Center by advancing their capability to diagnose pathologies of the peripheral retina.

Relevance:

10.00%

Publisher:

Abstract:

Hydrogen has been called the fuel of the future, and as its non-renewable counterparts become scarce, the economic viability of hydrogen gains traction. The potential of hydrogen is marked by its high mass-specific energy density and wide applicability as a fuel in fuel cell vehicles and homes. However, hydrogen's volume must be reduced via pressurization or liquefaction in order to make it more transportable and volume-efficient. Currently, the vast majority of industrially produced hydrogen comes from steam reforming of natural gas. This practice yields low-pressure gas which must then be compressed at considerable cost, and it uses fossil fuels as a feedstock, leaving behind harmful CO and CO2 gases as a by-product. The second method used by industry to produce hydrogen gas is low-pressure electrolysis. In comparison, the electrolysis of water at low pressure can produce pure hydrogen and oxygen gas with no harmful by-products using only water as a feedstock, but the product will still need to be compressed before use. Multiple theoretical works agree that high-pressure electrolysis could reduce the energy losses due to product gas compression. However, these works openly admit that their projected gains are purely theoretical and ignore the practical limitations and resistances of a real-life high-pressure system. The goal of this work is to experimentally confirm the proposed thermodynamic gains of ultra-high-pressure electrolysis in alkaline solution and to characterize the behavior of a real-life high-pressure system.
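For context, the textbook ideal-gas relation behind those projected gains (a standard result, not a finding of this work): with both product gases delivered at pressure $P$, the reversible cell voltage grows only logarithmically,

$$E_\mathrm{rev}(P) \;=\; E_\mathrm{rev}(P_0) + \frac{RT}{2F}\ln\frac{p_{\mathrm{H_2}}\,p_{\mathrm{O_2}}^{1/2}}{P_0^{3/2}} \;=\; E_\mathrm{rev}(P_0) + \frac{3RT}{4F}\ln\frac{P}{P_0},$$

about 44 mV per decade of pressure at room temperature, which is why in-cell pressurization is expected, in the ideal case, to cost less energy than downstream mechanical compression.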

Relevance:

10.00%

Publisher:

Abstract:

Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis work utilizes blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, and aims to investigate the feasibility of extracting functional connectivity networks using different methods, as well as the dynamic variability within some of the methods. Furthermore, this work looks into producing valid networks using a sparsely sampled subset of the original data.

In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations, in exploring how the resting-state components interact in space and time.

Correlation is perhaps the simplest technique. Using this technique, resting-state patterns can be identified based on how similar a voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
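A minimal sketch of seed-based correlation mapping (synthetic stand-in data and an arbitrary seed location, purely for illustration):

```python
import numpy as np

data = np.random.rand(30, 30, 30, 200)    # stand-in 4-D BOLD data (x, y, z, t)
seed = data[14:16, 14:16, 14:16].mean(axis=(0, 1, 2))   # seed time profile

# Correlate every voxel's time course with the seed time course.
flat = data.reshape(-1, data.shape[-1])
flat = flat - flat.mean(axis=1, keepdims=True)
seed_c = seed - seed.mean()
corr = (flat @ seed_c) / (np.linalg.norm(flat, axis=1) * np.linalg.norm(seed_c))
corr_map = corr.reshape(data.shape[:3])   # voxels most similar to the seed
```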

Independent component analysis, the second technique, has established software implementations. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA is successfully able to reproduce resting-state connectivity patterns for both one subject and a 16-subject concatenated data set.
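One established implementation is scikit-learn's FastICA; a minimal spatial-ICA sketch on stand-in data (the thesis may have used dedicated neuroimaging tools instead):

```python
import numpy as np
from sklearn.decomposition import FastICA

bold = np.random.rand(5000, 200)          # stand-in data (voxels, time points)
ica = FastICA(n_components=20, random_state=0)
spatial_maps = ica.fit_transform(bold)    # (voxels, components): spatially
                                          # independent resting-state maps
time_courses = ica.mixing_                # (time points, components)
```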

Using principal component analysis, the dimensionality of the data is compressed to find the directions in which the variance of the data is most significant. This method uses the same basic matrix math as ICA, with a few important differences that will be outlined later in this text. Using this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.

To begin to investigate the dynamics of the functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to study the correlation coefficients across different window sizes over time. From this technique it is apparent that the level of correlation with the seed region is not static throughout the scan.

The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the full, continuous series of time points. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer of the time points in the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. In addition to boosted computational efficiency, the benefit of using a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
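A minimal sketch of the point-process idea (the z-score threshold is an assumption for illustration):

```python
import numpy as np

data = np.random.randn(30, 30, 30, 200)               # stand-in 4-D BOLD data
seed = data[14:16, 14:16, 14:16].mean(axis=(0, 1, 2))

z = (seed - seed.mean()) / seed.std()
events = np.where(z > 1.0)[0]                         # brief high-amplitude events
event_map = data[..., events].mean(axis=-1)           # network estimate from few points
print(f"used {events.size} of {seed.size} time points")
```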

This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity patterns. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed, and the method assumes that the DMN regions are correlated throughout the entire scan. Looking at the more dynamic aspects of correlation, changing patterns of correlation were evident. The last, point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.

Relevance:

10.00%

Publisher:

Abstract:

A substantial amount of information on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing data production, however, pushes data analytic platforms to their limits. This thesis proposes techniques for more efficient textual big-data analysis suitable for the Hadoop analytic platform. This research explores the direct processing of compressed textual data. The focus is on developing novel compression methods with a number of desirable properties to support text-based big-data analysis in distributed environments. The novel contributions of this work include the following. Firstly, a Content-aware Partial Compression (CaPC) scheme is developed. CaPC makes a distinction between informational and functional content, in which only the informational content is compressed. Thus, the compressed data is made transparent to existing software libraries, which often rely on functional content to work. Secondly, a context-free, bit-oriented compression scheme (Approximated Huffman Compression) based on the Huffman algorithm is developed. This uses a hybrid data structure that allows pattern searching in compressed data in linear time. Thirdly, several modern compression schemes have been extended so that the compressed data can be safely split with respect to logical data records in distributed file systems. Furthermore, an innovative two-layer compression architecture is used, in which each compression layer is appropriate for the corresponding stage of data processing. Peripheral libraries are developed that seamlessly link the proposed compression schemes to existing analytic platforms and computational frameworks, and also make the use of the compressed data transparent to developers. The compression schemes have been evaluated for a number of standard MapReduce analysis tasks using a collection of real-world datasets. In comparison with existing solutions, they have shown substantial improvement in performance and significant reduction in system resource requirements.
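A toy rendering of the CaPC distinction between functional and informational content (a dictionary code of my own for illustration; the thesis's scheme is more sophisticated):

```python
# Functional content (tabs, newlines) is left untouched so record readers and
# text tools still work; informational content is shortened via a code book.
codebook = {"ERROR": "\x01", "WARNING": "\x02", "INFO": "\x03"}
decode = {v: k for k, v in codebook.items()}

def capc_compress(line: str) -> str:
    return "\t".join(codebook.get(field, field) for field in line.split("\t"))

def capc_decompress(line: str) -> str:
    return "\t".join(decode.get(field, field) for field in line.split("\t"))

row = "2015-04-01\tERROR\tdisk full"
assert capc_decompress(capc_compress(row)) == row
```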

Relevance:

10.00%

Publisher:

Abstract:

The area west of the Antarctic Peninsula is a key region for studying and understanding the history of glaciation in the southern high latitudes during the Neogene with respect to variations of the western Antarctic continental ice sheet, variable sea-ice cover, induced eustatic sea level change, as well as consequences for the global climatic system (Barker, Camerlenghi, Acton, et al., 1999). Sites 1095, 1096, and 1101 were drilled on sediment drifts forming the continental rise to examine the nature and composition of sediments deposited under the influence of the Antarctic Peninsula ice sheet, which has repeatedly advanced to the shelf edge and subsequently released glacially eroded material on the continental shelf and slope (Barker et al., 1999). Mass gravity processes on the slope are responsible for downslope sediment transport by turbidity currents within a channel system between the drifts. Furthermore, bottom currents redistribute the sediments, which leads to the final build-up of the drift bodies (Rebesco et al., 1998). The high-resolution sedimentary sequences on the continental rise can be used to document the variability of continental glaciation and therefore allow us to assess the main factors that controlled sediment transport and depositional processes during glaciation periods, and their relationship to glacio-eustatic sea level changes. Site 1095 lies in 3840 m of water in a distal position on the northwestern lower flank of Drift 7, whereas Site 1096 lies in 3152 m of water in a more proximal position within Drift 7. Site 1101 is located at 3509 m water depth on the northwestern flank of Drift 4. All three sites have high sedimentation rates. The oldest sediments were recovered at Site 1095 (late Miocene; 9.7 Ma), whereas sediments of Pliocene age were recovered at Site 1096 (4.7 Ma) and at Site 1101 (3.5 Ma). The purpose of this work is to provide a data set of bulk sediment parameters such as CaCO3, total organic carbon (TOC), and coarse-fraction mass percentage (>63 µm) measured on the sediments collected from the continental rise of the western Antarctic Peninsula (Holes 1095A, 1095B, 1096A, 1096B, 1096C, and 1101A). This information can be used to understand the complex depositional processes and their implications for variations in the climatic system of the western Pacific Antarctic margin since 9.7 Ma (late Miocene). Coarse-fraction particles (125-500 µm) from the late Pliocene and Pleistocene (4.0 Ma to recent) sediments recovered from Hole 1095A were microscopically analyzed to gather more detailed information about their variability and composition through time. These data can yield information about changes in the potential source regions of the glacially eroded material transported during repeated periods of ice-sheet movement on the shelf.

Relevance:

10.00%

Publisher:

Abstract:

A recently developed novel biomass fuel pellet, the Q' Pellet, offers significant improvements over conventional white pellets, with characteristics comparable to those of coal. The Q' Pellet was initially created at bench scale using a proprietary die-and-punch design, in which the biomass was torrefied in-situ and then compressed. To bring the benefits of the Q' Pellet to a commercial level, it must be capable of being produced in a continuous process at a competitive cost. A prototype machine was previously constructed in a first effort to assess continuous processing of the Q' Pellet. The prototype torrefied biomass in a separate, ex-situ reactor and transported it into a rotary compression stage. Upon evaluation, parts of the prototype were found to be unsuccessful, requiring a redesign of both the material transport method and the compression mechanism. A process was developed in which material was torrefied ex-situ and extruded in a pre-compression stage. The extruded biomass overcame multiple handling issues that had been experienced with un-densified biomass, facilitating efficient material transport. Biomass was extruded directly into a novel, re-designed pelletizing die, which incorporated a removable cap, an ejection pin, and a die spring to accommodate a repeatable continuous process. Although after several uses the die required manual intervention due to minor design and manufacturing quality limitations, the system clearly demonstrated the capability of producing the Q' Pellet in a continuous process. Q' Pellets produced by the pre-compression method and pelletized in the re-designed die had an average dry-basis gross calorific value of 22.04 MJ/kg and a pellet durability index of 99.86%, and after 24 hours submerged in water they dried back to within 6.2% of their initial mass. This compares well with literature results of 21.29 MJ/kg, a 100% pellet durability index, and a <5% mass increase in a water submersion test. These results indicate that the methods developed herein are capable of producing Q' Pellets in a continuous process, with fuel properties competitive with coal.