945 results for Compression ignition (CI)
Abstract:
Faculty of Geographical and Geological Sciences (Wydział Nauk Geograficznych i Geologicznych)
Abstract:
Faculty of History (Wydział Historyczny)
Abstract:
Poster presented at the XIII National Forum of Scientific and Technical Information (XIII Krajowe Forum Informacji Naukowej i Technicznej) in Zakopane.
Abstract:
The need to cluster unknown data in order to better understand its relationship to known data is prevalent throughout science. Beyond a better understanding of the data itself or learning about a new unknown object, cluster analysis can help with processing data, data standardization, and outlier detection. Most clustering algorithms are based on known features or expectations, such as the popular partition-based, hierarchical, density-based, grid-based, and model-based algorithms. The choice of algorithm depends on many factors, including the type of data and the reason for clustering; nearly all rely on some known properties of the data being analyzed. Recently, Li et al. proposed a new universal similarity metric that requires no prior knowledge about the objects. Their similarity metric is based on the Kolmogorov complexity of objects, that is, an object's minimal description. While the Kolmogorov complexity of an object is not computable, in "Clustering by Compression" Cilibrasi and Vitanyi use common compression algorithms to approximate the universal similarity metric and cluster objects with high success. Unfortunately, clustering using compression does not trivially extend to higher dimensions. Here we outline a method to adapt their procedure to images. We test these techniques on images of letters of the alphabet.
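A minimal sketch of the approximation in question, the normalized compression distance (NCD) from "Clustering by Compression", with zlib as a stand-in compressor (any real compressor only approximates the uncomputable Kolmogorov complexity, and the helper names here are illustrative):

    import zlib

    def approx_k(data: bytes) -> int:
        # Compressed length as a computable stand-in for Kolmogorov complexity.
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
        cx, cy, cxy = approx_k(x), approx_k(y), approx_k(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

Pairwise NCD values near 0 indicate similar objects and values near 1 dissimilar ones, so the resulting distance matrix can be fed to any standard clustering routine.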
Abstract:
A model of telescoping is proposed that assumes no systematic errors in dating. Rather, the overestimation of recent occurrences of events arises from the combination of three factors: (1) retention is greater for recent events; (2) errors in dating, though unbiased, increase linearly with the time since the dated event; and (3) intrusions often occur from events outside the period being asked about, but such intrusions do not come from events that have not yet occurred. In Experiment 1, we found that recall for colloquia fell markedly over a 2-year interval, that the magnitude of errors in psychologists' dating of the colloquia increased at a rate of 0.4 days per day of delay, and that the direction of the dating error was toward the middle of the interval. In Experiment 2, the model used the retention function and dating errors from the first study to predict the distribution of the actual dates of colloquia recalled as being within a 5-month period. In Experiment 3, the findings of the first study were replicated with colloquia given by, instead of for, the subjects.
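The interplay of the three factors can be illustrated with a small Monte Carlo sketch. The retention curve, event horizon, and window below are illustrative stand-ins, not the paper's fitted values; only the 0.4 days-per-day error growth is taken from Experiment 1:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    age = rng.uniform(0, 730, n)                   # true days since each event
    recalled = rng.random(n) < np.exp(-age / 365)  # illustrative retention curve
    error = rng.normal(0.0, 0.4 * age)             # unbiased, grows with delay
    reported = np.maximum(age + error, 0)          # no intrusions from the future

    window = 150                                   # "within the last 5 months"
    actual = np.sum(recalled & (age <= window))
    claimed = np.sum(recalled & (reported <= window))
    print(f"actually in window: {actual}, reported in window: {claimed}")

Even though every dating error is unbiased, more old events drift into the recent window than recent events drift out of it, reproducing the apparent overestimation of recent occurrences.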
Abstract:
We have explored isotropically jammed states of semi-2D granular materials through cyclic compression. In each compression cycle, systems of either identical ellipses or bidisperse disks transition between jammed and unjammed states. We determine the evolution of the average pressure P and structure through consecutive jammed states. We observe a transition point ϕ_m above which P persists over many cycles; below ϕ_m, P relaxes slowly. The relaxation time scale associated with P increases with packing fraction, while the relaxation time scale for collective particle motion remains constant. The collective motion of the ellipses is hindered compared to disks because of the rotational constraints on elliptical particles.
Abstract:
The compression properties of octave-spanning supercontinuum spectra generated in photonic crystal fibers are studied using stochastic nonlinear Schrödinger equation simulations. The conditions under which sub-5 fs pulses can be obtained after compression are identified. © 2004 Optical Society of America.
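For orientation, a bare-bones split-step Fourier integrator for the scalar NLSE is sketched below. The full supercontinuum model additionally requires higher-order dispersion, self-steepening, the Raman response, and the stochastic noise terms, none of which are included here; the function and parameter names are illustrative:

    import numpy as np

    def split_step_nlse(a0, dt, dz, steps, beta2, gamma):
        # Symmetric split-step Fourier method for
        #   dA/dz = -i*(beta2/2)*d2A/dT2 + i*gamma*|A|^2*A
        n = a0.size
        w = 2 * np.pi * np.fft.fftfreq(n, d=dt)           # angular frequencies
        half_disp = np.exp(1j * (beta2 / 2) * w**2 * dz / 2)
        a = a0.astype(complex)
        for _ in range(steps):
            a = np.fft.ifft(half_disp * np.fft.fft(a))    # half dispersion step
            a *= np.exp(1j * gamma * np.abs(a)**2 * dz)   # full nonlinear step
            a = np.fft.ifft(half_disp * np.fft.fft(a))    # half dispersion step
        return a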
Abstract:
Fractal image compression is a relatively recent image compression method. Its extension to a sequence of motion images is important in video compression applications. There are two basic fractal compression methods, namely the cube-based and the frame-based methods, both commonly used in industry. However, each method has its own advantages and disadvantages. This paper proposes a hybrid algorithm that combines the advantages of the two methods in order to produce a good compression algorithm for the video industry. Experimental results show that the hybrid algorithm improves both the compression ratio and the quality of the decompressed images.
Abstract:
Fractal video compression is a relatively new video compression method. Its attraction is due to its high compression ratio and simple decompression algorithm, but its computational complexity is high; as a result, parallel algorithms on high-performance machines are one way out. In this study, we partition the matching search, which occupies the majority of the work in a fractal video compression process, into small tasks and implement them in two distributed computing environments, one using DCOM and the other using .NET Remoting technology, based on a local area network consisting of loosely coupled PCs. Experimental results show that the parallel algorithm is able to achieve a high speedup in these distributed environments.
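The unit of work being distributed is, in essence, the per-block least-squares match sketched below; the function name and block handling are hypothetical, but this is the search that dominates fractal encoding time:

    import numpy as np

    def best_match(range_block, domains):
        # For one range block R, find the (downsampled) domain block D and
        # affine map (scale s, offset o) minimizing ||s*D + o - R||^2.
        r = range_block.ravel().astype(float)
        best = (-1, 0.0, 0.0, np.inf)
        for i, dom in enumerate(domains):
            d = dom.ravel().astype(float)
            var = d.var()
            s = 0.0 if var == 0 else np.cov(d, r, bias=True)[0, 1] / var
            s = float(np.clip(s, -1.0, 1.0))    # keep the map contractive
            o = r.mean() - s * d.mean()
            err = float(np.sum((s * d + o - r) ** 2))
            if err < best[3]:
                best = (i, s, o, err)
        return best                             # (domain index, s, o, error)

Each range block's search is independent of every other's, which is why the work farms out cleanly to loosely coupled PCs.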
Abstract:
Fractal image compression is a relatively recent image compression method, which is simple to use and often leads to a high compression ratio. These advantages make it suitable for situations in which an image is encoded once and decoded many times, as required in video on demand, archive compression, etc. There are two fundamental fractal compression methods, namely the cube-based and the frame-based methods, both commonly studied. However, each has advantages and disadvantages. This paper gives an extension of the fundamental compression methods based on the concept of adaptive partitioning. Experimental results show that algorithms based on adaptive partitioning can obtain a much higher compression ratio than algorithms based on fixed partitioning while maintaining the quality of the decompressed images.
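One hypothetical way to realize adaptive partitioning is a quadtree split driven by the matching error; block_error below stands in for whatever per-block match metric the encoder actually uses:

    def adaptive_partition(block_error, x, y, size, min_size, tol):
        # Keep a block whole if it already matches well (or is as small as
        # allowed); otherwise split it into four quadrants and recurse, so
        # smooth regions keep large blocks and detailed regions get small ones.
        if size <= min_size or block_error(x, y, size) <= tol:
            return [(x, y, size)]
        h = size // 2
        quads = [(x, y), (x + h, y), (x, y + h), (x + h, y + h)]
        blocks = []
        for qx, qy in quads:
            blocks += adaptive_partition(block_error, qx, qy, h, min_size, tol)
        return blocks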
Abstract:
The intrinsically independent features of the optimal codebook cube searching process in fractal video compression systems are examined and exploited. The design of a suitable parallel algorithm reflecting this concept is presented. The Message Passing Interface (MPI) is chosen as the communication tool for implementing the parallel algorithm on distributed-memory parallel computers. Experimental results show that the parallel algorithm is able to reduce the compression time and achieve a high speed-up without changing the compression ratio or the quality of the decompressed image. A scalability test was also performed, and the results show that this parallel algorithm is scalable.
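A minimal mpi4py sketch of this decomposition, with a deliberately simplified nearest-codebook match standing in for the full affine search (all sizes and names are illustrative):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        domains = np.random.rand(256, 8, 8)                    # stand-in codebook
        blocks = np.array_split(np.random.rand(1024, 8, 8), size)
    else:
        domains, blocks = None, None
    domains = comm.bcast(domains, root=0)    # every rank gets the full codebook
    local = comm.scatter(blocks, root=0)     # range blocks are split evenly

    # Independent searches: minimal sum-of-squares match per local block.
    ids = [int(np.argmin(((domains - b) ** 2).sum(axis=(1, 2)))) for b in local]
    all_ids = comm.gather(ids, root=0)
    if rank == 0:
        print(sum(len(part) for part in all_ids), "blocks matched")

Because the searches never communicate, the same matches are produced regardless of the number of processes, which is consistent with the compression ratio and decoded quality being unchanged by parallelization.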
Abstract:
The authors' experience in the treatment of greyscale video compression using fractals is summarized and compared with other research in the same field. Experience with parallel and distributed computing is also discussed.
Abstract:
Aircraft fuselages are complex assemblies of thousands of components, and as a result simulation models are highly idealised. In the typical design process, a coarse FE model is used to determine loads within the structure. The size of the model and the number of load cases necessitate that only linear static behaviour is considered. This paper reports on the development of a modelling approach to increase the accuracy of the global model, accounting for variations in stiffness due to non-linear structural behaviour. The strategy is based on representing a fuselage sub-section with a single non-linear element. Large portions of fuselage structure are represented by connecting these non-linear elements together to form a framework. The non-linear models are very efficient, reducing computational time significantly.