948 results for Algebraic decoding
Abstract:
Literature review
Abstract:
This doctoral thesis examines the solving capability of a number of solvers for optimization problems and reveals a number of difficulties in making a fair solver comparison. In addition, some improvements made to one of the solvers, called GAMS/AlphaECP, are presented. Optimization, in this context, means finding the best possible solution to a problem. The class of problems examined can be characterized as hard to solve and occurs in several industrial fields. The goal has been to investigate whether there is a solver that is universally faster and finds solutions of higher quality than any of the other solvers. The commercial optimization system GAMS (General Algebraic Modeling System) and extensive problem libraries have been used to compare the solvers. The improvements presented were made to the GAMS/AlphaECP solver, which is based on the Extended Cutting Plane (ECP) method. The ECP method has been developed mainly by Professor Tapio Westerlund in Process and Systems Engineering at Åbo Akademi University.
Abstract:
In this article a two-dimensional transient boundary element formulation based on the mass matrix approach is discussed. The implicit formulation of the method to deal with elastoplastic analysis is considered, as well as the way to deal with viscous damping effects. The time integration processes are based on the Newmark ρ and Houbolt methods, while the domain integrals for mass, elastoplastic and damping effects are carried out by the well-known cell approximation technique. The boundary element algebraic relations are also coupled with finite element frame relations to solve stiffened domains. Some examples to illustrate the accuracy and efficiency of the proposed formulation are also presented.
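The Newmark time integration named above can be sketched on a single-degree-of-freedom system; this is a minimal illustration with the standard average-acceleration parameters (β = 1/4, γ = 1/2), not the boundary element formulation itself:

```python
import math

def newmark_sdof(m, c, k, u0, v0, force, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark time integration for m*u'' + c*u' + k*u = f(t).
    Average-acceleration parameters (beta=1/4, gamma=1/2) are
    unconditionally stable for linear problems."""
    u, v = u0, v0
    a = (force(0.0) - c * v - k * u) / m          # initial acceleration
    keff = k + gamma / (beta * dt) * c + m / (beta * dt * dt)
    for n in range(n_steps):
        t1 = (n + 1) * dt
        # effective force assembled from the previous state
        feff = (force(t1)
                + m * (u / (beta * dt * dt) + v / (beta * dt)
                       + (1 / (2 * beta) - 1) * a)
                + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                       + dt * (gamma / (2 * beta) - 1) * a))
        u_new = feff / keff
        a_new = ((u_new - u) / (beta * dt * dt) - v / (beta * dt)
                 - (1 / (2 * beta) - 1) * a)
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
    return u, v

# Free undamped oscillator (m = k = 1): after one period u returns to u0
u, v = newmark_sdof(m=1.0, c=0.0, k=1.0, u0=1.0, v0=0.0,
                    force=lambda t: 0.0, dt=2 * math.pi / 1000, n_steps=1000)
```

For the undamped linear oscillator the average-acceleration scheme conserves amplitude exactly, so the displacement after one period is recovered to within the (very small) period-elongation error.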
Abstract:
A theory for the description of turbulent boundary layer flows over surfaces with a sudden change in roughness is considered. The theory resorts to the concept of displacement in origin to specify a wall function boundary condition for a kappa-epsilon model. An approximate algebraic expression for the displacement in origin is obtained from the experimental data by using the chart method of Perry and Joubert (J.F.M., vol. 17, pp. 193-122, 1963). This expression is subsequently included in the near wall logarithmic velocity profile, which is then adopted as a boundary condition for a kappa-epsilon modelling of the external flow. The results are compared with the lower atmospheric observations made by Bradley (Q. J. Roy. Meteo. Soc., vol. 94, pp. 361-379, 1968) as well as with velocity profiles extracted from a set of wind tunnel experiments carried out by Avelino et al. (7th ENCIT, 1998). The measurements are found to be in good agreement with the theoretical computations. The skin-friction coefficient was calculated according to the chart method of Perry and Joubert (J.F.M., vol. 17, pp. 193-122, 1963) and to a balance of the integral momentum equation. In particular, the growth of the internal boundary layer thickness obtained from the numerical simulation is compared with predictions of the experimental data calculated by two methods, the "knee" point method and the "merge" point method.
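A near-wall logarithmic profile with a displacement in origin can be sketched as follows; the functional form, sign convention, and constants here are illustrative assumptions, not the expressions fitted in the study:

```python
import math

def log_law_velocity(y, u_star, d, z0, kappa=0.41):
    """Logarithmic velocity profile over a rough wall, with a
    displacement in origin d shifting the effective wall position
    (illustrative form):
        u(y) = (u*/kappa) * ln((y + d) / z0)
    kappa is the von Karman constant, z0 a roughness length."""
    return (u_star / kappa) * math.log((y + d) / z0)

# The profile increases monotonically with height above the wall
u_low = log_law_velocity(y=0.1, u_star=0.3, d=0.01, z0=0.001)
u_high = log_law_velocity(y=0.5, u_star=0.3, d=0.01, z0=0.001)
```

In a kappa-epsilon computation, such a profile would be evaluated at the first grid point off the wall to supply the wall-function boundary condition.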
Abstract:
A non-isotropic turbulence model is extended and applied to three-dimensional stably stratified flows and dispersion calculations. The model is derived from the algebraic stress model (including wall proximity effects), but it retains the simplicity of the "eddy viscosity" concept of first-order models. The "modified k-epsilon" model is implemented in a three-dimensional numerical code. Once the flow is resolved, the predicted velocity and turbulence fields are interpolated onto a second grid and used to solve the concentration equation. To evaluate the model, various steady-state numerical solutions are compared with small-scale dispersion experiments which were conducted at the wind tunnel of Mitsubishi Heavy Industries, in Japan. Stably stratified flows and plume dispersion over three distinct idealized complex topographies (flat and hilly terrain) are studied. Vertical profiles of velocity and pollutant concentration are shown and discussed. Also, comparisons are made against the results obtained with the standard k-epsilon model.
Abstract:
One of the main complexities in the simulation of the nonlinear dynamics of rigid bodies consists in describing properly the finite rotations that they may undergo. It is well known that, to avoid singularities in the representation of the SO(3) rotation group, at least four parameters must be used. However, it is computationally expensive to use a four-parameter representation since, as only three of the parameters are independent, one needs to introduce constraint equations in the model, leading to differential-algebraic equations instead of ordinary differential ones. Three-parameter representations are numerically more efficient. Therefore, the objective of this paper is to evaluate numerically the influence of the parametrization and its singularities on the simulation of the dynamics of a rigid body. This is done through the analysis of a heavy top with a fixed point, using two three-parameter systems, Euler's angles and the rotation vector. Theoretical results were used to guide the numerical simulation and to assure that all possible cases were analyzed. The two parametrizations were compared using several integrators. The results show that Euler's angles lead to faster integration compared to the rotation vector. A singular case of Euler's angles, where the representation approaches a theoretical singular point, was analyzed in detail. It is shown that, contrary to what might be expected, 1) the numerical integration is very efficient, even more than for any other case, and 2) in spite of the uncertainty on the Euler's angles themselves, the body motion is well represented.
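The Euler-angle singularity discussed above can be made concrete numerically: for the classical 3-1-3 Euler angles used for the heavy top, the matrix mapping angle rates to body angular velocity has determinant -sin(θ), so it degenerates as the second angle approaches 0 or π. A minimal sketch (the heavy-top equations themselves are not reproduced here):

```python
import math

def euler_313_kinematic_matrix(theta, psi):
    """Matrix B mapping 3-1-3 Euler-angle rates (phi', theta', psi')
    to the body angular velocity, in its standard textbook form."""
    return [
        [math.sin(theta) * math.sin(psi),  math.cos(psi), 0.0],
        [math.sin(theta) * math.cos(psi), -math.sin(psi), 0.0],
        [math.cos(theta),                  0.0,           1.0],
    ]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Away from the singularity the map is invertible: det(B) = -sin(theta)
d_regular = det3(euler_313_kinematic_matrix(1.0, 0.7))
# As theta -> 0 the determinant vanishes: the angle rates blow up
d_singular = det3(euler_313_kinematic_matrix(1e-6, 0.3))
```

Inverting B to recover angle rates from a given angular velocity is what fails near θ = 0, which is the singular case examined in the paper.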
Abstract:
This paper gives a detailed presentation of the Substitution-Newton-Raphson method, suitable for large sparse non-linear systems. It combines the Successive Substitution method and the Newton-Raphson method in such a way as to take the best advantages of both, keeping the convergence features of the Newton-Raphson method with the low memory and time requirements of the Successive Substitution schemes. The large system is solved employing few effective variables, using the greatest possible part of the model equations in substitution fashion to fix the remaining variables, while maintaining the convergence characteristics of the Newton-Raphson method. The methodology is exemplified through a simple algebraic system, and applied to a simple thermodynamic, mechanical and heat transfer model of a single-stage vapor compression refrigeration system. Three distinct approaches for reproducing the thermodynamic properties of the refrigerant R-134a are compared: linear interpolation from tabulated data, the use of fitted polynomial curves, and the use of functions derived from the Helmholtz free energy.
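The substitution/Newton combination can be sketched on a toy two-equation system (the system and tolerances here are illustrative, not the refrigeration model): one equation is used in substitution form to fix x, leaving a single effective Newton variable y.

```python
import math

def solve_hybrid(tol=1e-10, max_iter=50):
    """Sketch of the substitution/Newton idea on a toy system:
        x = cos(y)            (used in substitution form to fix x)
        y - 0.5*x - 1 = 0     (residual driven to zero by Newton on y)
    Only y is treated as an effective Newton variable."""
    def residual(y):
        x = math.cos(y)            # substitution step fixes x from y
        return y - 0.5 * x - 1.0
    y = 0.0
    for _ in range(max_iter):
        r = residual(y)
        h = 1e-7
        drdy = (residual(y + h) - r) / h   # finite-difference Jacobian
        y_next = y - r / drdy              # Newton update on y only
        if abs(y_next - y) < tol:
            y = y_next
            break
        y = y_next
    return y, math.cos(y)

y, x = solve_hybrid()
```

The Newton iteration sees a one-dimensional problem even though the model has two unknowns; in a large sparse system this is what keeps the Jacobian small.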
Abstract:
In this paper, the optimum design of 3R manipulators is formulated and solved by using an algebraic formulation of the workspace boundary. A manipulator design can be approached as a problem of optimization, in which the objective functions are the size of the manipulator and the workspace volume, and the constraints can be given as a prescribed workspace volume. The numerical solution of the optimization problem is investigated by using two different numerical techniques, namely, sequential quadratic programming and simulated annealing. Numerical examples illustrate a design procedure and show the efficiency of the proposed algorithms.
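Of the two numerical techniques named, simulated annealing is easy to sketch generically; the one-dimensional objective below is illustrative only, not the 3R workspace objective:

```python
import math
import random

def simulated_annealing(f, x0, lo, hi, steps=5000, t0=1.0, seed=1):
    """Generic simulated-annealing minimizer over [lo, hi].
    Worse candidates are accepted with probability exp(-delta/t),
    with temperature t decaying as t0/k."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, steps + 1):
        t = t0 / k                                   # cooling schedule
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.1)))
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc                         # accept the move
            if fx < fbest:
                best, fbest = x, fx                  # track the best so far
    return best, fbest

# Toy objective with its minimum at x = 2
best, fbest = simulated_annealing(lambda x: (x - 2.0) ** 2,
                                  x0=-4.0, lo=-5.0, hi=5.0)
```

Because acceptance of uphill moves decays with the temperature, the search behaves like a random walk early on and like a greedy descent later, which is what lets it escape local minima in the harder workspace-volume objectives.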
Abstract:
The assembly and maintenance of the International Thermonuclear Experimental Reactor (ITER) vacuum vessel (VV) is highly challenging since the tasks performed by the robot involve welding, material handling, and machine cutting from inside the VV. The VV is made of stainless steel, which has poor machinability and tends to work-harden very rapidly, and all the machining operations need to be carried out from inside the ITER VV. A general industrial robot cannot be used due to its poor stiffness in the heavy-duty machining process, and this will cause many problems, such as poor surface quality, tool damage, and low accuracy. Therefore, one of the most suitable options should be a lightweight mobile robot which is able to move around inside the VV and perform different machining tasks by replacing different cutting tools. Reducing the mass of the robot manipulators offers many advantages: reduced material costs, reduced power consumption, the possibility of using smaller actuators, and a higher payload-to-robot weight ratio. Offsetting these advantages, the lighter robot is more flexible, which makes it more difficult to control. To achieve good machining surface quality, the tracking of the end effector must be accurate, and an accurate model for the more flexible robot must be constructed. This thesis studies the dynamics and control of a 10 degree-of-freedom (DOF) redundant hybrid robot (a 4-DOF serial mechanism and a 6-DOF 6-UPS hexapod parallel mechanism), hydraulically driven with flexible rods, under the influence of machining forces. Firstly, the flexibility of the bodies is described using the floating frame of reference formulation (FFRF). A finite element model (FEM) provided the Craig-Bampton (CB) modes needed for the FFRF. A dynamic model of the system of six closed-loop mechanisms was assembled using the constrained Lagrange equations and the Lagrange multiplier method.
Subsequently, the reaction forces between the parallel and serial parts were used to study the dynamics of the serial robot. A PID control based on position predictions was implemented independently to control the hydraulic cylinders of the robot. Secondly, in machining, to achieve greater end effector trajectory tracking accuracy for surface quality, a robust control of the actuators for the flexible link has to be deduced. This thesis investigates two schemes of intelligent control for the hydraulically driven parallel mechanism based on the dynamic model: (1) a fuzzy-PID self-tuning controller combining conventional PID control with fuzzy logic, and (2) adaptive neuro-fuzzy inference system-PID (ANFIS-PID) self-tuning of the gains of the PID controller; both are implemented independently to control each hydraulic cylinder of the parallel mechanism based on rod length predictions. The serial component of the hybrid robot can be analyzed using the equilibrium of reaction forces at the universal joint connections of the hexa-element. To achieve precise positional control of the end effector for maximum precision machining, the hydraulic cylinder should be controlled to hold the hexa-element. Thirdly, a finite element approach of multibody systems using the Special Euclidean group SE(3) framework is presented for a parallel mechanism with flexible piston rods under the influence of machining forces. The flexibility of the bodies is described using the nonlinear interpolation method with an exponential map. The equations of motion take the form of a differential-algebraic equation on a Lie group, which is solved using a Lie group time integration scheme. The method relies on the local description of motions, so that it provides a singularity-free formulation, and no parameterization of the nodal variables needs to be introduced.
The flexible slider constraint is formulated using a Lie group and used for modeling a flexible rod sliding inside a cylinder. The dynamic model of the system of six closed-loop mechanisms was assembled using Hamilton's principle and the Lagrange multiplier method. A linearized hydraulic control system based on rod length predictions was implemented independently to control the hydraulic cylinders. Consequently, the results of the simulations demonstrating the behavior of the robot machine are presented for each case study. In conclusion, this thesis studies the dynamic analysis of a special hybrid (serial-parallel) robot for the above-mentioned special task involving the ITER and investigates different control algorithms that can significantly improve machining performance. These analyses and results provide valuable insight into the design and control of the parallel robot with flexible rods.
Abstract:
A subshift is a set of infinite one- or two-way sequences over a fixed finite set, defined by a set of forbidden patterns. In this thesis, we study subshifts in the topological setting, where the natural morphisms between them are ones defined by a (spatially uniform) local rule. Endomorphisms of subshifts are called cellular automata, and we call the set of cellular automata on a subshift its endomorphism monoid. It is known that the set of all sequences (the full shift) allows cellular automata with complex dynamical and computational properties. We are interested in subshifts that do not support such cellular automata. In particular, we study countable subshifts, minimal subshifts and subshifts with additional universal algebraic structure that cellular automata need to respect, and investigate certain criteria of 'simplicity' of the endomorphism monoid, for each of them. In the case of countable subshifts, we concentrate on countable sofic shifts, that is, countable subshifts defined by a finite state automaton. We develop some general tools for studying cellular automata on such subshifts, and show that nilpotency and periodicity of cellular automata are decidable properties, and positive expansivity is impossible. Nevertheless, we also prove various undecidability results, by simulating counter machines with cellular automata. We prove that minimal subshifts generated by primitive Pisot substitutions only support virtually cyclic automorphism groups, and give an example of a Toeplitz subshift whose automorphism group is not finitely generated. In the algebraic setting, we study the centralizers of CA, and group and lattice homomorphic CA. In particular, we obtain results about centralizers of symbol permutations and bipermutive CA, and their connections with group structures.
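A cellular automaton in this sense is exactly a shift-commuting map defined by a spatially uniform local rule (the Curtis-Hedlund-Lyndon characterization). A minimal sketch on periodic configurations of the binary full shift:

```python
def apply_ca(config, rule, radius):
    """Apply a cellular automaton given by a local rule of the given
    radius to a periodic (circular) configuration: each output symbol
    depends only on the window of width 2*radius+1 around that cell."""
    n = len(config)
    return tuple(
        rule(tuple(config[(i + j) % n] for j in range(-radius, radius + 1)))
        for i in range(n)
    )

def shift(config):
    """Left shift of a periodic configuration."""
    return config[1:] + config[:1]

# The classical additive (XOR) CA of radius 1 over the alphabet {0, 1}
xor_rule = lambda w: (w[0] + w[2]) % 2

c = (1, 0, 0, 1, 1, 0, 1, 0)
# Curtis-Hedlund-Lyndon: a cellular automaton commutes with the shift
left = apply_ca(shift(c), xor_rule, 1)
right = shift(apply_ca(c, xor_rule, 1))
```

Restricting which configurations are allowed (the forbidden patterns of a subshift) restricts which local rules yield well-defined endomorphisms, which is what the endomorphism monoid studied in the thesis captures.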
Abstract:
Ribonucleic acid (RNA) has many biological roles in cells: it takes part in coding, decoding, regulating and expressing of the genes, and has the capacity to work as a catalyst in numerous biological reactions. These qualities make RNA an interesting object of various studies. Development of useful tools with which to investigate RNA is a prerequisite for more advanced research in the field. One such tool may be the artificial ribonucleases, which are oligonucleotide conjugates that sequence-selectively cleave complementary RNA targets. This thesis is aimed at developing new efficient metal-ion-based artificial ribonucleases: on one hand, to solve the challenges related to solid-supported synthesis of metal-ion-binding conjugates of oligonucleotides, and on the other hand, to quantify their ability to cleave various oligoribonucleotide targets in a pre-designed sequence-selective manner. In this study several artificial ribonucleases based on the cleaving capability of a metal-ion-chelated azacrown moiety were designed and synthesized successfully. The most efficient ribonucleases were the ones with two azacrowns close to the 3′-end of the oligonucleotide strand. Different transition metal ions were introduced into the azacrown moiety and, among them, the Zn2+ ion was found to be better than the Cu2+ and Ni2+ ions.
Abstract:
Literature review
Abstract:
Single-photon emission computed tomography (SPECT) is a non-invasive imaging technique which provides information on the functional states of tissues. SPECT imaging has been used as a diagnostic tool in several human disorders and can be used in animal models of diseases for physiopathological, genomic and drug discovery studies. However, most of the experimental models used in research involve rodents, which are at least one order of magnitude smaller in linear dimensions than man. Consequently, images of targets obtained with conventional gamma-cameras and collimators have poor spatial resolution and statistical quality. We review the methodological approaches developed in recent years in order to obtain images of small targets with good spatial resolution and sensitivity. Multipinhole, coded-mask- and slit-based collimators are presented as alternative approaches to improve image quality. In combination with appropriate decoding algorithms, these collimators permit a significant reduction of the time needed to register the projections used to make 3-D representations of the volumetric distribution of the target's radiotracers. Simultaneously, they can be used to minimize the artifacts and blurring that arise when single-pinhole collimators are used. Representative images are presented, which illustrate the use of these collimators. We also comment on the use of coded masks to attain tomographic resolution with a single projection, as discussed by some investigators since their introduction to obtain near-field images. We conclude this review by showing that the use of appropriate hardware and software tools adapted to conventional gamma-cameras can be of great help in obtaining relevant functional information in experiments using small animals.
Abstract:
X-ray computed log tomography has traditionally been applied for qualitative reconstructions. In most cases, a series of consecutive slices of the timber are scanned to estimate the 3D image reconstruction of the entire log. However, unexpected movement of the timber under study degrades the quality of the image reconstruction, since the position and orientation of some scanned slices can be incorrectly estimated. In addition, the reconstruction time remains a significant challenge for practical applications. The present study investigates the possibility of employing modern physics engines for the problem of estimating the position of a moving rigid body and its scanned slices which are subject to X-ray computed tomography. The current work includes implementations of the extended Kalman filter and an algebraic reconstruction method for fan-beam computed tomography. In addition, modern techniques such as NVidia PhysX and CUDA are used in the current study. As a result, it is numerically shown that it is possible to apply the extended Kalman filter together with a real-time physics engine, known as PhysX, in order to determine the position of a moving object. It is shown that the position of the rigid body can be determined based only on reconstructions of its slices. However, the simulation of the body movement is sometimes subject to error during Kalman filtering, as PhysX is not always able to continue simulating the movement properly because of incorrect state estimation.
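The algebraic reconstruction step can be sketched as Kaczmarz sweeps over the ray equations A x = b; the tiny 2x2 "image" below, with row and column ray sums, is illustrative only, not the fan-beam geometry of the study:

```python
def art_reconstruct(A, b, n_unknowns, sweeps=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz sweeps) for a
    linear system A x = b: each sweep projects the estimate onto the
    hyperplane of every ray equation in turn. A is a list of rows
    (ray-weight vectors), b the measured ray sums."""
    x = [0.0] * n_unknowns
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            if norm2 == 0.0:
                continue
            c = relax * (bi - dot) / norm2
            x = [xi + c * r for xi, r in zip(x, row)]
    return x

# 2x2 image [[1, 2], [3, 4]], measured by horizontal and vertical rays
A = [[1, 1, 0, 0],   # top-row sum
     [0, 0, 1, 1],   # bottom-row sum
     [1, 0, 1, 0],   # left-column sum
     [0, 1, 0, 1]]   # right-column sum
b = [3.0, 7.0, 4.0, 6.0]
x = art_reconstruct(A, b, 4)   # converges to [1.0, 2.0, 3.0, 4.0]
```

Starting from zero, Kaczmarz converges to the minimum-norm solution of a consistent system, which for this example is the true image; in real CT the rows are the fan-beam ray weights through the voxel grid.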
Abstract:
Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was always done through human interpretations, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon; first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage indicates high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software so that we can prove its correctness, together with the desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that we can separate the modelling of one node from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B.
Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms, and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity compared to creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. The modelling and proving in this thesis is mostly tool-based. This demonstrates the advance of formal methods as well as their increased reliability, and thus advocates for their more widespread usage in the future.