873 results for Rutherford backscattering in channeling geometry
Abstract:
A discrepancy was found between enhanced hypotension and attenuated relaxation of conduit arteries in response to acetylcholine (ACh) and bradykinin (BK) in nitric oxide (NO)-deficient hypertension. The question is whether a similar phenomenon occurs in spontaneously hypertensive rats (SHR) with a different pathogenesis. Wistar rats, SHR, and SHR treated with NO donors [molsidomine (50 mg/kg) or pentaerythritol tetranitrate (100 mg/kg), twice a day, by gavage] were studied. After 6 weeks of treatment, systolic blood pressure (BP) was significantly increased in the experimental groups. Under anesthesia, the carotid artery was cannulated for BP recording and the jugular vein for drug administration. The iliac artery was used for in vitro studies and determination of geometry. Compared to controls, SHR showed a significantly enhanced (P < 0.01) hypotensive response to ACh (1 and 10 µg, 87.9 ± 6.9 and 108.1 ± 5.1 vs 35.9 ± 4.7 and 64.0 ± 3.3 mmHg) and BK (100 µg, 106.7 ± 8.3 vs 53.3 ± 5.2 mmHg). SHR receiving NO donors yielded similar results. In contrast, maximum relaxation of the iliac artery in response to ACh was attenuated in SHR (12.1 ± 3.6 vs 74.2 ± 8.6% in controls, P < 0.01). Iliac artery inner diameter also increased (680 ± 46 vs 828 ± 28 µm in controls, P < 0.01). Wall thickness, wall cross-sectional area, and the wall thickness/inner diameter ratio increased significantly (P < 0.01). No differences were found in this respect between SHR and SHR treated with NO donors. These findings demonstrate enhanced hypotension and attenuated relaxation of the conduit artery in response to NO activators in SHR and in SHR treated with NO donors, a response similar to that found in NO-deficient hypertension.
Abstract:
Permanent magnet synchronous machines with fractional-slot non-overlapping windings (FSPMSM), also known as tooth-coil winding permanent magnet synchronous machines (TCW PMSM), have been the subject of intensive research during the last decade. Many optimization routines have been explained and implemented in the literature to improve the characteristics of this machine type. This paper introduces a new technique for torque ripple minimization in TCW PMSM. The source of torque harmonics is also described. Low-order torque harmonics can be harmful in a variety of applications, such as direct-drive wind generators, direct-drive light-vehicle electric motors, and some high-precision servo applications. The reduction of the lowest-order torque ripple harmonics (6th and 12th) is realized by a machine geometry optimization technique using finite element analysis (FEA). The presented optimization technique includes stator geometry adjustment in TCW PMSMs with rotor-surface permanent magnets and with rotor-embedded permanent magnets. The influence of permanent magnet skewing on torque ripple reduction and cogging torque elimination was also investigated; it was applied both separately and together with the stator optimization technique. As a result, the reduction of some torque ripple harmonics was attained.
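As background for why the 6th and 12th orders are singled out (a standard result for three-phase machines, stated here in generic textbook form and not taken from this paper): interactions between the 5th/7th and 11th/13th back-EMF harmonics and the fundamental stator current produce torque pulsations at multiples of six times the electrical frequency, and the cogging-torque periodicity follows from the slot/pole combination. A minimal sketch in LaTeX notation:

    T_e(t) \approx T_0 + T_6 \cos(6\omega_e t + \varphi_6) + T_{12} \cos(12\omega_e t + \varphi_{12}),
    \qquad N_{\mathrm{cog}} = \operatorname{lcm}(Q_s,\, 2p),

where \omega_e is the electrical angular frequency, Q_s the number of stator slots, 2p the number of poles, and N_{\mathrm{cog}} the number of cogging-torque periods per mechanical revolution.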
Abstract:
In this thesis, the effect of focal point parameters in fiber laser welding of structural steel is studied. The goal is to establish relations between laser power, focal point diameter, and focal point position and the resulting quality, weld-bead geometry, and hardness of the welds. In the laboratory experiments, AB AH36 shipbuilding steel was welded in an I-butt joint configuration using an IPG YLS-10000 continuous-wave fiber laser. The quality of the welds produced was evaluated based on the standard SFS-EN ISO 13919-1. The weld-bead geometry was determined from the weld cross-sections, and Vickers hardness testing was used to measure hardness from the middle of the cross-sections. It was shown that all the studied focal point parameters have an effect on the quality, weld-bead geometry, and hardness of the welds produced.
Abstract:
The aim of this work was to evaluate the osmotic dehydration of sweet potato (Ipomoea batatas) using hypertonic sucrose solutions, with or without NaCl, at three different concentrations, at 40 °C. The highest water losses were obtained when the mixture of sucrose and NaCl was used. The addition of NaCl to the osmotic solutions increases the driving force of the process, and it was verified that the osmotic dehydration process is mainly influenced by changes in NaCl concentration; however, the positive effect of the salt-sucrose interaction on soluble solids also determined the decrease in solid gain when the solutes were at their maximum concentrations. Mass transfer kinetics were modeled using the Peleg, Fick, and Page equations, which fitted the experimental data well. Peleg's equation and Page's model gave the best fits and showed excellent predictive capacity for the water loss and salt gain data. The effective diffusivity, determined using Fick's second law applied to slice geometry, was found to be in the range from 3.82 × 10⁻¹¹ to 7.46 × 10⁻¹¹ m²/s for water loss and from 1.18 × 10⁻¹⁰ to 3.38 × 10⁻¹¹ m²/s for solid gain.
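For reference, the standard forms of the kinetic models named above (generic textbook expressions; the thesis's exact parameterization and fitted constants are not reproduced here):

    Peleg:  X(t) = X_0 \pm \frac{t}{k_1 + k_2 t}
    Page:   MR(t) = \exp(-k\, t^{\,n})
    Fick (infinite slab, first series term):  MR(t) \approx \frac{8}{\pi^2} \exp\!\left(-\frac{\pi^2 D_{\mathrm{eff}}\, t}{4\ell^2}\right)

where X is the water loss or solid gain, MR the dimensionless moisture (or solute) ratio, \ell the slab half-thickness, and D_{\mathrm{eff}} the effective diffusivity.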
Abstract:
Alfa Laval Aalborg Oy designs and manufactures waste heat recovery systems utilizing extended surfaces. The waste heat recovery boiler considered in this thesis is a water-tube boiler in which exhaust gas is used as the convective heat transfer medium and water or steam flowing inside the tubes is subject to cross-flow. This thesis aims to contribute to the design of the waste heat recovery boiler unit by developing a numerical model of the H-type finned tube bundle currently used by Alfa Laval Aalborg Oy to evaluate the gas-side heat transfer performance. The main objective is to identify weaknesses and potential areas of development in the current H-type finned tube design. In addition, numerical simulations for a total of 15 cases with varying geometric parameters are conducted to investigate how the heat transfer and pressure drop performance depend on the H-type fin geometry. The investigated geometric parameters include fin width and height, fin spacing, and fin thickness. A comparison between single and double tube configurations is also conducted. Based on the simulation results, the local heat transfer and flow behaviour of the H-type finned tube is presented, including boundary layer development between the fins, the formation of recirculation zones behind the tubes, and the local variations of flow velocity and temperature within the tube bundle and on the fin surface. Moreover, an evaluation of the effects of the various fin parameters on the heat transfer and pressure drop performance of the H-type finned tube bundle is provided. It was concluded that, of the studied parameters, fin spacing and fin width had the most significant effect on tube bundle performance, while the effect of fin thickness was the least important. Furthermore, the results suggested that the heat transfer performance would increase due to enhanced turbulence if the current double tube configuration were replaced with a single tube configuration, but further investigation and experimental measurements are required to validate the results.
Abstract:
This thesis addresses the coolability of porous debris beds in the context of severe accident management of nuclear power reactors. In a hypothetical severe accident at a Nordic-type boiling water reactor, the lower drywell of the containment is flooded in order to cool the core melt discharged from the reactor pressure vessel in a water pool. The melt is fragmented and solidified in the pool, ultimately forming a porous debris bed that generates decay heat. The properties of the bed determine the limiting value for the heat flux that can be removed from the debris to the surrounding water without the risk of re-melting. The coolability of porous debris beds has been investigated experimentally by measuring the dryout power in electrically heated test beds of different geometries. The geometries represent the debris bed shapes that may form in an accident scenario. The focus is especially on heap-like, realistic geometries which facilitate the multi-dimensional infiltration (flooding) of coolant into the bed. Spherical and irregular particles have been used to simulate the debris. The experiments have been modeled using 2D and 3D simulation codes applicable to fluid flow and heat transfer in porous media. Based on the experimental and simulation results, an interpretation of the dryout behavior in complex debris bed geometries is presented, and the validity of the codes and models for dryout predictions is evaluated. According to the experimental and simulation results, the coolability of the debris bed depends on both the flooding mode and the height of the bed. In the experiments, it was found that multi-dimensional flooding increases the dryout heat flux and coolability in a heap-shaped debris bed by 47–58% compared to the dryout heat flux of a classical, top-flooded bed of the same height. However, heap-like beds are higher than flat, top-flooded beds, which results in a larger steam flux at the top of the bed. This counteracts the effect of the multi-dimensional flooding. Based on the measured dryout heat fluxes, the maximum height of a heap-like bed can only be about 1.5 times the height of a top-flooded, cylindrical bed if the direct benefit of the multi-dimensional flooding is to be preserved. In addition, studies were conducted to evaluate the hydrodynamically representative effective particle diameter, which is applied in simulation models to describe debris beds that consist of irregular particles with considerable size variation. The results suggest that the effective diameter is small, closest to the mean diameter based on the number or length of the particles.
Abstract:
Optimization of quantum measurement processes has a pivotal role in carrying out better (more accurate or less disruptive) measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measurement processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair in which one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted sets of quantum devices, namely covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are presented, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices, namely boundariness, which measures how ‘close’ a quantum apparatus is to the algebraic boundary of the device set, and the robustness of incompatibility, which quantifies the level of incompatibility of a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and the robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
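One common formulation of the robustness of incompatibility mentioned above (a generic definition from the convex-analysis literature, given here only for orientation; the thesis may use a different normalization):

    R(A, B) = \min\Bigl\{\, t \ge 0 \;:\; \exists\ \text{devices } N_1, N_2 \text{ such that } \Bigl(\tfrac{A + t N_1}{1+t},\ \tfrac{B + t N_2}{1+t}\Bigr) \text{ is compatible} \Bigr\},

i.e., the least relative amount of noise that must be admixed to the pair (A, B) before a joint device for the noisy pair exists.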
Abstract:
Subshifts are sets of configurations over an infinite grid defined by a set of forbidden patterns. In this thesis, we study two-dimensional subshifts of finite type (2D SFTs), where the underlying grid is Z² and the set of forbidden patterns is finite. We are mainly interested in the interplay between the computational power of 2D SFTs and their geometry, examined through the concept of expansive subdynamics. 2D SFTs with expansive directions form an interesting and natural class of subshifts that lie between dimensions 1 and 2. An SFT that has only one non-expansive direction is called extremely expansive. We prove that in many respects, extremely expansive 2D SFTs display the totality of behaviours of general 2D SFTs. For example, we construct an aperiodic extremely expansive 2D SFT and we prove that the emptiness problem is undecidable even when restricted to the class of extremely expansive 2D SFTs. We also prove that every Medvedev class contains an extremely expansive 2D SFT, and we provide a characterization of the sets of directions that can be the set of non-expansive directions of a 2D SFT. Finally, we prove that for every computable sequence of 2D SFTs with an expansive direction, there exists a universal object that simulates all of the elements of the sequence. We use the so-called hierarchical, self-simulating or fixed-point method for constructing 2D SFTs, which has been previously used by Gács, Durand, Romashchenko and Shen.
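For orientation, the standard definitions behind the terms used above (textbook formulations, not quoted from the thesis):

    X_F = \{\, x \in A^{\mathbb{Z}^2} : \text{no translate of any pattern in } F \text{ occurs in } x \,\}

defines the 2D SFT given a finite alphabet A and a finite set F of forbidden patterns. A direction, represented by a line \ell through the origin, is expansive for X_F if there exists r > 0 such that the restriction of any configuration x \in X_F to the strip \{\, v \in \mathbb{Z}^2 : \operatorname{dist}(v, \ell) \le r \,\} determines x on all of \mathbb{Z}^2.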
Abstract:
Various studies in the field of econophysics have shown that fluid flow has analogous phenomena in financial market behavior, the typical parallel being drawn between energy in fluids and information in markets. However, the geometry of the manifold on which markets act out their dynamics (the corporate space) is not yet known. In this thesis, utilizing a seven-year time series of prices of the stocks used to compute the S&P500 index on the New York Stock Exchange, we have created a local chart of the corporate space with the goal of finding standing waves and other soliton-like patterns in the behavior of stock price deviations from the S&P500 index. After first calculating the correlation matrix of normalized stock price deviations from the S&P500 index, we performed a local singular value decomposition over a set of four different time windows as guides to the nature of the patterns that may emerge. It turns out that in almost all cases, each singular vector is essentially determined by a relatively small set of companies with large positive or negative weights on that singular vector. Over particular time windows, these weights are sometimes strongly correlated with at least one industrial sector, and certain sectors are more prone to fast dynamics whereas others sustain longer standing waves.
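A minimal sketch of the windowed correlation-matrix SVD described above, using synthetic price data in place of the (unavailable) S&P500 series; the window length, normalization, and selection of dominant companies are illustrative assumptions, not the thesis's actual choices:

    import numpy as np

    # Synthetic stand-in data: T daily prices for N stocks plus an index level.
    rng = np.random.default_rng(0)
    T, N = 500, 50
    prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, (T, N)), axis=0)
    index = prices.mean(axis=1)                    # stand-in for the S&P500 level

    # Normalized deviations of each stock from the index.
    dev = prices / prices[0] - (index / index[0])[:, None]
    dev = (dev - dev.mean(axis=0)) / dev.std(axis=0)

    window = 120                                   # one of several window lengths
    for start in range(0, T - window + 1, window):
        block = dev[start:start + window]
        corr = np.corrcoef(block, rowvar=False)    # N x N correlation matrix
        U, s, Vt = np.linalg.svd(corr)
        leading = Vt[0]                            # leading singular vector
        top = np.argsort(np.abs(leading))[::-1][:5]
        print(f"window at t={start}: dominant stocks {top.tolist()}, "
              f"weights {np.round(leading[top], 2).tolist()}")

The printout illustrates the observation in the abstract: a handful of companies carry most of the weight on each singular vector within a given window.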
Abstract:
The purpose of this study is to find out how laser-based Directed Energy Deposition (DED) processes can benefit from different types of monitoring. DED is a type of additive manufacturing process in which parts are manufactured in layers using metallic powder or metallic wire. DED processes can be used to manufacture parts that cannot be made with conventional manufacturing processes, to add new geometries to existing parts, or to minimize the scrap material that would result from machining the part. The aim of this study is to find out why laser-based DED processes are monitored, how they are monitored, and what devices are used for monitoring. This study has been carried out in the form of a literature review. During manufacturing, the DED process is highly sensitive to disturbances such as fluctuations in laser absorption, powder feed rate, temperature, humidity, or the reflectivity of the melt pool. These disturbances can cause fluctuations in the size of the melt pool or its temperature. The variations in the size of the melt pool affect the thickness of the individual layers, which has a direct impact on the final surface quality and dimensional accuracy of the parts. By collecting data on these fluctuations and adjusting the laser power in real time, the size of the melt pool and its temperature can be kept within a specified range, which leads to significant improvements in manufacturing quality. The main areas of monitoring can be divided into monitoring of the powder feed rate, the temperature of the melt pool, the height of the melt pool, and the geometry of the melt pool. Monitoring the powder feed rate is important when depositing different material compositions. Monitoring the temperature of the melt pool can give information about the microstructure and mechanical properties of the part. Monitoring the height and the geometry of the melt pool is an important factor in achieving the desired dimensional accuracy of the part. By combining multiple monitoring devices, the number of fluctuations that can be controlled is increased. In addition, by combining additive manufacturing with machining, the benefits of both processes could be utilized.
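To illustrate the closed-loop idea described above (keeping the melt pool within a specified range by adjusting laser power in real time), here is a deliberately simplified sketch; the first-order melt-pool model, the gains, and the power limits are all invented for illustration and are not taken from the reviewed literature:

    # Toy proportional feedback loop: trim laser power so a measured melt-pool
    # width stays at a setpoint.  Model, gains, and limits are illustrative only.
    def melt_pool_width(power_w, absorption=0.35, gain=0.004):
        """Toy steady-state model: width (mm) proportional to absorbed power."""
        return gain * absorption * power_w

    def control_step(width_meas, width_set, power_w, kp=800.0, p_min=500.0, p_max=4000.0):
        """Proportional correction of laser power, clamped to the source limits."""
        power_w += kp * (width_set - width_meas)
        return min(max(power_w, p_min), p_max)

    power = 2000.0                                  # initial laser power in watts
    for step in range(5):
        width = melt_pool_width(power)              # stands in for a camera/pyrometer reading
        power = control_step(width, width_set=2.5, power_w=power)
        print(f"step {step}: width {width:.2f} mm -> power {power:.0f} W")

In a real system the width measurement would come from a coaxial camera or pyrometer, and the controller would typically be more sophisticated than a single proportional term.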
Abstract:
Translation by Wylie, written down by Li Shan lan; Chinese prefaces by the two translators (1859); English preface, written in Shang hai by A. Wylie (July 1859). List of technical terms in English and Chinese. Engraved at the Mo hai house (1859). 18 volumes.
Abstract:
This thesis explores the debate and issues regarding the status of visual inferences in the optical writings of René Descartes, George Berkeley and James J. Gibson. It gathers arguments from across their works and synthesizes an account of visual depth perception that accurately reflects the larger, metaphysical implications of their philosophical theories. Chapters 1 and 2 address the Cartesian and Berkeleyan theories of depth perception, respectively. For Descartes and Berkeley the debate can be put in the following way: How is it possible that we experience objects as appearing outside of us, at various distances, if objects appear inside of us, in the representations of the individual's mind? Thus, the Descartes-Berkeley component of the debate takes place exclusively within a representationalist setting. Representational theories of depth perception are rooted in the scientific discovery that objects project a merely two-dimensional patchwork of forms on the retina. I call this the "flat image" problem. This poses the problem of depth in terms of a difference between two- and three-dimensional orders (i.e., a gap to be bridged by one inferential procedure or another). Chapter 3 addresses Gibson's ecological response to the debate. Gibson argues that the perceiver cannot be flattened out into a passive, two-dimensional sensory surface. Perception is possible precisely because the body and the environment already have depth. Accordingly, the problem cannot be reduced to a gap between two- and three-dimensional givens, a gap crossed with a projective geometry. The crucial difference is not one of dimensional degree. Chapter 3 explores this theme and attempts to excavate the empirical and philosophical suppositions that lead Descartes and Berkeley to their respective theories of indirect perception. Gibson argues that the notion of visual inference, which is necessary to substantiate representational theories of indirect perception, is highly problematic. To elucidate this point, the thesis steps into the representationalist tradition in order to show that the problems that arise within it demand a turn toward Gibson's information-based doctrine of ecological specificity (which is to say, the theory of direct perception). Chapter 3 concludes with a careful examination of Gibsonian affordances as the sole objects of direct perceptual experience. The final section provides an account of affordances that locates the moving, perceiving body at the heart of the experience of depth; an experience which emerges in the dynamical structures that cross the body and the world.
Abstract:
The rate of decrease in mean sediment size and weight per square metre along a 54 km reach of the Credit River was found to depend on variations in the channel geometry. The distribution of a specific sediment size consists of: (1) a transport zone; (2) an accumulation zone; and (3) a depletion zone. These zones shift downstream in response to downcurrent decreases in stream competence. Along a 0.285 km man-made pond within the Credit River study area, the sediment is also characterized by downstream-shifting accumulation zones for each finer clast size. The discharge required to initiate movement of 8 cm and 6 cm blocks in Cazenovia Creek is closely approximated by Baker and Ritter's equation. Incipient motion of blocks in Twenty Mile Creek is best predicted by Yalin's relation, which is more efficient in deeper flows. The transport distance of blocks in both streams depends on channel roughness and geometry. Natural abrasion and distribution of clasts may depend on the size of the surrounding sediment and variations in flow competence. The cumulative percent weight loss with distance of laboratory-abraded dolostone is described by a power function. The decrease in weight of dolostone follows a negative exponential. In the abrasion mill, chipping causes the high initial weight loss of dolostone; crushing and grinding produce most of the subsequent weight loss. Clast size was found to have little effect on the abrasion of dolostone within the diameter range considered. Increasing the speed of the mill increased the initial amount of weight loss but decreased the rate of abrasion. The abrasion mill was found to produce more weight loss than stream action. The maximum percent weight loss determined from laboratory and field abrasion data is approximately 40 percent of the weight loss observed along the Credit River. Selective sorting of sediment explains the remaining percentage not accounted for by abrasion.
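The two functional forms referred to above, written out in their generic shapes (the fitted coefficients belong to the thesis and are not reproduced here):

    \text{cumulative percent weight loss (power function):}\quad L(x) = a\, x^{b}
    \text{weight decrease (negative exponential, Sternberg-type):}\quad W(x) = W_0\, e^{-\alpha x}

where x is the transport (or abrasion) distance, W_0 the initial clast weight, and a, b, \alpha empirically fitted constants.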
Abstract:
The work in this thesis mainly deals with 1,1-enediamines and β-substituted enamines (push-pull olefins) and their reactions, leading to the formation of a number of heterocycles. Various β-substituted enamines were prepared by a 'one-pot synthesis' in which a 1,1-enediamine presumably acts as an intermediate. These enamines, various substituted crotonamides and propenamides, were made by using two different orthoesters, various secondary and primary amines, and cyanoacetamide. Their structures, mechanism of formation and geometry are discussed. A synthetic route to various unsymmetrically substituted pyridines was examined. Two substituted pyridinones were obtained by using two different β-substituted enamines and cyanoacetamide. In one case a dihydropyridine was isolated. This dihydropyridine, on heating under acidic conditions, gave a pyridinone, which confirmed the dihydropyridine as an intermediate in this pyridine synthesis. A new synthetic method was used to make highly substituted pyridinones, which involved the reaction of 1,1-enediamines with the β-substituted enamines. A one-pot synthesis and an interrupted one-pot synthesis were used to make these pyridinones. Two different orthoesters and three different secondary amines were used. Serendipitous formation of a pyrimidinone was observed when pyrrolidine was used as the secondary amine and triethyl orthopropionate was used as the orthoester. In all cases cyanoacetamide was used as the carbon acid. This pyridine synthesis was designed with a 1,1-enediamine as the Michael donor and the β-substituted enamines as Michael acceptors. Substituted ureas were obtained in two cases, which was a surprise. Some pyrimidines were made by reacting two substituted enamines with two different amidines. When benzamidine was used, the expected pyrimidines were obtained. However, when 2-benzyl-2-thiopseudourea (which is also an amidine) was used, only one of the two expected pyrimidines was obtained; in the other case, an additional substitution reaction took place in which the S-benzyl group was lost. An approach to quinazolone and benzothiadiazine synthesis is discussed. Two compounds were made from 1,1-dimorpholinoethene.
Abstract:
Optimization of wave functions in quantum Monte Carlo is a difficult task because the statistical uncertainty inherent to the technique makes the absolute determination of the global minimum difficult. To optimize these wave functions, we generate a large number of possible minima using many independently generated Monte Carlo ensembles and perform a conjugate gradient optimization. Then we construct histograms of the resulting nominally optimal parameter sets and "filter" them to identify which parameter sets "go together" to generate a local minimum. We follow with correlated-sampling verification runs to find the global minimum. We illustrate this technique for variance and variational energy optimization for a variety of wave functions for small systems. For such optimized wave functions we calculate the variational energy and variance as well as various non-differential properties. The optimizations are either on par with or superior to determinations in the literature. Furthermore, we show that this technique is sufficiently robust that for molecules one may determine the optimal geometry at the same time as one optimizes the variational energy.
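A minimal toy sketch of the "many independent ensembles, many nominal optima, histogram and filter" workflow described above, using a one-parameter trial function for the 1D harmonic oscillator and fixed-sample variance minimization; this illustrates the idea only and is not the thesis's actual code:

    import numpy as np
    from scipy.optimize import minimize

    # Trial function exp(-a*x^2) for H = -1/2 d^2/dx^2 + 1/2 x^2 gives the
    # local energy E_L(x) = a + x^2*(0.5 - 2*a^2); the exact parameter is a = 0.5.
    def local_energy(x, a):
        return a + x**2 * (0.5 - 2.0 * a**2)

    def variance_objective(a, x):
        # Variance of the local energy over a fixed set of configurations.
        return local_energy(x, a[0]).var()

    rng = np.random.default_rng(1)
    optima = []
    for _ in range(200):                        # 200 independently generated ensembles
        x = rng.normal(0.0, 1.0, size=400)      # fixed configurations for this ensemble
        res = minimize(variance_objective, x0=[0.8], args=(x,), method="CG")
        optima.append(res.x[0])

    # Histogram the nominally optimal parameters and keep the modal bin,
    # mimicking the "filter" step before correlated-sampling verification.
    counts, edges = np.histogram(optima, bins=30)
    k = counts.argmax()
    print(f"modal bin: [{edges[k]:.3f}, {edges[k+1]:.3f}], mean optimum {np.mean(optima):.3f}")

For this toy problem every ensemble converges close to the exact parameter; with realistic many-parameter wave functions the histograms spread out, which is what makes the filtering and correlated-sampling verification steps necessary.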