Abstract:
Our objective was to characterize the modulation of the activity of Saccharomyces cerevisiae alkaline phosphatases (ALPs) by classic inhibitors of ALP activity, cholesterol and steroid hormones, in order to identify catalytic similarities between yeast and mammalian ALPs. S. cerevisiae expresses two ALPs, coded for by the PHO8 and PHO13 genes. The product of the PHO8 gene is repressible by Pi in the medium. ALP activity in homogenates of yeast grown in low or high phosphate medium (lPiALP or hPiALP, respectively) was determined with p-nitrophenylphosphate as substrate at pH 10.4. Activation of hPiALP was observed with 5 mM L-amino acids (L-homoarginine, 186%; L-leucine, 155%; L-phenylalanine, 168%) and with 1 mM levamisole (122%; percentage values of recovered activity in comparison to control). EDTA (5 mM) and vanadate (1 mM) distinctly inhibited hPiALP (2 and 20%, respectively). L-homoarginine (5 mM) had a lower activating effect on lPiALP (166%) and was the strongest hPiALP activator. Corticosterone (5 mM) inhibited hPiALP to 90%, but no effect was observed in low phosphate medium. Cholesterol, β-estradiol and progesterone also had different effects on lPiALP and hPiALP: a concentration-dependent activation of lPiALP relative to hPiALP was evident with all three compounds, especially with β-estradiol and cholesterol. These results do not allow us to identify similarities between the behavior of S. cerevisiae ALPs and that of any of the mammalian ALPs, but they allow us to raise the hypothesis of differential regulation of S. cerevisiae ALPs by L-homoarginine, β-estradiol and cholesterol, and of using these compounds to discriminate between S. cerevisiae lPiALP and hPiALP.
Abstract:
Single-photon emission computed tomography (SPECT) is a non-invasive imaging technique that provides information on the functional state of tissues. SPECT imaging has been used as a diagnostic tool in several human disorders and can be used in animal models of disease for physiopathological, genomic and drug discovery studies. However, most of the experimental models used in research involve rodents, which are at least one order of magnitude smaller in linear dimensions than man. Consequently, images of targets obtained with conventional gamma cameras and collimators have poor spatial resolution and statistical quality. We review the methodological approaches developed in recent years to obtain images of small targets with good spatial resolution and sensitivity. Multipinhole, coded-mask and slit-based collimators are presented as alternative approaches to improving image quality. In combination with appropriate decoding algorithms, these collimators permit a significant reduction of the time needed to register the projections used to make 3-D representations of the volumetric distribution of the target's radiotracers. Simultaneously, they can be used to minimize the artifacts and blurring that arise when single-pinhole collimators are used. Representative images are presented to illustrate the use of these collimators. We also comment on the use of coded masks to attain tomographic resolution with a single projection, as discussed by some investigators since their introduction to obtain near-field images. We conclude this review by showing that appropriate hardware and software tools adapted to conventional gamma cameras can be of great help in obtaining relevant functional information in experiments using small animals.
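To make the decoding step concrete, the following is a minimal sketch of balanced cross-correlation decoding for a cyclic coded mask, a generic illustration under stated assumptions rather than the algorithm of any specific system reviewed; all names are hypothetical.

```python
import numpy as np

def decode_coded_aperture(detector_image, mask):
    """Balanced cross-correlation decoding of a coded-aperture projection.

    detector_image: 2-D array recorded behind the coded mask.
    mask: 2-D binary array (1 = open, 0 = opaque) of the same shape,
          assumed cyclic (as for MURA-type patterns).
    Returns an estimate of the source distribution.
    """
    # Balanced decoding pattern: open elements weighted +1, closed
    # elements weighted so the pattern sums to ~0, which suppresses
    # the flat (DC) background in the reconstruction.
    open_frac = mask.mean()
    decoding = np.where(mask == 1, 1.0, -open_frac / (1.0 - open_frac))
    # Periodic cross-correlation computed via FFT.
    f_img = np.fft.fft2(detector_image)
    f_dec = np.fft.fft2(decoding)
    return np.real(np.fft.ifft2(f_img * np.conj(f_dec)))
```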
Abstract:
Recent storms in the Nordic countries caused long power outages over large territories. After these disasters, distribution network operators faced the problem of how to provide adequate quality of supply in such situations. The decision was made to use cable lines rather than overhead lines, which brings new features to distribution networks. The main idea of this work is a comprehensive analysis of medium-voltage distribution networks with long cable lines. The high specific capacitance of cables and the length of the lines give rise to problems such as high earth-fault currents, excessive reactive power flow from the distribution to the transmission network, and the possibility of a high voltage level at the receiving end of cable feeders. However, the core tasks were to estimate the functional ability of the earth-fault protection and the possibility of using simplified formulas for calculating the operating settings in this network. To provide justified solutions to, or evaluations of, the problems mentioned above, the corresponding calculations were made, and a PSCAD model of the examined network was created to analyze the behavior of relay protection principles. Evaluation of the voltage rise at the end of a cable line revealed no dangerous increase in voltage level, while an excessive value of reactive power can lead to financial penalties under Finnish regulations. It was calculated and proved that compensation of earth-fault currents should be implemented in such networks. PSCAD models of the electrical grid with an isolated neutral, central compensation, and hybrid compensation were created. For the network with hybrid compensation, a methodology is offered that allows the number and rated power of distributed arc suppression coils to be selected. Based on the results obtained from the experiments, it was determined that hybrid compensation with a connected high-ohmic resistor should be utilized to guarantee selective and reliable operation of the relay protection. Directional and admittance-based relay protection were tested under these conditions, and the advantages of the novel protection were revealed. For electrical grids with extensive cabling, the necessity of a comprehensive approach to relay protection was explained and illustrated. Thus, in order to organize reliable earth-fault protection, it is recommended to utilize both intermittent and conventional relay protection, with operating settings calculated using the simplified formulas.
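To indicate the kind of simplified formula referred to above, here is a minimal sketch using the standard approximation for the capacitive earth-fault current of an isolated-neutral network; the parameter values are illustrative, not the thesis' network data.

```python
import math

def earth_fault_current(u_ll_kv, c0_uf_per_km, length_km, freq_hz=50.0):
    """Capacitive earth-fault current of an isolated-neutral cable network.

    Standard approximation: Ief = 3 * w * C0 * Uph, with Uph = Ull / sqrt(3).
    u_ll_kv       : line-to-line voltage [kV]
    c0_uf_per_km  : zero-sequence capacitance per phase [uF/km]
    length_km     : total galvanically connected cable length [km]
    """
    w = 2.0 * math.pi * freq_hz
    u_ph = u_ll_kv * 1e3 / math.sqrt(3.0)    # phase voltage [V]
    c0 = c0_uf_per_km * 1e-6 * length_km     # total per-phase capacitance [F]
    return 3.0 * w * c0 * u_ph               # fault current [A]

# Illustrative 20 kV network with 100 km of cable at C0 = 0.3 uF/km:
print(f"{earth_fault_current(20.0, 0.3, 100.0):.1f} A")  # ~326 A
```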
Abstract:
In Finnish university society, the commercialization of research projects has not been a focus of interest until now. The reason for the growing interest in commercialization research projects is their potential to develop the economy while providing new technologies and products. This study focuses on examining what kind of high-technology-oriented research can be commercialized and how. The aim is to generate understanding of how commercialization research projects should proceed and to find concrete ways of improving them. As its research method, the study analyzes four different university high-technology research projects which have been commercially oriented and have to some degree been able to commercialize the product or technology developed during the research phase. The data has been gathered mainly through semi-structured interviews of people involved in these particular projects or cases. The findings from the interviews have been checked against the final reports of the projects, provided by TEKES, and the data from the cases has then been compared. A literature review on commercializing university research has also been produced, with the purpose of presenting the known theories and frameworks connected with the subject. The study reveals five main factors related to commercializing high-tech research: the Team; Market potential and competitiveness; Product and technology; Funding; and the Steering Group. The uncertainties related to these factors have also been addressed. As a conclusion, the study presents the main aspects that should be considered when starting a commercialization research project, together with a combining hierarchical framework related to the five factors presented. In Chapter 5, the study addresses the main tasks or steps to be taken in order to obtain public funding for a commercially oriented research project and, later on, the actual steps to be executed in order to successfully commercialize these high-tech research projects.
Abstract:
Identification of the functional properties of wheat flour by specific tests allows genotypes with appropriate characteristics to be selected for specific industrial uses. The objective of wheat breeding programs is to improve the quality of the germplasm bank in order to be able to develop wheat with gluten strength and extensibility suitable for bread making. The aim of this study was to evaluate 16 wheat genotypes by correlating the high- and low-molecular-weight glutenin subunits and the gliadin subunits with the physicochemical characteristics of the grain. Protein content, sedimentation volume, sedimentation index, and falling number were analyzed after the grains were milled. Hectoliter weight and the mass of 1000 seeds were also determined. The glutenin and gliadin subunits were separated using polyacrylamide gel electrophoresis in the presence of sodium dodecyl sulfate. The data were evaluated using analysis of variance, Pearson's correlation, principal component analysis, and cluster analysis. The IPR 85, IPR Catuara TM, T 091015, and T 091069 genotypes stood out from the others, which indicates possibly superior grain quality, with higher sedimentation volume, higher sedimentation index, and higher mass of 1000 seeds; these genotypes possessed the subunits 1 (Glu-A1), 5 + 10 (Glu-D1), c (Glu-A3), and b (Glu-B3), with the exception of the T 091069 genotype, which possessed the g allele instead of b at Glu-B3.
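As a minimal sketch of the multivariate pipeline named above (Pearson correlation, PCA, hierarchical clustering), assuming a hypothetical trait table; this is not the authors' actual analysis script, and the file and column names are invented.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical table: one row per genotype, physicochemical traits as columns.
df = pd.read_csv("wheat_traits.csv", index_col="genotype")
traits = ["protein", "sed_volume", "sed_index", "falling_number",
          "hectoliter_weight", "mass_1000_seeds"]

X = StandardScaler().fit_transform(df[traits])

# Pairwise Pearson correlations between traits.
corr = df[traits].corr(method="pearson")

# Principal component analysis on the standardized traits.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance:", pca.explained_variance_ratio_)

# Agglomerative (Ward) clustering of genotypes into four groups.
clusters = fcluster(linkage(X, method="ward"), t=4, criterion="maxclust")
print(dict(zip(df.index, clusters)))
```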
Abstract:
Photoacoustic imaging (PAI) is a branch of clinical and preclinical imaging that refers to techniques mapping the acoustic signals generated by the absorption of short laser pulses. This conversion of the electromagnetic energy of light into mechanical (acoustic) energy is usually called the photoacoustic effect. By combining optical excitation with acoustic detection, PAI preserves diffraction-limited spatial resolution while extending the penetration depth beyond the optical diffusive limit. A Laser-Scanning PhotoAcoustic Microscope (LS-PAM) system has been developed that offers an axial resolution of 7.75 µm and a lateral resolution better than 10 µm. The first in vivo imaging experiments were carried out: label-free in vivo imaging of the mouse ear was performed, and the possibility, in principle, of imaging vessels located in deep layers of the mouse skin was shown. In addition, a gold-printed sample, the vasculature of the chick chorioallantoic membrane assay, and Drosophila larvae were imaged by PAI. During the experimental work, an entirely new application of PAM was found, in which the acoustic waves generated by the incident light can be used for further imaging of another sample. To enhance the performance of the presented system, two main recommendations can be offered. First, the current system should be transformed into a reflection-mode setup. Second, a more powerful light source with a sufficient repetition rate should be introduced into the system.
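For orientation, the quoted axial resolution is consistent with the standard bandwidth-limited estimate for photoacoustic microscopy; this is a textbook relation, not taken from the thesis, and the bandwidth value below is back-calculated purely for illustration.

```latex
% Bandwidth-limited axial resolution of photoacoustic microscopy
% (standard relation; the 0.88 factor assumes a Gaussian band shape):
R_{\mathrm{axial}} \approx 0.88\,\frac{c}{\Delta f}
% With c ~ 1540 m/s in soft tissue, a detection bandwidth of
% \Delta f ~ 175 MHz gives R_axial ~ 7.7 um, close to the 7.75 um
% figure quoted above.
```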
Abstract:
Strenx® 960 MC is a direct-quenched type of Ultra High Strength Steel (UHSS) with low carbon content. Although this material combines high strength and good ductility, it is highly sensitive to fabrication processes. The presence of a stress concentration due to a structural discontinuity or notch highlights the role of these fabrication effects on the deformation capacity of the material. For this reason, a series of tensile tests was done both on the pure base material (BM) and on material subjected to heat input (HI) and cold forming (CF). The surface of the material was dressed by a laser beam at a certain speed to study the effect of HI, while CF was applied by bending the specimen to a certain angle prior to the tensile test. The results illustrate the impact of these processes on the deformation capacity of the material, especially when the material has experienced HI due to welding or similar processes. To compare the results with those of numerical simulation, the LS-DYNA explicit commercial package was utilized. The results show acceptable agreement between the experimental and numerical simulation outcomes.
Abstract:
The recent rapid development of biotechnological approaches has enabled the production of large whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics and engineering. The need is also increasing for tools that can be used by biological researchers themselves, who may not have a strong statistical or computational background, which requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various data analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second one examines cell lineage specification in mouse embryonic stem cells.
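As a toy example of one analysis step that such gene expression workflows typically automate (the data layout and function are hypothetical; this is not one of the tools developed in the thesis):

```python
import numpy as np
from scipy import stats

def differential_expression(expr, group_a, group_b, alpha=0.05):
    """Welch t-test per gene with Benjamini-Hochberg FDR control.

    expr: (genes x samples) array of log-expression values.
    group_a, group_b: lists of sample column indices for the two conditions.
    Returns indices of genes significant at the given FDR level.
    """
    _, p = stats.ttest_ind(expr[:, group_a], expr[:, group_b],
                           axis=1, equal_var=False)
    # Benjamini-Hochberg step-up: reject all hypotheses up to the
    # largest i with p_(i) <= alpha * i / m.
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    return order[:k]
```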
Abstract:
Global energy consumption has been increasing yearly, and a large portion of it is used in rotating electrical machinery. It is clear that in these machines energy should be used efficiently. The aim of this dissertation is to improve the design process of high-speed electrical machines, especially from the mechanical engineering perspective, in order to achieve more reliable and efficient machines. The design process of high-speed machines is challenging due to high demands and the several interactions between different engineering disciplines such as mechanical, electrical and energy engineering. A multidisciplinary design flow chart, in which computer simulation is utilized, is proposed for a specific type of high-speed machine. In addition to utilizing simulation in parallel with the design process, two simulation studies are presented. The first is used to find the limits of two ball bearing models. The second is used to study the improvement of machine load capacity in a compressor application to exceed the limits of current machinery. The proposed flow chart and simulation studies show clearly that improvements in the high-speed machinery design process can be achieved. Engineers designing high-speed machines can utilize the flow chart and simulation results as a guideline during the design phase to achieve more reliable and efficient machines that use energy efficiently in the required operating conditions.
Abstract:
In this study, finite element analyses and experimental tests are carried out in order to investigate the effect of loading type and symmetry on the fatigue strength of three different non-load-carrying welded joints. The current codes and recommendations do not give explicit instructions on how to consider the degree of bending in the loading and the effect of symmetry in the fatigue assessment of welded joints. The fatigue assessment is done using the effective notch stress method and linear elastic fracture mechanics. Transverse attachment and cover plate joints are analyzed using 2D plane strain element models in FEMAP/NxNastran and Franc2D software, and the longitudinal gusset case is analyzed using solid element models in Abaqus and Abaqus/XFEM software. By means of the evaluated effective notch stress range and stress intensity factor range, the nominal fatigue strength is assessed. The experimental tests consist of fatigue tests of transverse attachment joints with a total of 12 specimens. In the tests, the effect of both loading type and symmetry on the fatigue strength is studied. The finite element analyses showed that, in terms of the nominal and hot spot stress methods, the fatigue strength of the asymmetric joint is higher in tensile loading and the fatigue strength of the symmetric joint is higher in bending loading. Linear elastic fracture mechanics indicated that bending reduces stress intensity factors when the crack size is relatively large, since the normal stress decreases at the crack tip due to the stress gradient. Under tensile loading, the experimental tests corresponded with the finite element analyses. Still, the fatigue-tested joints subjected to bending showed that bending increased the fatigue strength of non-load-carrying welded joints, and the fatigue test results did not fully agree with the fatigue assessment. According to the results, it can be concluded that in tensile loading the symmetry of the joint distinctly affects the fatigue strength. The fatigue life assessment of bending-loaded joints is challenging, since it depends on whether crack initiation or propagation is predominant.
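To make the fracture mechanics side concrete, here is a generic Paris-law life integration of the kind used in such assessments; the constants and the constant geometry factor are illustrative assumptions, not the thesis' parameters.

```python
import math

def fatigue_life(delta_sigma, a0, af, C=3e-13, m=3.0, Y=1.12):
    """Cycles to grow a crack from depth a0 to af under Paris' law.

    da/dN = C * (dK)^m, with dK = Y * delta_sigma * sqrt(pi * a).
    delta_sigma : nominal stress range [MPa]
    a0, af      : initial / final crack depth [mm]
    C, m        : Paris constants (illustrative, units mm and MPa*sqrt(mm))
    Y           : geometry factor, taken constant here for simplicity
                  (in practice it varies with crack depth and joint geometry)
    """
    n_steps = 10000
    da = (af - a0) / n_steps
    cycles, a = 0.0, a0
    for _ in range(n_steps):
        dK = Y * delta_sigma * math.sqrt(math.pi * a)
        cycles += da / (C * dK ** m)   # dN = da / (C * dK^m)
        a += da
    return cycles

print(f"N = {fatigue_life(100.0, 0.1, 5.0):.3e} cycles")
```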
Abstract:
The increasing emphasis on energy efficiency is starting to yield results in the reduction of greenhouse gas emissions; however, the effort is still far from sufficient. Therefore, new technical solutions that enhance the efficiency of power generation systems are required to maintain a sustainable growth rate without spoiling the environment. A reduction in greenhouse gas emissions is only possible with new low-carbon technologies that enable high efficiencies. The development of rotating electrical machines plays a significant role in the reduction of global emissions, since a high proportion of the electrical energy produced and consumed is related to electrical machines. One technical solution that enables high system efficiency on both the energy production and consumption sides is high-speed electrical machines. This type of electrical machine has a high overall system efficiency, a small footprint, and a high power density compared with conventional machines. Therefore, high-speed electrical machines are favoured by manufacturers producing, for example, microturbines, compressors, gas compression applications, and air blowers. High-speed machine technology is challenging from the design point of view, and a lot of research on solution development is in progress both in academia and industry. A solid technical basis is important in order to make an impact on the industry in view of climate change. This work describes the multidisciplinary design principles and material development in high-speed electrical machines. First, high-speed permanent magnet synchronous machines with six slots, two poles, and tooth-coil windings are discussed in this doctoral dissertation. These machines have unique features which help in solving rotordynamic problems and reducing manufacturing costs. Second, the materials for high-speed machines are discussed. Materials are among the key limiting factors in electrical machines, and to overcome this limit, an in-depth analysis of the material properties and behavior is required. Moreover, high-speed machines sometimes operate in a harsh environment, because they need to be as close as possible to the rotating tool to fully exploit their advantages. This sets extra requirements for the materials applied.
Abstract:
In this work, the magnetic field penetration depth for high-Tc cuprate superconductors is calculated using a recent Interlayer Pair Tunneling (ILPT) model proposed by Chakravarty, Sudbø, Anderson, and Strong [1] to explain high-temperature superconductivity. This model involves a "hopping" of Cooper pairs between layers of the unit cell, which acts to amplify the pairing mechanism within the planes themselves. Recent work has shown that this model can account reasonably well for the isotope effect and the dependence of Tc on nonmagnetic in-plane impurities [2], as well as the Knight shift curves [3] and the presence of a magnetic peak in the neutron scattering intensity [4]. In the latter case, Yin et al. emphasize that the pair tunneling must be the dominant pairing mechanism in the high-Tc cuprates in order to capture the features found in experiments. The goal of this work is to determine whether or not the ILPT model can account for the experimental observations of the magnetic field penetration depth in YBa2Cu3O7−δ. Calculations are performed in the weak and strong coupling limits, and the effects of both small and large strengths of interlayer pair tunneling are investigated. Furthermore, as a follow-up to the penetration depth calculations, both the neutron scattering intensity and the Knight shift are calculated within the ILPT formalism. The aim is to determine whether the ILPT model can yield results consistent with the experiments performed for these properties. The results for all three thermodynamic properties considered are not consistent with the notion that interlayer pair tunneling must be the dominant pairing mechanism in these high-Tc cuprate superconductors. Instead, it is found that reasonable agreement with experiments is obtained for small strengths of pair tunneling, and that large pair tunneling yields results which do not resemble those of the experiments.
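For reference, penetration depth calculations of this kind evaluate the superfluid density predicted by the pairing model and convert it through the standard London relation; this is a textbook formula, not one specific to the ILPT papers.

```latex
% London relation between penetration depth and superfluid density:
\frac{1}{\lambda^{2}(T)} = \frac{\mu_{0}\, n_{s}(T)\, e^{2}}{m^{*}}
% so the measured temperature dependence of \lambda(T) directly probes
% how the pairing mechanism depletes the superfluid density n_s.
```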
Abstract:
A general derivation of the anharmonic coefficients for a periodic lattice invoking the special case of the central force interaction is presented. All of the contributions to the mean square displacement (MSD) to order λ⁴ in perturbation theory are enumerated. A direct correspondence is found between the high-temperature-limit MSD and the high-temperature-limit free energy contributions up to and including O(λ⁴). This correspondence follows from the detailed derivation of some of the contributions to the MSD. Numerical results are obtained for all the MSD contributions to O(λ⁴) using the Lennard-Jones potential, for the lattice constants and temperatures for which the Monte Carlo results were calculated by Heiser, Shukla and Cowley. The Peierls approximation is also employed in order to simplify the numerical evaluation of the MSD contributions. The numerical results indicate convergence of the perturbation expansion up to 75% of the melting temperature of the solid (TM) for the exact calculation; however, better agreement with the Monte Carlo results is not obtained when the total of all λ⁴ contributions is added to the λ² perturbation theory results. Using the Peierls approximation, the expansion converges up to 45% of TM. The MSD contributions arising in the Green's function method of Shukla and Hubschle are derived and enumerated up to and including O(λ⁸). The total MSD from these selected contributions is in excellent agreement with their results at all temperatures. Theoretical values of the recoilless fraction for krypton are calculated from the MSD contributions for both the Lennard-Jones and Aziz potentials. The agreement with experimental values is quite good.
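For context, the recoilless fraction mentioned at the end follows from the MSD through the standard Lamb-Mössbauer relation; the textbook form for a cubic crystal is given below, and it is not a result derived in the thesis itself.

```latex
% Lamb-Mossbauer (recoilless) fraction for a cubic crystal,
% with k the wave vector of the emitted gamma ray and <u^2> the MSD:
f = \exp\!\left(-k^{2}\,\langle u^{2}\rangle\right)
```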
Abstract:
Exchange reactions between molecular complexes and excess acid or base are well known and have been extensively surveyed in the literature (1). Since the exchange mechanism will, in some way, involve the breaking of the labile donor-acceptor bond, it follows that a discussion of the factors relating to bonding in molecular complexes will be relevant.
In general, a strong Lewis base and a strong Lewis acid form a stable adduct provided that certain stereochemical requirements are met.
A strong Lewis base has the following characteristics (1),(2):
(i) high electron density at the donor site;
(ii) a non-bonded electron pair which has a low ionization potential;
(iii) electron-donating substituents at the donor atom site;
(iv) facile approach of the donor site of the Lewis base to the acceptor site, as dictated by the steric hindrance of the substituents.
Examples of typical Lewis bases are ethers, nitriles, ketones, alcohols, amines and phosphines.
For a strong Lewis acid, the following properties are important:
(i) low electron density at the acceptor site;
(ii) electron-withdrawing substituents;
(iii) substituents which do not interfere with the close approach of the Lewis base;
(iv) availability of a vacant orbital capable of accepting the lone electron pair of the donor atom.
Examples of Lewis acids are the group III and IV halides such as MX3 (M = B, Al, Ga, In) and MX4 (M = Si, Ge, Sn, Pb).
The relative bond strengths of molecular complexes have been investigated by:
(i) dipole moment measurements (3);
(ii) shifts of the carbonyl peaks in the IR (4),(5),(6);
(iii) NMR chemical shift data (4),(7),(8),(9);
(iv) U.V. and visible spectrophotometric shifts (10),(11);
(v) equilibrium constant data (12),(13);
(vi) heats of dissociation and heats of reaction (15),(16),(17),(18),(19).
Many experiments have been carried out on boron trihalides in order to determine their relative acid strengths. Using pyridine, nitrobenzene, acetonitrile and trimethylamine as reference Lewis bases, it was found that the acid strength varied in the order BBr3 > BCl3 > BF3. For the acetonitrile-boron trihalide and trimethylamine-boron trihalide complexes in nitrobenzene, an NMR study (7) showed that the shift to lower field was greatest for the BBr3 adduct and smallest for the BF3 adduct, which is in agreement with the acid strengths. If the electronegativities of the substituents were the only important effect, then, since the electronegativities decrease in the order F > Cl > Br, one would expect the electron density at the boron nucleus to vary as BF3 < BCl3 < BBr3.
Abstract:
Although it is widely assumed that temperature affects pollutant toxicity, few studies have actually investigated this relationship. Moreover, such research as has been done has involved constant temperatures; circumstances which are rarely, if ever, actually experienced by north temperate, littoral zone cyprinid species. To investigate the effects of temperature regime on nickel toxicity in goldfish (Carassius auratus L.), 96- and 240-h LC50 values for the heavy metal pollutant nickel (NiCl2·6H2O) were initially determined at 20°C (22.8 mg/L and 14.7 mg/L, respectively, in artificially softened water). Constant-temperature bioassays at 10°C, 20°C and 30°C were conducted at each of 0, the 240-h and the 96-h LC50 nickel concentrations for 240 hours. In order to determine the effects of temperature variation during nickel exposure, it was imperative that the effects of a single temperature change be investigated before addressing more complex regimes. Single temperature changes of +10°C or -10°C were imposed at rates of 2°C/h following exposures of between 24 h and 216 h. The effects of a single temperature change on mortality, and of the duration of toxicant exposure at high and low temperatures, were evaluated. The effects of fluctuating temperatures during exposure were investigated through two regimes. The first set of bioassays imposed a sine-wave diurnal temperature cycle (20 ± 10°C) throughout the 10-day exposure to 240-h LC50 Ni. The second set of investigations approximated cyprinid movement through the littoral zone by imposing directionally random temperature changes (±2°C at 2-h intervals), between extremes of 10°C and 30°C, at 240-h LC50 Ni. Body size (i.e., total length, fork length, and weight) and exposure time were recorded for all fish mortalities. Cumulative mortality curves under constant temperature regimes indicated significantly higher mortality as temperature and nickel concentration were increased. At 10°C no significant differences in mortality curves were evident in relation to the low and high nickel test concentrations (i.e., 16 mg/L and 20 mg/L). However, at 20°C and 30°C significantly higher mortality was experienced by animals exposed to 20 mg/L Ni. Mortality at constant 10°C was significantly lower than at 30°C with 16 mg/L, and was significantly lower than in each of the 20°C and 30°C tanks at 20 mg/L Ni exposure. A single temperature shift from 20°C to 10°C resulted in a significant decrease in mortality rate and, conversely, a single temperature shift from 20°C to 30°C resulted in a significant increase in mortality rate. Rates of mortality recorded during these single-temperature-shift assays were significantly different from mortality rates obtained under constant-temperature assay conditions. Increased Ni exposure duration at higher temperatures resulted in the highest mortality. Diurnally cycling temperature bioassays produced cumulative mortality curves approximating the constant 20°C curves, with increased mortality evident after peaks in the temperature cycle. Randomly fluctuating temperature regime mortality curves also resembled those of the constant 20°C tanks, with mortalities occurring after high temperature exposures (25-30°C). Some test animals survived in all assays with the exception of the 30°C assays, with the highest survival associated with low temperature and low Ni concentration. Post-exposure mortality occurred most frequently in individuals which had experienced high Ni concentrations and high temperatures during the assays.
Additional temperature stress imposed 2-12 weeks post-exposure resulted in a single death out of 116 individuals, suggesting that survivors are capable of withstanding subsequent temperature stresses. These investigations suggest that temperature significantly and markedly affects acute nickel toxicity under both constant and fluctuating temperature regimes, and that it plays a role in post-exposure mortality and the subsequent stress response.
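As a side note on method, LC50 values of the kind reported above are commonly estimated by fitting a log-logistic dose-response curve to mortality data; below is a minimal sketch with invented data points, not the thesis' measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def mortality(log_conc, log_lc50, slope):
    """Two-parameter log-logistic dose-response curve."""
    return 1.0 / (1.0 + np.exp(-slope * (log_conc - log_lc50)))

# Invented example data: nickel concentration [mg/L] vs. fraction dead at 96 h.
conc = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 40.0])
dead = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.95])

# Fit on a log-concentration axis; the LC50 is where mortality = 0.5.
popt, _ = curve_fit(mortality, np.log(conc), dead, p0=[np.log(20.0), 2.0])
print(f"estimated LC50 = {np.exp(popt[0]):.1f} mg/L")
```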