974 results for High-Order Accurate Scheme
Abstract:
In Finnish university circles, the commercialization of research projects has not been a focus of interest until now. The reason for the growing interest in commercialization research projects is their potential to develop the economy while simultaneously providing new technologies and products. This study examines what kind of high-technology-oriented research can be commercialized and how. The aim is to generate understanding of how commercialization research projects should proceed and to find concrete ways of improving such projects. As its research method, the study analyzes four university high-technology research projects that have been commercially oriented and have, to some degree, been able to commercialize the product or technology developed during the research phase. The data were gathered mainly through semi-structured interviews with people involved in these particular projects or cases. The findings from the interviews were compared against the final reports of the projects, provided by TEKES, and the data from the different cases were then compared with each other. A literature review on commercializing university research was also produced to present the known theories and frameworks connected with the subject. The study reveals five main factors related to commercializing high-tech research: the team, market potential and competitiveness, product and technology, funding, and the steering group. The uncertainties related to these factors are also addressed. In conclusion, the study presents the main aspects that should be considered when starting a commercialization research project, together with a hierarchical framework combining the five factors. In Chapter 5, the study addresses the main tasks or steps to be taken in order to obtain public funding for a commercially oriented research project and, subsequently, the steps to be executed in order to successfully commercialize these high-tech research projects.
Abstract:
Identification of the functional properties of wheat flour by specific tests allows genotypes with appropriate characteristics to be selected for specific industrial uses. The objective of wheat breeding programs is to improve the quality of the germplasm bank in order to develop wheat with gluten strength and extensibility suitable for bread making. The aim of this study was to evaluate 16 wheat genotypes by correlating both high- and low-molecular-weight glutenin subunits and gliadin subunits with the physicochemical characteristics of the grain. Protein content, sedimentation volume, sedimentation index, and falling number values were analyzed after the grains were milled. Hectoliter weight and the mass of 1000 seeds were also determined. The glutenin and gliadin subunits were separated using polyacrylamide gel in the presence of sodium dodecyl sulfate. The data were evaluated using analysis of variance, Pearson's correlation, principal component analysis, and cluster analysis. The IPR 85, IPR Catuara TM, T 091015, and T 091069 genotypes stood out from the others, which indicates their possibly superior grain quality, with higher sedimentation volume, higher sedimentation index, and higher mass of 1000 seeds; these genotypes possessed the subunits 1 (Glu-A1), 5 + 10 (Glu-D1), c (Glu-A3), and b (Glu-B3), with the exception of the T 091069 genotype, which possessed the g allele instead of b at Glu-B3.
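The statistical workflow named above (Pearson correlation, PCA, and cluster analysis of grain traits) can be illustrated with a minimal Python sketch; the trait table, column names, and cluster count below are hypothetical placeholders, not data from the study.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical trait table: one row per genotype, columns are measured
# physicochemical characteristics (names are illustrative only).
traits = pd.DataFrame(
    np.random.default_rng(0).normal(size=(16, 5)),
    columns=["protein", "sed_volume", "sed_index", "falling_number", "mass_1000"],
)

corr = traits.corr(method="pearson")              # Pearson correlation matrix

scaled = StandardScaler().fit_transform(traits)   # standardize before PCA
pca = PCA(n_components=2).fit(scaled)
scores = pca.transform(scaled)                    # genotype scores on PC1/PC2

# Hierarchical (Ward) clustering of the genotypes on the standardized traits.
clusters = fcluster(linkage(scaled, method="ward"), t=4, criterion="maxclust")

print(corr.round(2))
print(pca.explained_variance_ratio_.round(3), scores[:4].round(2), clusters)
```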
Abstract:
Photoacoustic imaging (PAI) is a branch of clinical and pre-clinical imaging that refers to techniques mapping the acoustic signals caused by the absorption of short laser pulses. This conversion of the electromagnetic energy of light into mechanical (acoustic) energy is usually called the photoacoustic effect. By combining optical excitation with acoustic detection, PAI is able to preserve diffraction-limited spatial resolution while extending the penetration depth beyond the diffusive limit. A Laser-Scanning PhotoAcoustic Microscope (LS-PAM) system has been developed that offers an axial resolution of 7.75 µm and a lateral resolution better than 10 µm. The first in vivo imaging experiments were carried out: in vivo label-free imaging of the mouse ear was performed, and the possibility in principle of imaging vessels located in deep layers of the mouse skin was demonstrated. In addition, a gold printing sample, the vasculature of the Chick Chorioallantoic Membrane Assay, and Drosophila larvae were imaged by PAI. During the experimental work, an entirely new application of PAM was found, in which the acoustic waves generated by the incident light can be used for further imaging of another sample. In order to enhance the performance of the presented system, two main recommendations can be offered. First, the current system should be transformed into a reflection-mode setup. Second, a more powerful light source with a sufficient repetition rate should be introduced into the system.
Abstract:
Strenx® 960 MC is a direct-quenched type of Ultra High Strength Steel (UHSS) with low carbon content. Although this material combines high strength and good ductility, it is highly sensitive to fabrication processes. The presence of a stress concentration due to a structural discontinuity or a notch highlights the role of these fabrication effects on the deformation capacity of the material. For this reason, a series of tensile tests was carried out both on the pure base material (BM) and on material subjected to Heat Input (HI) and Cold Forming (CF). The surface of the material was dressed by a laser beam at a certain speed to study the effect of HI, while CF was applied by bending the specimen to a certain angle prior to the tensile test. The results illustrate the impact of these processes on the deformation capacity of the material, especially when the material has experienced HI from welding or similar processes. In order to compare the results with those of numerical simulation, the LS-DYNA explicit commercial package was utilized. The results show acceptable agreement between the experimental and numerical outcomes.
Abstract:
Thermal cutting methods are commonly used in the manufacture of metal parts. Thermal cutting processes separate materials by using heat, with or without a stream of cutting oxygen. Common processes are oxygen, plasma, and laser cutting; which method is used depends on the application and the material. Numerically controlled thermal cutting is a cost-effective way of prefabricating components. One design aim is to minimize the number of work steps in order to increase competitiveness. As a result, the holes and openings in plate parts manufactured today are made using thermal cutting methods. This is a problem from the fatigue life perspective, because the resulting local detail in the as-welded state causes a rise in stress in a local area of the plate. In cases where the static capacity of the net section is fully utilized, the calculated linear local stresses and stress ranges are often more than twice the material yield strength, and the shakedown criteria are exceeded. Fatigue life assessment of flame-cut details is commonly based on the nominal stress method. For welded details, design standards and instructions provide more accurate and flexible methods, e.g. the hot-spot method, but these methods are not universally applied to flame-cut edges. Some laboratory fatigue tests of flame-cut edges indicated that fatigue life estimations based on the standard nominal stress method can be quite conservative in cases where a high notch factor is present. This is an undesirable phenomenon, and it limits the potential for minimizing structure size and total costs. A new calculation method is introduced to improve the accuracy of theoretical fatigue life prediction for a flame-cut edge with a high stress concentration factor. Simple equations were derived using laboratory fatigue test results, which are published in this work. The proposed method is called the modified FAT method (FATmod). The method takes into account the residual stress state, surface quality, material strength class, and true stress ratio at the critical location.
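For context, the nominal stress method mentioned above is normally expressed through a characteristic S-N line of the general form used in IIW-type recommendations; this is generic background, not the FATmod correction proposed in the thesis.

```latex
N = 2\times 10^{6}\left(\frac{\Delta\sigma_{C}}{\Delta\sigma}\right)^{m}
```

Here \(\Delta\sigma\) is the applied nominal stress range, \(\Delta\sigma_{C}\) is the FAT class (characteristic fatigue strength at \(2\times 10^{6}\) cycles), and the slope is typically \(m = 3\) for as-welded and thermally cut steel details.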
Abstract:
The recent rapid development of biotechnological approaches has enabled the production of large whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics and engineering to study and process biological data. The need is also increasing for tools that can be used by the biological researchers themselves, who may not have a strong statistical or computational background, which requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various data analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second one examines cell lineage specification in mouse embryonic stem cells.
Abstract:
Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represents height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), leading to a situation in which only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data for forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether or not the local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by, for example, variations in tree species and the season of data acquisition. These algorithms are adaptive (with respect to point cloud characteristics) and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparison with existing DTM extraction algorithms showed that the algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small details in the terrain (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis, in turn, is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods, which are based on height percentiles in the airborne laser scanner data. However, being based on the idea of a moving voxel, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity.
This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
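The normalization step described at the start of this abstract (converting point elevations to heights above ground using a DTM) can be sketched in a few lines of Python; this is a generic illustration using simple interpolation of classified ground returns, not the adaptive DTM extraction algorithms developed in the thesis.

```python
import numpy as np
from scipy.interpolate import griddata

def normalize_point_cloud(points, ground_points):
    """Convert point elevations to heights above ground.

    points        : (N, 3) array of x, y, elevation for all returns
    ground_points : (M, 3) array of x, y, elevation for classified ground returns
    """
    # Interpolate the ground surface (a simple DTM) at every point location.
    ground_elev = griddata(
        ground_points[:, :2], ground_points[:, 2],
        points[:, :2], method="linear",
    )
    # Fall back to nearest-neighbour interpolation outside the convex hull.
    missing = np.isnan(ground_elev)
    if missing.any():
        ground_elev[missing] = griddata(
            ground_points[:, :2], ground_points[:, 2],
            points[missing, :2], method="nearest",
        )
    normalized = points.copy()
    normalized[:, 2] = points[:, 2] - ground_elev  # height above ground
    return normalized
```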
Abstract:
Global energy consumption has been increasing yearly, and a large portion of it is used in rotating electrical machinery, so it is clear that in these machines energy should be used efficiently. In this dissertation, the aim is to improve the design process of high-speed electrical machines, especially from the mechanical engineering perspective, in order to achieve more reliable and efficient machines. The design process of high-speed machines is challenging due to the high demands and the several interactions between different engineering disciplines such as mechanical, electrical and energy engineering. A multidisciplinary design flow chart, in which computer simulation is utilized, is proposed for a specific type of high-speed machine. In addition to utilizing simulation in parallel with the design process, two simulation studies are presented. The first is used to find the limits of two ball bearing models. The second is used to study the improvement of machine load capacity in a compressor application beyond the limits of current machinery. The proposed flow chart and the simulation studies show clearly that improvements in the high-speed machinery design process can be achieved. Engineers designing high-speed machines can utilize the flow chart and simulation results as a guideline during the design phase to achieve more reliable and efficient machines that use energy efficiently in the required operating conditions.
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of the gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, an almighty lie detection method exists that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: What is the applicability of veracity assessment methods, which are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. the probability of detecting deception successfully
o Ease of use, i.e. how easy it is to apply the method correctly
o Time required to apply the method reliably
o No need for special equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: What kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context? What kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof, and what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A multi-criteria analysis utilizing the Analytic Hierarchy Process (AHP) was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to gain first-hand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that the most applicable methods are not entirely trouble-free either. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach; there are no quick gains if high accuracy and reliability are desired.
Since most current lie detection studies concentrate on a scenario where roughly half of the assessed people are totally truthful and the other half are liars who present a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones that are still under development.
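The AHP comparison mentioned above can be illustrated with a short Python sketch that derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the example matrix and its values are hypothetical, not the weights used in the study.

```python
import numpy as np

# Saaty's Random Index values for n = 1..10, used in the consistency ratio.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(pairwise):
    """Priority weights and consistency ratio from a pairwise comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)               # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # normalized priority weights
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0     # consistency ratio (< 0.1 is acceptable)
    return w, cr

# Hypothetical pairwise comparisons of the five criteria (accuracy, ease of use,
# time required, no special equipment, unobtrusiveness); values are illustrative.
A = np.array([[1,   3,   5,   7,   5],
              [1/3, 1,   3,   5,   3],
              [1/5, 1/3, 1,   3,   1],
              [1/7, 1/5, 1/3, 1,   1/3],
              [1/5, 1/3, 1,   3,   1]])
weights, cr = ahp_weights(A)
print(weights.round(3), round(cr, 3))
```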
Abstract:
In this study, finite element analyses and experimental tests are carried out in order to investigate the effect of loading type and symmetry on the fatigue strength of three different non-load-carrying welded joints. The current codes and recommendations do not give explicit instructions on how to consider the degree of bending in the loading and the effect of symmetry in the fatigue assessment of welded joints. The fatigue assessment is done using the effective notch stress method and linear elastic fracture mechanics. Transverse attachment and cover plate joints are analyzed using 2D plane strain element models in the FEMAP/NxNastran and Franc2D software, and the longitudinal gusset case is analyzed using solid element models in the Abaqus and Abaqus/XFEM software. By means of the evaluated effective notch stress range and stress intensity factor range, the nominal fatigue strength is assessed. The experimental tests consist of fatigue tests of transverse attachment joints with a total of 12 specimens. In the tests, the effect of both loading type and symmetry on the fatigue strength is studied. The finite element analyses showed that, in terms of the nominal and hot-spot stress methods, the fatigue strength of the asymmetric joint is higher under tensile loading and the fatigue strength of the symmetric joint is higher under bending loading. Linear elastic fracture mechanics indicated that bending reduces the stress intensity factors when the crack size is relatively large, since the normal stress decreases at the crack tip due to the stress gradient. Under tensile loading, the experimental tests corresponded with the finite element analyses. However, the fatigue-tested joints subjected to bending showed that bending increased the fatigue strength of non-load-carrying welded joints, and the fatigue test results did not fully agree with the fatigue assessment. According to the results, it can be concluded that under tensile loading the symmetry of the joint distinctly affects the fatigue strength. The fatigue life assessment of bending-loaded joints is challenging, since it depends on whether crack initiation or propagation is predominant.
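As background for the fracture mechanics part of the assessment, the crack propagation life under a constant stress range is usually obtained by integrating the Paris law over the crack size; the standard relations are given below as generic LEFM background, not the specific geometry functions evaluated in the study.

```latex
\Delta K = Y(a)\,\Delta\sigma\,\sqrt{\pi a},
\qquad
\frac{da}{dN} = C\,(\Delta K)^{m},
\qquad
N_{p} = \int_{a_{i}}^{a_{f}}
\frac{da}{C\,\bigl[\,Y(a)\,\Delta\sigma\,\sqrt{\pi a}\,\bigr]^{m}}
```

Here \(Y(a)\) is the geometry correction factor, \(\Delta\sigma\) the applied stress range, \(a_i\) and \(a_f\) the initial and final crack sizes, and \(C\), \(m\) material constants.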
Abstract:
The increasing emphasis on energy efficiency is starting to yield results in the reduction of greenhouse gas emissions; however, the effort is still far from sufficient. Therefore, new technical solutions that enhance the efficiency of power generation systems are required to maintain a sustainable growth rate without spoiling the environment. A reduction in greenhouse gas emissions is only possible with new low-carbon technologies, which enable high efficiencies. The role of rotating electrical machine development is significant in the reduction of global emissions, as a high proportion of the produced and consumed electrical energy is related to electrical machines. One of the technical solutions that enables high system efficiency on both the energy production and consumption sides is the high-speed electrical machine. This type of electrical machine has a high overall system efficiency, a small footprint, and a high power density compared with conventional machines. Therefore, high-speed electrical machines are favoured by manufacturers producing, for example, microturbines, compressors, gas compression applications, and air blowers. High-speed machine technology is challenging from the design point of view, and a lot of research on solution development is in progress both in academia and in industry. A solid technical basis is important in order to make an impact in the industry in view of climate change. This work describes the multidisciplinary design principles and material development in high-speed electrical machines. First, high-speed permanent magnet synchronous machines with six slots, two poles, and tooth-coil windings are discussed in this doctoral dissertation. These machines have unique features, which help in solving rotordynamic problems and reducing the manufacturing costs. Second, the materials for the high-speed machines are discussed in this work. The materials are among the key limiting factors in electrical machines, and to overcome this limit, an in-depth analysis of the material properties and behavior is required. Moreover, high-speed machines sometimes operate in a harsh environment, because they need to be as close as possible to the rotating tool in order to fully exploit their advantages. This sets extra requirements for the materials applied.
Abstract:
In this work, the magnetic field penetration depth for high-Tc cuprate superconductors is calculated using a recent Interlayer Pair Tunneling (ILPT) model proposed by Chakravarty, Sudbø, Anderson, and Strong [1] to explain high-temperature superconductivity. This model involves a "hopping" of Cooper pairs between layers of the unit cell, which acts to amplify the pairing mechanism within the planes themselves. Recent work has shown that this model can account reasonably well for the isotope effect and the dependence of Tc on nonmagnetic in-plane impurities [2], as well as the Knight shift curves [3] and the presence of a magnetic peak in the neutron scattering intensity [4]. In the latter case, Yin et al. emphasize that the pair tunneling must be the dominant pairing mechanism in the high-Tc cuprates in order to capture the features found in experiments. The goal of this work is to determine whether or not the ILPT model can account for the experimental observations of the magnetic field penetration depth in YBa2Cu3O7−δ. Calculations are performed in the weak and strong coupling limits, and the effects of both small and large strengths of interlayer pair tunneling are investigated. Furthermore, as a follow-up to the penetration depth calculations, both the neutron scattering intensity and the Knight shift are calculated within the ILPT formalism. The aim is to determine whether the ILPT model can yield results consistent with experiments performed for these properties. The results for all three thermodynamic properties considered are not consistent with the notion that interlayer pair tunneling must be the dominant pairing mechanism in these high-Tc cuprate superconductors. Instead, it is found that reasonable agreement with experiments is obtained for small strengths of pair tunneling, and that large pair tunneling yields results which do not resemble those of the experiments.
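For orientation, the penetration depth is tied to the superfluid density, and its temperature dependence in a simple weak-coupling s-wave picture takes the standard form below; this is textbook background for comparison, not the layered ILPT calculation performed in the thesis.

```latex
\lambda^{-2}(T) = \frac{\mu_{0}\,n_{s}(T)\,e^{2}}{m^{*}},
\qquad
\frac{\lambda^{-2}(T)}{\lambda^{-2}(0)}
= 1 - 2\int_{\Delta(T)}^{\infty}
\left(-\frac{\partial f}{\partial E}\right)
\frac{E\,dE}{\sqrt{E^{2}-\Delta^{2}(T)}}
```

Here \(n_s\) is the superfluid density, \(m^{*}\) the effective mass, \(f\) the Fermi function, and \(\Delta(T)\) the gap.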
Abstract:
A general derivation of the anharmonic coefficients for a periodic lattice, invoking the special case of the central force interaction, is presented. All of the contributions to the mean square displacement (MSD) to order λ⁴ in perturbation theory are enumerated. A direct correspondence is found between the high-temperature-limit MSD and high-temperature-limit free energy contributions up to and including O(λ⁴). This correspondence follows from the detailed derivation of some of the contributions to the MSD. Numerical results are obtained for all the MSD contributions to O(λ⁴) using the Lennard-Jones potential for the lattice constants and temperatures for which the Monte Carlo results were calculated by Heiser, Shukla and Cowley. The Peierls approximation is also employed in order to simplify the numerical evaluation of the MSD contributions. The numerical results indicate convergence of the perturbation expansion up to 75% of the melting temperature of the solid (TM) for the exact calculation; however, better agreement with the Monte Carlo results is not obtained when the total of all λ⁴ contributions is added to the λ² perturbation theory results. Using the Peierls approximation, the expansion converges up to 45% of TM. The MSD contributions arising in the Green's function method of Shukla and Hubschle are derived and enumerated up to and including O(λ⁸). The total MSD from these selected contributions is in excellent agreement with their results at all temperatures. Theoretical values of the recoilless fraction for krypton are calculated from the MSD contributions for both the Lennard-Jones and Aziz potentials. The agreement with experimental values is quite good.
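As a reference point for the expansion discussed above, the lowest-order (quasiharmonic) MSD in the high-temperature limit and the recoilless (Lamb-Mössbauer) fraction computed from it take the standard forms below; the anharmonic λ² and λ⁴ corrections enumerated in the thesis are added on top of this leading term.

```latex
\langle u^{2}\rangle_{0}
= \frac{k_{B}T}{NM}\sum_{\mathbf{q},j}\frac{1}{\omega_{j}^{2}(\mathbf{q})},
\qquad
f = \exp\!\left(-\kappa^{2}\,\frac{\langle u^{2}\rangle}{3}\right)
```

The sum runs over the 3N phonon modes of a monatomic lattice of atoms with mass M, and κ is the wave vector of the emitted γ-ray; the second expression is the isotropic (cubic-crystal) form of the recoilless fraction.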
Abstract:
We have presented a Green's function method for the calculation of the atomic mean square displacement (MSD) for an anharmonic Hamiltonian. This method effectively sums a whole class of anharmonic contributions to the MSD in the perturbation expansion in the high-temperature limit. Using this formalism we have calculated the MSD for a nearest-neighbour fcc Lennard-Jones solid. The results show an improvement over the lowest-order perturbation theory results: the difference from Monte Carlo calculations at temperatures close to melting is reduced from 11% to 3%. We also calculated the MSD for the alkali metals Na, K and Cs, where a sixth-neighbour interaction potential derived from pseudopotential theory was employed in the calculations. The MSD by this method increases by 2.5% to 3.5% over the respective perturbation theory results. The MSD was calculated for aluminum, where different pseudopotential functions and a phenomenological Morse potential were used. The results show that the pseudopotentials provide better agreement with experimental data than the Morse potential. Excellent agreement with experiment over the whole temperature range is achieved with the Harrison modified point-ion pseudopotential with the Hubbard-Sham screening function. We have calculated the thermodynamic properties of solid Kr by minimizing the total energy, consisting of static and vibrational components, employing different schemes: the quasiharmonic theory (QH), λ² and λ⁴ perturbation theory, all terms up to O(λ⁴) of the improved self-consistent phonon theory (ISC), the ring diagrams up to O(λ⁴) (RING), the iteration scheme (ITER) derived from the Green's function method, and a scheme consisting of ITER plus the remaining O(λ⁴) contributions not included in ITER, which we call E(FULL). We have calculated the lattice constant, the volume expansion, the isothermal and adiabatic bulk modulus, the specific heat at constant volume and at constant pressure, and the Grüneisen parameter from two different potential functions: Lennard-Jones and Aziz. The Aziz potential generally gives better agreement with experimental data than the LJ potential for the QH, λ², λ⁴ and E(FULL) schemes. When only a partial sum of the λ⁴ diagrams is used in the calculations (e.g. RING and ISC), the LJ results are in better agreement with experiment. The iteration scheme brings a definitive improvement over the λ² PT for both potentials.
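Both of the preceding studies use the Lennard-Jones pair potential as one of the model interactions; for reference, its standard form is

```latex
V(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}
     - \left(\frac{\sigma}{r}\right)^{6}\right]
```

where ε is the well depth and σ the separation at which the potential crosses zero (the parameter values used for Kr and the fcc solid in these works are not reproduced here).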
Abstract:
Order parameter profiles extracted from the NMR spectra of model membranes are a valuable source of information about their structure and molecular motions. To analyze powder spectra, the de-Pake-ing (numerical deconvolution) technique can be used, but it assumes a random (spherical) distribution of orientations in the sample. Multilamellar vesicles are known to deform and orient in the strong magnetic fields of NMR magnets, producing non-spherical orientation distributions. A recently developed technique for simultaneously extracting the anisotropies of the system as well as the orientation distributions is applied to the analysis of partially magnetically oriented ³¹P NMR spectra of phospholipids. A mixture of synthetic lipids, POPE and POPG, is analyzed to measure the distortion of multilamellar vesicles in a magnetic field. In the analysis, three models describing the shape of the distorted vesicles are examined. Ellipsoids of rotation with a semiaxis ratio of about 1.14 are found to provide a good approximation of the shape of the distorted vesicles, which is in reasonable agreement with published experimental work. All three models yield clearly non-spherical orientational distributions, as well as a precise measure of the anisotropy of the chemical shift. Noise in the experimental data prevented the analysis from concluding which of the three models is the best approximation. A discretization scheme for achieving stability in the algorithm is outlined.
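The orientation information that de-Pake-ing and the related technique exploit comes from the angular dependence of the ³¹P chemical shift; for an axially symmetric shift tensor, the resonance position of a lamella whose normal makes an angle θ with the magnetic field follows the standard relation

```latex
\sigma(\theta) = \sigma_{\mathrm{iso}}
  + \frac{\Delta\sigma}{3}\left(3\cos^{2}\theta - 1\right),
\qquad
\Delta\sigma = \sigma_{\parallel} - \sigma_{\perp}
```

Conventional de-Pake-ing inverts the superposition of such lines under the assumption of an isotropic (sin θ-weighted) orientation distribution, which is exactly the assumption relaxed in this study.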