925 results for Free-space method


Relevance: 30.00%

Abstract:

The purpose of the thesis is to generate scenarios of the future purposes and uses of ships, suitable for STX Finland Cruise Oy to design and build, over a 50-year time span by applying the Delphi method and an open innovation approach in a future workshop. The scenarios were mapped out with the help of two Delphi survey rounds and one future workshop. Both surveys and the workshop involved some twenty experts each, representing various fields. On the basis of the first survey round, four different subject areas were selected for analysis: purposes for the use of ships; energy efficiency of cruises and ships; cost efficiency of sea transportation and vacation; and the views and expectations of the customers in the future. As a result of the future workshop, four different themes were established, which were studied further during the second Delphi round. The themes are future service and operation concepts; versatile uses of the space in ships; communication of the environmental benefits of ships, future energy solutions and social interaction between passengers onboard. In addition to generating the scenarios, a further aim of the thesis is to implement the Delphi method and workshop activity as foresight tools for STX Europe and to produce a chart of a future shipbuilding foresight community that can serve the open innovation processes in the maritime cluster as a whole.

Relevance: 30.00%

Abstract:

A novel, sensitive and relatively selective kinetic method is presented for the determination of V(V), based on its catalytic effect on the oxidation of Ponceau Xylydine by potassium bromate in the presence of 5-sulfosalicylic acid (SSA) as activator. The reaction was monitored spectrophotometrically by measuring the decrease in absorbance of Ponceau Xylydine at 640 nm between 0.5 and 7 min (the fixed-time method) in H3PO4 medium at 25 °C. The effects of various parameters, such as the concentrations of H3PO4, SSA, bromate and Ponceau Xylydine, temperature and ionic strength, on the net reaction rate were studied. The method is free from most interferences, especially from large amounts of V(IV). The decrease in absorbance is proportional to the concentration of V(V) over the entire concentration range tested (1-15 ng mL−1), with a detection limit of 0.46 ng mL−1 (according to the statistical 3Sblank/k criterion) and a coefficient of variation (CV) of 1.8% (for ten replicate measurements at the 95% confidence level). The proposed method suffers from only a few interferences, such as Cr(VI) and Hg(II) ions. The method was successfully applied to the determination of V(V) in tap water, drinking water, bottled mineral water samples and a certified standard reference material (SRM-1640) with satisfactory results. The vanadium contents of the water samples were also determined by FAAS for comparison. The recovery of spiked vanadium(V) was found to be quantitative and the reproducibility was satisfactory. The results for SRM 1640 were in good agreement with the certified value.
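
As a rough illustration of the fixed-time calibration and the 3Sblank/k detection-limit criterion mentioned in the abstract, the Python sketch below fits a linear calibration and computes the detection limit; all concentrations, absorbance values and variable names are invented for illustration, not taken from the paper.

```python
# Minimal sketch (hypothetical data): fixed-time calibration for a catalytic
# kinetic method and the 3*S_blank/k detection-limit criterion.
import numpy as np

# Hypothetical V(V) standards (ng/mL) and the measured decrease in absorbance
# between 0.5 and 7 min (fixed-time signal).
conc = np.array([1.0, 3.0, 5.0, 8.0, 12.0, 15.0])
delta_A = np.array([0.021, 0.060, 0.101, 0.158, 0.242, 0.299])

# Ten hypothetical blank readings (no V(V) added).
blanks = np.array([0.003, 0.002, 0.004, 0.003, 0.002,
                   0.003, 0.004, 0.002, 0.003, 0.003])

# Linear calibration: delta_A = k * conc + b
k, b = np.polyfit(conc, delta_A, 1)

s_blank = blanks.std(ddof=1)     # standard deviation of the blank
lod = 3 * s_blank / k            # detection limit by the 3*S_blank/k criterion

print(f"slope k = {k:.4f} AU per ng/mL, LOD = {lod:.2f} ng/mL")
```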

Relevance: 30.00%

Abstract:

The pulsed electroacoustic (PEA) method is a commonly used non-destructive technique for investigating space charges, and it has been under development since the early 1980s. There is continuing interest in a better understanding of the influence of space charge on the reliability of solid electrical insulation under high electric fields. The PEA method is widely used for space charge profiling because it is robust and relatively inexpensive. The PEA technique relies on a voltage impulse used to temporarily disturb the space charge equilibrium in a dielectric. The acoustic wave generated by the charge movement in the sample is detected by means of a piezoelectric film, and the spatial distribution of the space charge is contained within the detected signal. The principle of such a system is already well established, and several kinds of setups have been constructed for different measurement needs. This thesis presents the design of a PEA measurement system as a systems engineering project. The operating principle and some recent developments are summarised. The steps of the electrical and mechanical design of the instrument are discussed. A common procedure for measuring space charges is explained and applied to verify the functionality of the system. The measurement system is provided as an additional basic research tool for the Corporate Research Centre of ABB (China) Ltd. It can be used to characterise flat samples with a thickness of 0.2–0.5 mm under DC stress. The spatial resolution of the measurement is 20 μm.
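
The following is a minimal sketch of the time-to-depth mapping underlying PEA profiling, assuming an acoustic velocity typical of polymeric insulation (about 2000 m/s) and an illustrative 10 ns impulse; the numbers are assumptions for illustration, not values reported in the thesis.

```python
# Minimal sketch (illustrative numbers): mapping the time axis of a detected
# PEA signal to position inside the sample, and estimating spatial resolution.
import numpy as np

v_sound = 2000.0           # m/s, assumed acoustic velocity in the dielectric
sample_thickness = 0.3e-3  # m, a flat sample in the 0.2-0.5 mm range
pulse_width = 10e-9        # s, assumed width of the electrical impulse

# Transit time across the sample, and the depth of the charge layer that
# produced the echo arriving at each sampling instant.
t_transit = sample_thickness / v_sound
t = np.linspace(0.0, t_transit, 256)
depth = v_sound * t

# Spatial resolution is limited roughly by the acoustic length of the pulse.
resolution = v_sound * pulse_width

print(f"max depth {depth[-1]*1e3:.2f} mm, transit time {t_transit*1e9:.0f} ns, "
      f"resolution ~{resolution*1e6:.0f} um")
```

With these assumed values, the acoustic pulse length works out to about 20 μm, which is consistent with the spatial resolution quoted in the abstract.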

Relevance: 30.00%

Abstract:

Preparation of optically active compounds is of high importance in modern medicinal chemistry. Despite recent advances in the field of asymmetric synthesis, resolution of racemates still remains the most widely used way to prepare single enantiomers on an industrial scale, owing to its cost-efficiency and simplicity. Enzymatic kinetic resolution (KR) of racemates is a classical method for the separation of enantiomers. One of its drawbacks is that the yield of the target enantiomer is limited to 50%. Dynamic Kinetic Resolution (DKR) makes it possible to reach yields of up to 100% by in situ racemization of the less reactive enantiomer. In the first part of this thesis, a number of half-sandwich ruthenium complexes were prepared and evaluated as catalysts for the racemization of optically active secondary alcohols. A leading catalyst, Bn5CpRu(CO)2Cl, was identified. The catalyst discovered was extensively characterized through its application to the DKR of a broad range of secondary alcohols at a wide range of reaction scales (1 mmol – 1 mol). A cost-efficient, chromatography-free procedure for the preparation of this catalyst was developed. Further, detailed kinetic and mechanistic studies of the racemization reactions were performed. Comparison of racemization rates in the presence of the Bn5CpRu(CO)2Cl and Ph5CpRu(CO)2Cl catalysts reveals that the performance of the catalytic system can be adjusted by matching the electronic properties of the catalysts and the substrates. Moreover, a dependence of the rate-limiting step on the electronic properties of the reagents was observed, and important conclusions about the reaction mechanism were drawn. Finally, an alternative approach to the DKR of amines based on spatially separated vessels was addressed. This procedure allows the combination of a thermolabile enzyme with racemization catalysts that are active only at high temperatures.

Relevance: 30.00%

Abstract:

This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs. Furthermore, these accounts assume a directionality whereby the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically. Additionally, the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus, and the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of the Neighbor Verb. A neighbor verb exists for a reflexive verb if they share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and that the relation between the reflexive and neighbor verbs constitutes a cross-paradigmatic relation. Furthermore, the relation between the reflexive and the neighbor verb is argued to be one of symbolic connectivity rather than directionality. Effectively, the relation holding between particular instantiations can vary, and the theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model the variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items; instead, items are assumed to be interconnected in a network. This interconnectedness is defined as the Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood, and this interconnectedness is modeled through rhyme verbs constituting the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies, and this variability is quantified using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items. Frequency of use has been one of the primary variables used in functional linguistics to probe this. In addition, a new variable called Constructional Entropy is introduced in this study, building on information theory. It is a quantification of the amount of information carried by a particular reflexive verb in one or more argument constructions. The results of the lexical connectivity indicate that the reflexive verbs have statistically greater neighborhood distances than the neighbor verbs. This distributional property can be used to motivate the traditional observation that reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, is proposed for the reflexive verbs in this study.
In addition to the variables associated with lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered. Finally, most construction grammars assume that argument constructions form a network structure. A new method is proposed that establishes a generalization over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker, and a new model is set forth that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
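
As a minimal sketch of the Constructional Entropy idea described above, the snippet below computes the Shannon entropy of a verb's distribution over argument constructions; the verb's construction labels and counts are invented for illustration and do not come from the study's data.

```python
# Minimal sketch (invented counts): "constructional entropy" as the Shannon
# entropy of one reflexive verb's distribution over argument constructions.
import math
from collections import Counter

# Hypothetical counts of argument constructions observed with one verb.
construction_counts = Counter({
    "intransitive": 40,
    "oblique_instrumental": 25,
    "prepositional": 10,
})

def constructional_entropy(counts):
    """Shannon entropy (bits) of a verb's construction distribution."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

print(f"H = {constructional_entropy(construction_counts):.3f} bits")
```

A verb used almost exclusively in one construction yields an entropy near zero, while a verb spread evenly over many constructions yields a higher value, which is the sense in which the measure quantifies the information carried by the verb.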

Relevance: 30.00%

Abstract:

The aim of this study was to investigate the occurrence of Toxoplasma gondii and compare the results obtained with the Modified Agglutination Test (MAT), the Polymerase Chain Reaction (PCR) and a bioassay in mice. To this end, 40 free-range chickens from eight farms in areas neighboring the Pantanal in Nhecolândia, Mato Grosso do Sul, were euthanized, and blood samples, brain and heart were collected. The occurrence of anti-T. gondii antibodies in the chickens was 67.5% (27 samples), considering the 1:5 dilution as the cutoff point. Among the samples analyzed, 7 (25.9%) were positive at the 1:5 dilution, 3 (11.1%) at 1:10, 2 (7.4%) at 1:20, 3 (11.1%) at 1:320, 1 (3.7%) at 1:640, 3 (11.1%) at 1:1280, 2 (7.4%) at 1:2560, 4 (14.8%) at 1:5120 and 2 (7.4%) at 1:10,240. From the mixture of tissue samples (brain and heart) of the chickens analyzed, 16 (40%) presented electrophoretic bands compatible with T. gondii by PCR (gene B1). When the techniques were compared, PCR was positive for 59.26% of the animals that were seropositive in MAT (cutoff 1:5). Of the 141 inoculated mice, six (4.44%) died of acute toxoplasmosis between 15 and 23 days after inoculation. Surviving mice were sacrificed 74 days after inoculation, and a total of 28 cysts were found in the brains of 10 distinct groups. From the seropositive hens, 27 bioassays were performed and 11 (40.7%) isolates were obtained. A greater number of isolations occurred in mice that were inoculated with tissues from chickens with high anti-T. gondii antibody titers. Chronic infection in mice was observed in nine groups (33.3%) from five different properties. Among the surviving mice, 25.6% were positive for T. gondii in MAT (1:25). Of the mice positive by PCR, 87.5% were also positive in MAT, while among the PCR-negative mice, 5.2% were positive for T. gondii in MAT. It can be concluded from this study that the occurrence of infection by T. gondii on the rural properties studied was high; that PCR directed at gene B1 does not confirm the viability of the parasite but can be used as a screening method for selecting chickens infected by T. gondii; that animals with titers greater than 10 should be prioritized when selecting animals for bioassay, since the chances of isolating the parasite from them are greater; and that seroconversion in experimentally infected mice is not a good indicator for isolating the agent.

Relevance: 30.00%

Abstract:

The determination of the intersection curve between Bézier Surfaces may be seen as the composition of two separate problems: determining initial points and tracing the intersection curve from these points. A Bézier Surface is represented by a parametric function (a polynomial in two variables) that maps points from the two-dimensional parametric space into three-dimensional space. In this article, an algorithm is proposed to determine the initial points of the intersection curve of Bézier Surfaces, based on the solution of polynomial systems with the Projected Polyhedral Method, followed by a method for tracing the intersection curves (the Marching Method with differential equations). In order to allow the use of the Projected Polyhedral Method, the equations of the system must be represented in terms of the Bernstein basis; to this end, a robust and reliable algorithm is proposed to exactly transform a multivariable polynomial written in terms of the power basis into a polynomial written in terms of the Bernstein basis.
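
To make the basis change concrete, here is a minimal univariate sketch of an exact power-to-Bernstein conversion on [0, 1] using the standard coefficient formula; it is an illustration of the general idea only, not the article's multivariable algorithm (which applies a change of basis in each variable).

```python
# Minimal sketch: exact conversion of a degree-n polynomial on [0, 1] from the
# power basis {x^i} to the Bernstein basis {B_{k,n}(x)}, using
#   c[k] = sum_{i<=k} C(k, i) / C(n, i) * a[i]
from fractions import Fraction
from math import comb

def power_to_bernstein(a):
    """a[i] are power-basis coefficients; returns exact Bernstein coefficients."""
    n = len(a) - 1
    return [sum(Fraction(comb(k, i), comb(n, i)) * Fraction(a[i])
                for i in range(k + 1))
            for k in range(n + 1)]

# Example: p(x) = 1 - 2x + 3x^2  ->  Bernstein coefficients [1, 0, 2] on [0, 1].
print(power_to_bernstein([1, -2, 3]))
```

Using exact rational arithmetic mirrors the "exact transformation" requirement: no floating-point error is introduced by the change of basis itself.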

Relevance: 30.00%

Abstract:

The Mathematica system (version 4.0) is employed in the solution of nonlinear diffusion and convection-diffusion problems, formulated as transient one-dimensional partial differential equations with potential-dependent equation coefficients. The Generalized Integral Transform Technique (GITT) is first implemented for the hybrid numerical-analytical solution of such classes of problems, through the symbolic integral transformation and elimination of the space variable, followed by the use of the built-in Mathematica function NDSolve to handle the resulting transformed ODE system. This approach offers an error-controlled final numerical solution, through the simultaneous control of local errors in this reliable ODE solver and of the proposed truncation order of the eigenfunction expansion. For co-validation purposes, the same built-in function NDSolve is employed in the direct solution of these partial differential equations, as made possible by the algorithms implemented in Mathematica (versions 3.0 and up), based on the application of the method of lines. Various numerical experiments are performed and the relative merits of each approach are critically pointed out.
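
For readers unfamiliar with the direct route mentioned above, here is a minimal method-of-lines sketch in Python (rather than Mathematica) for a one-dimensional nonlinear diffusion problem with a potential-dependent coefficient; the equation, coefficient and boundary conditions are illustrative assumptions, not the problems solved in the paper.

```python
# Minimal method-of-lines sketch (illustrative problem):
#   u_t = d/dx( D(u) du/dx ),  0 < x < 1,  u(0,t) = 1, u(1,t) = 0,
# with a potential-dependent coefficient D(u) = 1 + u.
import numpy as np
from scipy.integrate import solve_ivp

N = 51
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def rhs(t, u):
    u = u.copy()
    u[0], u[-1] = 1.0, 0.0                 # Dirichlet boundary values
    D = 1.0 + u                            # nonlinear coefficient D(u)
    D_face = 0.5 * (D[:-1] + D[1:])        # coefficient at cell interfaces
    flux = D_face * np.diff(u) / dx        # F_{i+1/2} = D_{i+1/2}(u_{i+1}-u_i)/dx
    dudt = np.zeros_like(u)
    dudt[1:-1] = np.diff(flux) / dx        # interior nodes only
    return dudt

u0 = np.zeros(N)
u0[0] = 1.0
sol = solve_ivp(rhs, (0.0, 0.2), u0, method="LSODA", rtol=1e-8, atol=1e-10)
print(sol.y[N // 2, -1])                   # midpoint value at the final time
```

The spatial discretization turns the PDE into a stiff ODE system whose local error is controlled by the solver, which is the same division of labor exploited by NDSolve in both solution paths described in the abstract.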

Relevance: 30.00%

Abstract:

This paper gives a detailed presentation of the Substitution-Newton-Raphson method, suitable for large sparse non-linear systems. It combines the Successive Substitution method and the Newton-Raphson method in such a way as to take advantage of the best features of both, keeping the convergence characteristics of Newton-Raphson with the low memory and time requirements of Successive Substitution schemes. The large system is solved employing a few effective variables, using as many of the model equations as possible in substitution fashion to fix the remaining variables, while maintaining the convergence characteristics of Newton-Raphson. The methodology is exemplified through a simple algebraic system, and applied to a simple thermodynamic, mechanical and heat-transfer model of a single-stage vapor compression refrigeration system. Three distinct approaches for reproducing the thermodynamic properties of the refrigerant R-134a are compared: linear interpolation of tabulated data, the use of fitted polynomial curves, and the use of functions derived from the Helmholtz free energy.
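
The following is a minimal sketch of the general idea on a made-up two-equation system (not the paper's example): one equation is used in substitution fashion to fix a dependent variable, while Newton-Raphson iterates only on the residual of a single effective variable.

```python
# Minimal sketch (made-up system) of the Substitution-Newton-Raphson idea.
# Effective variable: x.  Dependent variable y is fixed by substitution;
# Newton-Raphson iterates only on the remaining residual in x.
import math

def substitute(x):
    """Model equation used in substitution fashion: y = cos(x)."""
    return math.cos(x)

def residual(x):
    """Remaining model equation as a residual: x^2 + y - 2 = 0."""
    y = substitute(x)
    return x * x + y - 2.0

def newton(x, tol=1e-10, max_iter=50, h=1e-7):
    for _ in range(max_iter):
        f = residual(x)
        if abs(f) < tol:
            break
        dfdx = (residual(x + h) - f) / h   # numerical Jacobian (1x1 here)
        x -= f / dfdx
    return x, substitute(x)

x, y = newton(1.0)
print(f"x = {x:.6f}, y = {y:.6f}")
```

The Newton step only ever sees the effective variable, so the Jacobian stays small even when the substitution chain fixes many dependent variables, which is the memory advantage the abstract refers to.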

Relevance: 30.00%

Abstract:

Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient designs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth to applications that are computation- but not data-intensive is often infeasible in a practical implementation. This thesis aims to perform architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing, can be used. For implementation, these techniques should be customized according to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault in one component makes the connected fault-free components inoperative. A resource-sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also helps narrow down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.

Relevance: 30.00%

Abstract:

Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots. However, operators of currently available robots are working within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning; this accuracy requirement gives rise to a tool center point calibration problem if measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market, and manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, however, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for all robots that have two free digital input ports. It builds on the traditional method of using a light barrier to detect the tool in the robot coordinate system, but utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. Rotations about two axes are defined by the center axis. The last rotation, about the Z-axis, is calculated for tools whose widths along the X- and Y-axes differ. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was obtained. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should be conducted using a more accurate manipulator, since the method employs the robot itself as a measuring device.
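
As a minimal geometric sketch of the "rotations defined by the center axis" step, the snippet below assumes the two light barriers have already yielded a unit direction vector for the tool's center axis in the robot frame and computes the rotation that aligns the flange Z-axis with it; the measured vector is hypothetical, and the final rotation about Z for asymmetric tools is deliberately left out.

```python
# Minimal sketch (assumed input): given a measured unit vector for the tool's
# center axis in the robot frame, build the rotation aligning flange Z with it.
import numpy as np

def align_z_with(axis):
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    z = np.array([0.0, 0.0, 1.0])
    rot_axis = np.cross(z, axis)
    s = np.linalg.norm(rot_axis)           # sin of the rotation angle
    c = float(np.dot(z, axis))             # cos of the rotation angle
    if s < 1e-12:                          # already aligned (or exactly opposite)
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = rot_axis / s
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1 - c) * K @ K   # Rodrigues' rotation formula

R = align_z_with([0.02, -0.01, 0.999])     # hypothetical measured axis
print(np.round(R @ np.array([0.0, 0.0, 1.0]), 4))  # ~ the measured axis
```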

Relevance: 30.00%

Abstract:

In the present study, histopathological analysis of rat mesentery was used to quantify the effect of two anti-inflammatory agents, dexamethasone (Dex) and pertussis toxin (Ptx), on leukocyte migration. The intravenous injection of Dex (1 mg/kg) and Ptx (1,200 ng) 1 h prior to the intraperitoneal injection of the inflammatory stimuli lipopolysaccharide (LPS) or formyl-methionyl-leucyl-phenylalanine (fMLP) significantly reduced the neutrophil diapedesis (LPS: Ptx = 0.86 ± 0.19 and Dex = 0.35 ± 0.13 vs saline (S) = 2.85 ± 0.59; fMLP: Ptx = 0.43 ± 0.09 and Dex = 0.01 ± 0.01 vs S = 1.08 ± 0.15 neutrophil diapedesis/field) and infiltration (LPS: Ptx = 6.29 ± 1.4 and Dex = 3.06 ± 0.76 vs S = 15.94 ± 3.97; fMLP: Ptx = 3.85 ± 0.56 and Dex = 0.40 ± 0.16 vs S = 7.15 ± 1.17 neutrophils/field) induced by the two agonists in the rat mesentery. The inhibitory effect of Dex and Ptx was clearly visible in the fields nearest the venule (up to 200 µm), demonstrating that these anti-inflammatory agents act preferentially on the transmigration of neutrophils from the vascular lumen into the interstitial space, but not on cell movement in response to a haptotactic gradient. The mesentery of rats pretreated with Dex showed a decreased number of neutrophils within the venules (LPS: Dex = 1.50 ± 0.38 vs S = 4.20 ± 1.01; fMLP: Dex = 0.25 ± 0.11 vs S = 2.20 ± 0.34 neutrophils in the lumen/field), suggesting that this inhibitor may act at a step that precedes neutrophil arrival in the inflamed tissue. In contrast to what was observed with Dex treatment, the number of neutrophils found in the mesenteric venules was significantly elevated in animals pretreated with Ptx (LPS: Ptx = 9.85 ± 2.25 vs S = 4.20 ± 1.01; fMLP: Ptx = 4.66 ± 1.24 vs S = 2.20 ± 0.34 neutrophils in the lumen/field). This discrepancy shows that Ptx and Dex act via different mechanisms and suggests that Ptx prevents the locomotion of neutrophils from the vascular lumen to the interstitial space. In conclusion, the method described here is useful for quantifying the inflammatory and anti-inflammatory effects of different substances. The advantage of this histopathological approach is that it provides additional information about the steps involved in leukocyte migration.

Relevance: 30.00%

Abstract:

This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task whenever mathematical modeling is applied to application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form; hence, approximation methods are needed. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, the extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on the available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of the importance distribution can cause the particle filter algorithm to fail to converge. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, parameter estimation can be carried out by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends highly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian, in which case the covariance matrix must be well tuned; to tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
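
To make the particle-filtering idea concrete, here is a minimal bootstrap particle filter for a toy scalar state space model with nonlinear dynamics; the model, noise levels and particle count are illustrative assumptions, and the bootstrap filter simply uses the transition density as the importance distribution (the design choice whose consequences the abstract discusses).

```python
# Minimal bootstrap particle filter sketch for a toy scalar model:
#   x_k = 0.8 x_{k-1} + sin(x_{k-1}) + q_k,  q_k ~ N(0, 0.1^2)
#   y_k = x_k + r_k,                         r_k ~ N(0, 0.5^2)
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 500
q_std, r_std = 0.1, 0.5

# Simulate a trajectory and measurements from the model.
x_true = np.zeros(T)
y = np.zeros(T)
for k in range(1, T):
    x_true[k] = 0.8 * x_true[k - 1] + np.sin(x_true[k - 1]) + q_std * rng.standard_normal()
    y[k] = x_true[k] + r_std * rng.standard_normal()

# Bootstrap particle filter: propagate, weight, estimate, resample.
particles = rng.standard_normal(N)
estimates = np.zeros(T)
for k in range(1, T):
    # Propagate through the dynamics (transition density as importance dist.).
    particles = 0.8 * particles + np.sin(particles) + q_std * rng.standard_normal(N)
    # Weight by the measurement likelihood and normalize.
    w = np.exp(-0.5 * ((y[k] - particles) / r_std) ** 2)
    w /= w.sum()
    estimates[k] = np.sum(w * particles)
    # Multinomial resampling to counteract weight degeneracy.
    particles = rng.choice(particles, size=N, p=w)

print("RMSE:", np.sqrt(np.mean((estimates[1:] - x_true[1:]) ** 2)))
```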

Relevance: 30.00%

Abstract:

The interaction mean free path between neutrons and TRISO particles is simulated using scripts written in MATLAB, in order to address the error that grows with increasing packing factor in the reactor physics code Serpent. Neutron movement is tracked both in an unbounded and in a bounded space. Depending on the program, the track is calculated directly, using the position vectors of the neutrons and the surface equations of all the fuel particles; by dividing the space into multiple subspaces, each containing a fraction of the total number of particles, and considering only the particles in the subspaces through which the neutron passes; or by considering only the particles that lie within an infinite cylinder formed around the movement axis of the neutron. The mean free path estimate from the current analytical model utilized by Serpent, which is based on an exponential distribution, is used as the reference result. The results from the implicit model in Serpent imply that the mean free path is too long at high packing factors. The results obtained here support this observation: with a packing factor of 17%, the mean free path is approximately 2.46% shorter than in the reference model. This is further supported by the packing factor experienced by the neutrons, which the simulation estimated at 17.29%. It was also observed that neutrons leaving the surfaces of the fuel particles, in contrast to those starting inside the moderator, do not follow the exponential distribution. The current model, as it is, is thus not valid for determining the free path lengths of the neutrons.
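
For context, the following is a minimal sketch of the analytical reference model mentioned above: free path lengths drawn from an exponential distribution whose mean is set by the geometric cross-section and number density of the fuel particles. The particle radius and packing factor are illustrative assumptions, and the explicit surface-tracking comparison performed in the thesis is not reproduced here.

```python
# Minimal sketch of the exponential-distribution reference model for neutron
# free paths between spherical fuel particles (illustrative radius and
# packing factor; not the thesis's explicit surface-tracking calculation).
import numpy as np

rng = np.random.default_rng(1)

r = 0.5e-3               # m, assumed outer radius of a fuel particle
packing_factor = 0.17    # fuel particle volume fraction

# Number density of particles and macroscopic geometric cross-section.
n = packing_factor / (4.0 / 3.0 * np.pi * r ** 3)   # particles per m^3
Sigma = n * np.pi * r ** 2                           # 1/m

mean_free_path = 1.0 / Sigma
samples = -np.log(rng.random(100_000)) / Sigma       # exponential free paths

print(f"analytic MFP = {mean_free_path*1e3:.3f} mm, "
      f"sampled mean = {samples.mean()*1e3:.3f} mm")
```

The thesis's point is precisely that, for densely packed non-overlapping particles and for neutrons born on particle surfaces, the true free-path distribution deviates from this exponential model.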

Relevance: 30.00%

Abstract:

Free and total carnitine quantification is important as a complementary test for the diagnosis of unusual metabolic diseases, including fatty acid degradation disorders. The present study reports a new method for the quantification of free and total carnitine in dried plasma specimens by isotope dilution electrospray tandem mass spectrometry with sample derivatization. Carnitine is determined by scanning for the precursors of the m/z = 103 ion of the N-butyl ester derivative, and the method was validated by comparison with a radioenzymatic assay. We obtained inter- and intra-day assay coefficients of variation of 4.3 and 2.3, respectively. Free and total carnitine was analyzed in 309 dried plasma spot samples from children ranging in age from newborn to 14 years using the new method, which was found to be suitable for calculating age-related reference values for free and total carnitine (less than one month: 19.3 ± 2.4 and 23.5 ± 2.9; one to twelve months: 28.8 ± 10.2 and 35.9 ± 11.4; one to seven years: 30.7 ± 10.3 and 38.1 ± 11.9; seven to 14 years: 33.7 ± 11.6 and 43.1 ± 13.8 µM, respectively). No difference was found between males and females, but a significant difference was observed between neonates and the other age groups. We compare our data with reference values in the literature, most of them obtained by radioenzymatic assay; however, that method is laborious and time-consuming. The electrospray tandem mass spectrometry method presented here is a reliable, rapid and automated procedure for carnitine quantitation.