943 results for fixed point method


Relevance:

30.00%

Publisher:

Abstract:

Macroporosity is often used in the determination of soil compaction. Reduced macroporosity can lead to poor drainage, low root aeration and soil degradation. The aim of this study was to develop and test different models to estimate macro- and microporosity efficiently, using multiple regression. Ten soils were selected within a large range of textures: sand (Sa) 0.07-0.84, silt 0.03-0.24, and clay 0.13-0.78 kg kg⁻¹, and subjected to three compaction levels (three bulk densities, BD). Two models with similar accuracy were selected, with a mean error of about 0.02 m³ m⁻³ (2 %). The model y = a + b·BD + c·Sa, named model 2, was selected for its simplicity to estimate macroporosity (Ma), microporosity (Mi) or total porosity (TP): Ma = 0.693 - 0.465 BD + 0.212 Sa; Mi = 0.337 + 0.120 BD - 0.294 Sa; TP = 1.030 - 0.345 BD - 0.082 Sa; porosity values are expressed in m³ m⁻³, BD in kg dm⁻³, and Sa in kg kg⁻¹. The model was tested against 76 data sets from several other authors, with an observed error of about 0.04 m³ m⁻³ (4 %). Simulations of variations in BD as a function of Sa are presented for Ma = 0 and Ma = 0.10 (10 %). The macroporosity equation was remodeled to obtain other compaction indexes: a) to simulate maximum bulk density (MBD) as a function of Sa (Equation 11), in agreement with literature data; b) to simulate relative bulk density (RBD) as a function of BD and Sa (Equation 13); c) another model to simulate RBD as a function of Ma and Sa (Equation 16), confirming the independence of this variable from Sa for a fixed value of macroporosity and supporting the hypothesis of Hakansson & Lipiec that RBD = 0.87 corresponds approximately to 10 % macroporosity (Ma = 0.10 m³ m⁻³).
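Since model 2 is fully specified by the coefficients above, it can be evaluated directly. A minimal sketch in Python (our choice of language; the function name and example inputs are illustrative, not from the paper):

```python
def porosity(BD, Sa):
    """Model 2 of the abstract: macro- (Ma), micro- (Mi) and total
    porosity (TP) in m3 m-3, from bulk density BD (kg dm-3) and
    sand content Sa (kg kg-1)."""
    Ma = 0.693 - 0.465 * BD + 0.212 * Sa
    Mi = 0.337 + 0.120 * BD - 0.294 * Sa
    TP = 1.030 - 0.345 * BD - 0.082 * Sa  # note Ma + Mi == TP by construction
    return Ma, Mi, TP

# Hypothetical sandy soil: Sa = 0.84 kg kg-1 compacted to BD = 1.6 kg dm-3
print(porosity(1.6, 0.84))  # -> (~0.13, ~0.28, ~0.41)
```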

Relevance:

30.00%

Publisher:

Abstract:

Skeletal muscle mitochondrial (Mito) and lipid droplet (Lipid) content are often measured in human translational studies. Stereological point counting allows computing Mito and Lipid volume density (Vd) from micrographs taken with transmission electron microscopes. Former studies are not specific as to the size of the individual squares that make up the grids, making reproducibility difficult, particularly when different magnifications are used. Our objective was to determine which grid size would best predict fractional volume efficiently without sacrificing reliability, and to test a novel method to reduce sampling bias. Methods: Ten subjects underwent vastus lateralis biopsies. Samples were fixed, embedded, and cut longitudinally into ultrathin sections of 60 nm. Twenty micrographs of the intramyofibrillar region were taken per subject at ×33,000 magnification. Grids of different sizes were superimposed on each micrograph: 1,000 × 1,000 nm, 500 × 500 nm, and 250 × 250 nm. Results: Mean Mito and Lipid Vd were not statistically different across grids. Variability was greater when going from the 1,000 × 1,000 nm grid to the 500 × 500 nm grid than from the 500 × 500 nm to the 250 × 250 nm grid. Discussion: This study is the first to attempt to standardize grid size while keeping with conventional stereology principles, in the hope of producing replicable assessments that can be obtained universally across different studies of human skeletal muscle mitochondrial and lipid droplet content.
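Point counting estimates Vd as the fraction of regularly spaced grid points that fall on the structure of interest. A minimal sketch (Python/NumPy; the binary mask and pixel scale are hypothetical stand-ins for a segmented micrograph):

```python
import numpy as np

def volume_density(mask, grid_nm, nm_per_px):
    """Overlay a square grid of spacing grid_nm (nm) on a boolean
    structure mask and return the fraction of grid points hitting
    the structure - the stereological estimate of Vd."""
    step = max(1, round(grid_nm / nm_per_px))  # grid spacing in pixels
    return mask[::step, ::step].mean()

rng = np.random.default_rng(0)
mito = rng.random((2048, 2048)) < 0.1          # hypothetical mitochondria mask
for grid in (1000, 500, 250):                  # grid sizes compared in the study
    print(grid, volume_density(mito, grid, nm_per_px=2.0))
```

Smaller grid squares put more test points on each micrograph and hence give more stable estimates, at the cost of counting effort, which is the trade-off the study quantifies.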

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new non-parametric atlas registration framework, derived from the optical flow model and active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted from the positions of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve on state-of-the-art targeting methods while reducing computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far, and comparable to the targeting expert's variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.
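The abstract does not give the update equations, but as a rough illustration of the optical-flow family of non-parametric registration it builds on, here is a minimal Thirion-style demons sketch in Python/NumPy/SciPy (a generic construction of ours, not the authors' model, which additionally couples active-contour segmentation with the registration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(img, disp):
    """Resample img at positions displaced by the field disp = (dy, dx)."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    return map_coordinates(img, [yy + disp[0], xx + disp[1]],
                           order=1, mode='nearest')

def demons_register(fixed, moving, n_iter=100, sigma=2.0):
    """Optical-flow-derived (demons) registration: force along the
    fixed-image gradient, Gaussian smoothing as regularization."""
    disp = np.zeros((2,) + fixed.shape)
    gy, gx = np.gradient(fixed)
    for _ in range(n_iter):
        diff = warp(moving, disp) - fixed
        denom = gx**2 + gy**2 + diff**2 + 1e-9
        disp[0] -= diff * gy / denom          # demons force, y component
        disp[1] -= diff * gx / denom          # demons force, x component
        disp = np.stack([gaussian_filter(d, sigma) for d in disp])
    return disp
```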

Relevance:

30.00%

Publisher:

Abstract:

Ethyl glucuronide (EtG) is a minor, direct metabolite of ethanol. EtG is incorporated into growing hair, allowing retrospective investigation of chronic alcohol abuse. In this study, we report the development and validation of a method using gas chromatography-negative chemical ionization tandem mass spectrometry (GC-NCI-MS/MS) for the quantification of EtG in hair. EtG was extracted from about 30 mg of hair by aqueous incubation, purified by solid-phase extraction (SPE) on mixed-mode cartridges, and derivatized with perfluoropentanoic anhydride (PFPA). The analysis was performed in selected reaction monitoring (SRM) mode using the transitions m/z 347 → 163 (quantification) and m/z 347 → 119 (identification) for EtG, and m/z 352 → 163 for the internal standard EtG-d5. For validation, we prepared quality controls (QC) using hair samples taken post mortem from two subjects with a known history of alcoholism; these samples were confirmed by a proficiency test with 7 participating laboratories. The assay linearity of EtG was confirmed over the range 8.4 to 259.4 pg/mg hair, with a coefficient of determination (r²) above 0.999. The limit of detection (LOD) was estimated at 3.0 pg/mg, and the lower limit of quantification (LLOQ) was fixed at 8.4 pg/mg. Repeatability and intermediate precision (relative standard deviation, RSD %), tested at 4 QC levels, were less than 13.2 %. The method was applied to several hair samples obtained from autopsy cases with a history of alcoholism and/or alcohol-related lesions. EtG concentrations in hair ranged from 60 to 820 pg/mg.
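Quantification in such assays rests on a linear calibration against the deuterated internal standard. A hedged sketch of that arithmetic (Python/NumPy; the response values are invented for illustration, only the concentration range is the paper's):

```python
import numpy as np

# Hypothetical calibrators across the validated range (pg/mg hair)
conc  = np.array([8.4, 25.0, 50.0, 100.0, 180.0, 259.4])
ratio = np.array([0.021, 0.062, 0.125, 0.249, 0.451, 0.648])  # EtG / EtG-d5 area

slope, intercept = np.polyfit(conc, ratio, 1)   # unweighted fit, for simplicity
r2 = np.corrcoef(conc, ratio)[0, 1] ** 2        # should exceed 0.999 per the paper
sample = (0.150 - intercept) / slope            # back-calculate an unknown response
print(f"r2 = {r2:.4f}; sample = {sample:.1f} pg/mg")
```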

Relevance:

30.00%

Publisher:

Abstract:

This study compared the outcome of total knee replacement (TKR) in adult patients with fixed- and mobile-bearing prostheses during the first post-operative year and at five years' follow-up, using gait parameters as a new objective measure. This double-blind randomised controlled clinical trial included 55 patients with mobile-bearing (n = 26) or fixed-bearing (n = 29) prostheses of the same design, evaluated pre-operatively and post-operatively at six weeks, three months, six months, one year and five years. Each participant undertook two 30 m walking trials and completed the EuroQol questionnaire, the Western Ontario and McMaster Universities osteoarthritis index, the Knee Society score, and visual analogue scales for pain and stiffness. Gait analysis was performed using five miniature angular rate sensors mounted on the trunk (sacrum) and each thigh and calf. The study population was divided into two groups according to age (≤ 70 years versus > 70 years). Improvements in most gait parameters at five years' follow-up were greater with fixed-bearing TKRs in older patients (> 70 years) and greater with mobile-bearing TKRs in younger patients (≤ 70 years). These findings should be confirmed by an extended, age-controlled study, as the ideal choice of prosthesis may depend on the age of the patient at the time of surgery.

Relevance:

30.00%

Publisher:

Abstract:

Global positioning systems (GPS) offer a cost-effective and efficient method to input and update transportation data. The spatial locations of objects provided by GPS are easily integrated into geographic information systems (GIS), where storage, manipulation, and analysis of spatial data are relatively simple. However, many data storage and reporting methods at transportation agencies rely on linear referencing methods (LRMs); consequently, GPS data must be able to link with linear referencing. Unfortunately, the two systems are fundamentally incompatible in the way data are collected, integrated, and manipulated. For spatial data collected using GPS to be integrated into a linear referencing system or shared among LRMs, a number of issues need to be addressed. This report documents and evaluates several of those issues and offers recommendations. To evaluate the issues associated with integrating GPS data with an LRM, a pilot study was created: point features, a linear datum, and a spatial representation of an LRM were created for six test roadway segments located within the boundaries of the pilot study conducted by the Iowa Department of Transportation linear referencing system project team. Various issues in integrating point features with an LRM, or between LRMs, are discussed and recommendations provided. The accuracy of GPS is discussed, including issues such as point features mapping to the wrong segment. Another topic is the loss of spatial information that occurs when a three- or two-dimensional spatial point feature is converted to a one-dimensional representation on an LRM; recommendations include storing point features as spatial objects where necessary, or preserving information such as coordinates and elevation. The lack of spatial accuracy characteristic of most cartography, on which LRMs are often based, is also discussed, including the associated linear and horizontal offset errors. The final topic is issues in transferring point feature data between LRMs.
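The core of linking a GPS fix to an LRM is snapping the point to a route and expressing it as a measure along that route, with the perpendicular offset quantifying what the 2D-to-1D conversion discards. A toy sketch (Python; the route and coordinates are hypothetical, planar, e.g. in metres):

```python
import math

def linear_reference(point, polyline):
    """Project a 2D point onto a polyline; return (measure, offset):
    distance along the route to the nearest point, and the
    perpendicular offset lost in the 1D representation."""
    px, py = point
    best_off, best_m, run = float("inf"), 0.0, 0.0
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg = math.hypot(dx, dy)
        # parameter of the closest point on this segment, clamped to [0, 1]
        t = 0.0 if seg == 0 else max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg**2))
        off = math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))
        if off < best_off:
            best_off, best_m = off, run + t * seg
        run += seg
    return best_m, best_off

route = [(0, 0), (100, 0), (100, 50)]
print(linear_reference((60, 3), route))  # -> (60.0, 3.0)
```

A large offset is exactly the failure mode the report mentions (a fix snapping to the wrong segment), so storing the original coordinates alongside the measure keeps the conversion reversible.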

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Doublecortin (DCX) is a microtubule-associated protein expressed by migrating neural precursors. DCX is also expressed in approximately 4 % of all cortical cells in the adult normal primate brain, and its expression is enhanced locally in response to an acute insult to the brain, which is thought to play a role in plasticity or neural repair. It is therefore of interest to know how DCX expression is modified by a more chronic insult, such as the neurodegeneration of Parkinson's disease (PD) and Alzheimer's disease (AD). The aim of this study is to examine the expression of DCX cells in the cortex of patients with a neurodegenerative disease, compared to control patients. Method: DCX cells were quantified on 9 DCX-stained, 5 μm thick, formalin-fixed, paraffin-embedded brain sections: 3 Alzheimer's disease patients, 3 Parkinson's disease patients and 3 control patients. Several sections per patient were stained with different stains (Gallyas, tau, DCX). Using a computerized image analysis system (Explora Nova, La Rochelle, France), cortical columns were selected in cortical areas showing substantial degeneration, as subjectively assessed on the Gallyas- and tau-stained sections. The total number of cells was counted on the tau sections, where all nuclei were coloured blue, and the DCX cells were counted on the corresponding DCX sections. These values were standardized to a reference surface area, and the ratio of DCX cells over total cells was calculated. Results: DCX cell expression differs between Alzheimer's disease patients and control patients: the percentage of DCX cells in the cortex is around 12.54 % ± 2.17 % in Alzheimer's patients, versus around 5.47 % ± 0.83 % in controls. In contrast, there is no significant difference in the ratio of DCX cells over total cells between Parkinson's patients and control patients, both having around 5 % DCX cells. Discussion: There is a dramatic increase of DCX expression in AD (12.5 %) compared to PD and controls (5.5 %). The increased DCX ratio in AD may have two potential causes. 1. The increased ratio reflects DCX cells being more resistant to degeneration than the surrounding cells, which degenerate in AD and produce the cortical atrophy observed in these patients; the decrease in total cells without any change in the number of DCX cells would then raise the ratio relative to controls. 2. The increased ratio reflects an actual increase in DCX cells, i.e. neural repair compensating for the degenerative process, analogous to the repair observed after acute brain lesions. This second interpretation fits within the broader framework of neuroinflammation: disease progression would trigger neuroinflammation, followed by neural repair as a response to the primary inflammatory reaction. Our study thus suggests that the increase in DCX cells may be an attempt to repair degenerated neurons, in the context of neuroinflammation triggered by the pathophysiological progression of the disease.
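The quantification itself is simple arithmetic; a minimal sketch (Python; the counts and reference area are hypothetical):

```python
def dcx_metrics(dcx_count, total_count, column_area_um2, ref_area_um2=10_000.0):
    """Counts standardized to a reference surface area, plus the
    DCX+/total-cell percentage (the ratio itself is area-independent)."""
    scale = ref_area_um2 / column_area_um2
    return dcx_count * scale, total_count * scale, 100.0 * dcx_count / total_count

# e.g. 23 DCX+ cells among 190 nuclei in an 8,500 um^2 cortical column
print(dcx_metrics(23, 190, 8500.0))  # -> (~27.1, ~223.5, ~12.1 %)
```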

Relevance:

30.00%

Publisher:

Abstract:

Much of the analytical modeling of morphogen profiles is based on simplified scenarios in which the source is abstracted to be point-like and fixed in time, and only the steady-state solution of the morphogen gradient in one dimension is considered. Here we develop a general formalism for modeling diffusive gradient formation from an arbitrary source. This mathematical framework, based on the Green's function method, applies to various diffusion problems. In this paper, we illustrate the theory with the explicit example of Bicoid gradient establishment in Drosophila embryos, where the gradient forms by protein translation from an mRNA distribution followed by morphogen diffusion with linear degradation. We investigate quantitatively the influence of the spatial extension and time evolution of the source on the morphogen profile. For different biologically meaningful cases, we obtain explicit analytical expressions for both the steady-state and time-dependent 1D problems. We show that extended sources, whether of finite size or normally distributed, give rise to more realistic gradients than a single point source at the origin, while the steady-state solutions remain fully compatible with a decreasing exponential profile. We also consider the case of a dynamic source (e.g. bicoid mRNA diffusion), for which a protein profile similar to those obtained from static sources can be achieved.
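For a static source, the steady state of D C'' − k C + s(x) = 0 on the line is the convolution of the source with the Green's function G(u) = exp(−|u|/λ)/(2√(Dk)), with decay length λ = √(D/k). A numerical sketch (Python/NumPy; parameter values are illustrative, not the paper's):

```python
import numpy as np

D, k = 1.0, 0.1                        # diffusion and degradation rates
lam = np.sqrt(D / k)                   # decay length lambda
x = np.linspace(0.0, 50.0, 2001)
dx = x[1] - x[0]

s = np.exp(-0.5 * (x / 2.0) ** 2)      # normally distributed (extended) source
G = lambda u: np.exp(-np.abs(u) / lam) / (2.0 * np.sqrt(D * k))
C = np.array([np.sum(G(xi - x) * s) * dx for xi in x])   # C = G * s

# Far from the source the profile decays like exp(-x/lam):
fit_slope = np.polyfit(x[1200:1800], np.log(C[1200:1800]), 1)[0]
print(fit_slope, -1.0 / lam)           # both ~ -0.316
```

This reproduces the abstract's point in miniature: the extended source reshapes the profile near the origin, but the tail remains a decreasing exponential.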

Relevance:

30.00%

Publisher:

Abstract:

Companies are forced into various forms of cooperation in order to cope with intensifying competition. These cooperative relationships go by different names depending on the industry and on where in the supply chain they occur, but in principle they are all based on the same idea as Vendor Managed Inventory (VMI): information on inventory and demand is shared among the parties in the supply chain so that production, distribution and inventory management can be optimized. Vendor Managed Inventory is a simple idea, but it demands a great deal to succeed. The basic assumption is that the supplier must be able to manage the customer's inventory better than the customer itself, which is not possible without sufficient cooperation, the right kind of information, and suitable product characteristics. The purpose of this work is to present the critical success factors from the manufacturer's point of view when visibility into actual demand is poor and the products in question are, by their characteristics, poorly suited to the operating model. The suitability of the VMI model for the business of a mobile phone manufacturer, and its effect on customer cooperation, profitability and operational efficiency, are also examined.

Relevance:

30.00%

Publisher:

Abstract:

Communication technologies have developed enormously over recent decades. New networks, access technologies, protocols and terminal devices have been created at an ever-increasing pace, with no signs of slowing down; mobile applications in particular have been gaining market share. Unlicensed Mobile Access (UMA) is a new access technology for mobile terminals that provides access to the GSM core network over WLAN or Bluetooth. This master's thesis concentrates on the technologies related to UMA, which are examined more closely in the first chapters; the goal is to present what UMA means and how the different technologies can be applied in its implementations. Before new technologies can be exploited commercially, they must be comprehensively tested. Different testing methods are applied to hardware and software testing, but the goal is the same: to reduce the unreliability of the product under test and to increase its quality. Although UMA mainly builds on existing technologies, it still introduces a new network element and two new communication protocols. Before any UMA-capable solutions can be brought to market, many different kinds of testing must be performed to ensure the correct functionality of the new product. Because this thesis deals with a new technology, a separate chapter is devoted to general testing theory; it presents different perspectives on testing, on the basis of which a test application is built. The goal is to create software that can be used to verify the operation of the UMA-RR protocol in the target environment.

Relevance:

30.00%

Publisher:

Abstract:

The main goal of this paper is to propose a convergent finite volume method for a reaction–diffusion system with cross-diffusion. First, we sketch an existence proof for a class of cross-diffusion systems. Then the standard two-point finite volume fluxes are used in combination with a nonlinear positivity-preserving approximation of the cross-diffusion coefficients. Existence and uniqueness of the approximate solution are addressed, and it is also shown that the scheme converges to the corresponding weak solution for the studied model. Furthermore, we provide a stability analysis to study pattern-formation phenomena, and we perform two-dimensional numerical examples which exhibit formation of nonuniform spatial patterns. From the simulations it is also found that experimental rates of convergence are slightly below second order. The convergence proof uses two ingredients of interest for various applications, namely the discrete Sobolev embedding inequalities with general boundary conditions and a space-time $L^1$ compactness argument that mimics the compactness lemma due to Kruzhkov. The proofs of these results are given in the Appendix.
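As a minimal illustration of the two-point flux construction (in 1D, for a single nonlinear diffusion equation; the paper treats a 2D cross-diffusion system, so this is only a sketch of the flux-building idea):

```python
import numpy as np

def fv_step(u, dt, h, D):
    """One explicit finite volume step for u_t = (D(u) u_x)_x with
    no-flux boundaries, using two-point fluxes F_{i+1/2}."""
    Dface = 0.5 * (D(u[:-1]) + D(u[1:]))   # coefficient averaged to cell faces
    flux = -Dface * (u[1:] - u[:-1]) / h   # two-point flux between cells i, i+1
    du = np.zeros_like(u)
    du[:-1] -= flux / h                    # flux leaving the left cell...
    du[1:]  += flux / h                    # ...enters the right cell
    return u + dt * du

h, n = 0.01, 100
u = np.where(np.abs(np.linspace(0, 1, n) - 0.5) < 0.1, 1.0, 0.1)
for _ in range(500):
    u = fv_step(u, dt=2e-5, h=h, D=lambda v: v)   # degenerate D(u) = u
print(u.min(), u.sum() * h)   # positivity and exact mass conservation
```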

Relevance:

30.00%

Publisher:

Abstract:

Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) in the solution, where generation of spurious oscillations or smearing must be precluded. This work is devoted to the development of an efficient numerical technique for pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method developed in this study is based on meshless numerical particles that carry the solution along the characteristics defining the convective transport. The resolution of steep fronts of the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. For convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has spatial accuracy of second order and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in regions of steep fronts of the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
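The semi-Lagrangian idea in its simplest form: trace each grid node back along the characteristic and interpolate the old solution at the departure point. A sketch for constant-speed 1D advection (Python/NumPy; the paper's method instead transports adaptive meshless particles and projects them monotonically onto the grid):

```python
import numpy as np

def semi_lagrangian_step(u, x, a, dt):
    """One step for u_t + a u_x = 0: evaluate u at the feet of the
    characteristics, x - a*dt, by (monotone) linear interpolation."""
    return np.interp(x - a * dt, x, u)

x = np.linspace(0.0, 1.0, 401)
u = np.exp(-200.0 * (x - 0.2) ** 2)      # steep initial profile
for _ in range(100):
    u = semi_lagrangian_step(u, x, a=0.5, dt=0.01)
print(x[np.argmax(u)])                   # peak carried to x ~ 0.7
```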

Relevance:

30.00%

Publisher:

Abstract:

The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through development and refinement of numerical (computational) technology to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables; solutions are obtained by closed-form classical or modern algebraic methods, or by numerical methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications, and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage, and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but cannot handle inequality constraints. Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed; the solution space is presented as a ground-pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. The literature review first shows that the algebraic and numerical solution methods used in computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. This problem is solved here by adopting the main principles of algebraic geometry for solving parametric algebraic systems (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases) of n equations in at least n+1 variables. Adopting the developed solution method to solve the dyadic equations in direct polynomial form for two to three precision points, it is algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal, defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process, resulting in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on combinations of the two-precision-point formulation and on optimisation of substructures (with mathematical programming techniques, or with optimisation methods based on probability and statistics) using criteria calculated from the system-level response of multidegree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, drawbacks (ia)-(iib) are cancelled. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when integrated with mechanical system simulation techniques.
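The dyadic equations mentioned above have a compact classical form. For three precision positions, the standard-form equation W(e^{iβj} − 1) + Z(e^{iαj} − 1) = δj (j = 2, 3) becomes linear in the dyad vectors W and Z once the crank rotations βj are taken as free choices. A sketch with hypothetical numbers (Python/NumPy; this is textbook exact synthesis, not the thesis's algebraic-geometry method):

```python
import numpy as np

alpha = [0.3, 0.7]                 # prescribed coupler rotations (rad)
delta = [0.8 + 0.3j, 1.6 + 1.0j]   # prescribed complex displacements
beta  = [0.5, 1.1]                 # free choices: crank rotations (rad)

# Two complex equations, linear in the unknown dyad vectors W, Z
A = np.array([[np.exp(1j * beta[j]) - 1, np.exp(1j * alpha[j]) - 1]
              for j in range(2)])
W, Z = np.linalg.solve(A, np.array(delta))
print(W, Z)
print(np.allclose(A @ np.array([W, Z]), delta))   # residual check -> True
```

Sweeping the free choices βj produces the ground-pivot map whose ambiguity, and whose positive-dimensional singular sets at the poles, the thesis resolves.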

Relevance:

30.00%

Publisher:

Abstract:

We show that the quasifission paths predicted by the one-body dissipation dynamics, in the slowest phase of a binary reaction, follow a quasistatic path, which represents a sequence of states of thermal equilibrium at a fixed value of the deformation coordinate. This establishes the use of the statistical particle-evaporation model in the case of dynamical, time-evolving systems. Pre- and post-scission multiplicities of neutrons and total multiplicities of protons and α particles in the fission reactions ⁶³Cu+⁹²Mo, ⁶⁰Ni+¹⁰⁰Mo and ⁶³Cu+¹⁰⁰Mo at 10 MeV/u, and ²⁰Ne+¹⁴⁴,¹⁴⁸,¹⁵⁴Sm at 20 MeV/u, are reproduced reasonably well with statistical model calculations performed along dynamic trajectories whose slow stage (from the most compact configuration up to the point where the neck starts to develop) lasts some 35 × 10⁻²¹ s.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this thesis was to create a framework for defining a manufacturing strategy that takes advantage of the product life cycle method, enabling PQP enhancements. The starting point was to study the simultaneous implementation of cost leadership and differentiation strategies in different stages of the life cycle. It was soon observed that Porter's strategies were too generic for a complex and dynamic environment in which customer needs deviate by market and by product. Therefore, the strategy formulation process is based on Terry Hill's order-winner and qualifier concepts. Manufacturing strategy formulation begins with the definition of order-winning and qualifying criteria. From these criteria, product-specific proposals for action and production-site-specific key manufacturing tasks can be shaped, which must be addressed in order to meet customer and market needs. As future research it is suggested that the process of capturing order-winners and qualifiers be developed so that it is simple and streamlined at Wallac Oy. In addition, the defined strategy process should be integrated into PerkinElmer's SGS (Strategic Goal Setting) process, one of PerkinElmer's core management processes.