939 results for reduced-order observers
Abstract:
We compute all terms up to and including order c⁻² in the dynamical equations for extended bodies interacting through electromagnetic, gravitational, or short-range fields. We show that, assuming spherical symmetry, these equations can be reduced to those of point particles with intrinsic angular momentum.
Abstract:
PURPOSE: To evaluate the technical quality and diagnostic performance of a low-volume contrast medium protocol (25 mL) at 64-detector spiral computed tomography (CT) in the diagnosis and management of adult nontraumatic subarachnoid hemorrhage (SAH). MATERIALS AND METHODS: This study was performed outside the United States and was approved by the institutional review board. Intracranial CT angiography was performed in 73 consecutive patients with nontraumatic SAH diagnosed at nonenhanced CT. Image quality was evaluated by two observers using two criteria: degree of arterial enhancement and venous contamination. The two independent readers evaluated diagnostic performance (lesion detection and correct therapeutic decision making) by using rotational angiographic findings as the standard of reference. Sensitivity, specificity, and positive and negative predictive values were calculated for patients who underwent both CT angiography and three-dimensional rotational angiography. The intraclass correlation coefficient was calculated to assess interobserver concordance concerning aneurysm measurements and therapeutic management. RESULTS: All aneurysms, whether ruptured or unruptured, were detected. Arterial opacification was excellent in 62 cases (85%), and venous contamination was absent or minor in 61 cases (84%). In 95% of cases, CT angiographic findings allowed optimal therapeutic management. The intraclass correlation coefficient ranged between 0.93 and 0.95, indicating excellent interobserver agreement. CONCLUSION: With only 25 mL of iodinated contrast medium focused on the arterial phase, 64-detector CT angiography allowed satisfactory diagnostic and therapeutic management of nontraumatic SAH.
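For reference, a minimal Python sketch of how the diagnostic indices reported above are computed from a 2x2 confusion matrix against the reference standard; the counts used here are illustrative, not the study's data.

```python
# Illustrative 2x2 confusion-matrix counts (hypothetical, not the study's data).
tp, fp, fn, tn = 70, 2, 1, 25   # true/false positives and negatives

sensitivity = tp / (tp + fn)    # proportion of true lesions detected
specificity = tn / (tn + fp)    # proportion of negatives correctly called
ppv = tp / (tp + fp)            # positive predictive value
npv = tn / (tn + fn)            # negative predictive value
print(f"Se={sensitivity:.2f}  Sp={specificity:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
```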
Abstract:
The aim of this Master's thesis was to develop the order-delivery processes of Fazer Chocolates' purchasing so that raw materials and packaging materials can be handled as efficiently as possible. First, the main stages of the order-delivery process and the factors affecting them were identified from the literature. The empirical part began with a review of the current state of Fazer Chocolates' purchasing, carried out by means of a time study directed at the buyers, interviews, and a mapping of the current order-delivery processes. Construction of the target state began with a classification of the purchased materials and suppliers. On this basis, the materials could be divided among three different order-delivery processes. In the automatic order-delivery process, the various stages are automated together with key suppliers. The semi-automatic process is based on a system in which the supplier views Fazer's production plan over the Internet and replenishes materials accordingly. In the simple process, materials of low purchase value are handled as close to the point of use as possible, and the process stages are carried out with the least possible amount of work. The greatest benefits of implementing the target processes were found to be the reduction of process stages and the automation of manual work. In this way, the workload of the various process stages was reduced and inventory levels were lowered, so that the total costs of the order-delivery process could be decreased.
Abstract:
Observers are often required to adjust their actions to objects that change speed. However, no evidence for a direct sense of acceleration has been found so far. Instead, observers seem to detect changes in velocity within a temporal window when confronted with motion in the frontal plane (2D motion). Furthermore, recent studies suggest that motion in depth is detected by tracking changes of position in depth. Therefore, in order to sense acceleration in depth, a kind of second-order computation would have to be carried out by the visual system. In two experiments, we show that observers misperceive the acceleration of head-on approaches, at least within the ranges we used (600-800 ms), resulting in an overestimation of arrival time. Regardless of the viewing condition (monocular only, or monocular plus binocular), the response pattern conformed to a constant-velocity strategy. However, when binocular information was available, the overestimation was greatly reduced.
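To see why ignoring acceleration inflates arrival-time estimates, here is a minimal kinematic sketch in Python; the distance, speed, and acceleration values are illustrative, not the experimental stimuli.

```python
import numpy as np

def actual_arrival_time(d, v, a):
    """Time for an approaching object to cover distance d, starting at
    speed v with constant acceleration a: solve d = v*t + 0.5*a*t**2
    for the positive root."""
    if a == 0:
        return d / v
    return (-v + np.sqrt(v**2 + 2 * a * d)) / a

d, v, a = 4.0, 5.0, 3.0            # metres, m/s, m/s^2 (illustrative values)
t_cv = d / v                        # constant-velocity estimate
t_true = actual_arrival_time(d, v, a)
print(f"constant-velocity estimate: {t_cv:.3f} s")
print(f"actual arrival time:        {t_true:.3f} s")  # earlier, so t_cv overestimates
```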
Abstract:
Further genetic gains in wheat yield are required to match expected increases in demand. This may require the identification of physiological attributes able to produce such improvement, as well as the genetic bases controlling those traits, in order to facilitate their manipulation. In the present paper, a theoretical framework of source and sink limitation to wheat yield is presented and the fine-tuning of crop development as an alternative for increasing yield potential is discussed. Following a top-down approach, most crop physiologists have agreed that the main attribute explaining past genetic gains in yield was harvest index (HI). By virtue of that previous success, no further gains may be expected in HI, and an alternative must be found. Using a bottom-up approach, the present paper first provides evidence of the generalized sink-limited condition of grain growth, establishing that for further increases in yield potential, sink strength during grain filling has to be increased. The focus should be on further increasing grain number per m², through fine-tuning pre-anthesis developmental patterns. The rapid spike growth period (RSGP) is critical for grain number determination, and increasing spike growth during pre-anthesis would result in an increased number of grains. This might be achieved by lengthening the duration of the phase (though without altering flowering time), as there is genotypic variation in the proportion of pre-anthesis time elapsed either before or after the onset of the stem elongation phase. Photoperiod sensitivity during RSGP could then be used as a genetic tool to further increase grain number, since slower development results in smoother floret development, and more floret primordia achieve the fertile floret stage, able to produce a grain. Far less progress has been made on the genetic control of this attribute. None of the well-known major Ppd alleles seems to be consistently responsible for RSGP sensitivity. Alternatives for identifying the genetic factors responsible for this sensitivity (e.g. quantitative trait locus (QTL) identification in mapping populations) are being considered.
Abstract:
Evaluation of image quality (IQ) in computed tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping radiation dose to the patient as low as reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be used to derive 'figures of merit' (FOM) to characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We focus on the IQ methodologies required for dealing with standard reconstruction, but also with iterative reconstruction algorithms. Within this framework, the previously used FOM are presented together with a proposal to update them so that they remain relevant in the face of technological progress. The MO, which objectively assesses IQ for clinically relevant tasks, represents the most promising method in terms of matching radiologist sensitivity performance and is therefore of most relevance in the clinical environment.
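As a concrete illustration of the task-based MO approach, the following Python sketch implements a simple non-prewhitening (NPW) matched-filter observer on synthetic images; the signal shape, noise model, and image size are illustrative assumptions, not any scanner's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2D detection task: a faint Gaussian blob in white noise.
n = 32
y, x = np.mgrid[0:n, 0:n]
signal = 0.5 * np.exp(-((x - n/2)**2 + (y - n/2)**2) / 18.0)  # known signal template

def noisy_image(with_signal):
    img = rng.normal(0.0, 1.0, (n, n))
    return img + signal if with_signal else img

# NPW observer: the template is the signal itself (no noise prewhitening).
w = signal.ravel()
scores_sig = [w @ noisy_image(True).ravel() for _ in range(500)]
scores_bkg = [w @ noisy_image(False).ravel() for _ in range(500)]

# Detectability index d' from the two score distributions.
d_prime = (np.mean(scores_sig) - np.mean(scores_bkg)) / np.sqrt(
    0.5 * (np.var(scores_sig) + np.var(scores_bkg)))
print(f"NPW detectability d' = {d_prime:.2f}")
```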
Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown rapidly since it came into use in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis while avoiding unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies requiring several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation owing to their faster metabolism, and harmful consequences are more likely to occur because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in the evolution of CT, but it has also created difficulties in assessing the quality of the images produced with those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in close collaboration with radiologists throughout. The work began by tackling the characterisation of image quality in musculo-skeletal examinations.
We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, taking advantage of the elements of the human visual system that these models incorporate. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: standard metrics remain important for assessing unit compliance with legal requirements, but model observers are the way forward when optimising imaging protocols.
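To make the anthropomorphic model observer idea concrete, here is a minimal Python sketch of a channelized Hotelling observer (CHO) on synthetic data; the channel profiles, signal, and noise model are illustrative assumptions, not the thesis protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32

# Illustrative channels: a few radially symmetric ring-shaped profiles.
y, x = np.mgrid[0:n, 0:n]
r = np.sqrt((x - n/2)**2 + (y - n/2)**2)
channels = np.stack([np.exp(-(r - c)**2 / (2 * 2.0**2)).ravel()
                     for c in (0.0, 4.0, 8.0, 12.0)], axis=1)   # (n*n, 4)

signal = 0.4 * np.exp(-r**2 / 12.0)   # low-contrast Gaussian lesion

def channel_outputs(with_signal):
    img = rng.normal(0.0, 1.0, (n, n))
    if with_signal:
        img += signal
    return channels.T @ img.ravel()    # reduce image to 4 channel values

v1 = np.stack([channel_outputs(True) for _ in range(400)])   # signal present
v0 = np.stack([channel_outputs(False) for _ in range(400)])  # signal absent

# Hotelling template in channel space: S^-1 times the mean difference.
S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))

t1, t0 = v1 @ w, v0 @ w
d_prime = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
print(f"CHO detectability d' = {d_prime:.2f}")
```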
Abstract:
In this work we present the formulas for the calculation of exact three-center electron sharing indices (3c-ESI) and introduce two new approximate expressions for correlated wave functions. The 3c-ESI uses the third-order density, the diagonal of the third-order reduced density matrix, but the approximations suggested in this work only involve natural orbitals and occupancies. In addition, the first calculations of 3c-ESI using Valdemoro's, Nakatsuji's and Mazziotti's approximations for the third-order reduced density matrix are also presented for comparison. Our results on a test set of molecules, comprising 32 3c-ESI values, prove that the new approximation based on the cubic root of natural occupancies performs best, yielding absolute errors below 0.07 and an average absolute error of 0.015. Furthermore, this approximation seems to be rather insensitive to the amount of electron correlation present in the system. This newly developed methodology provides a computationally inexpensive way to calculate 3c-ESI from correlated wave functions and opens new avenues to approximate high-order reduced density matrices in other contexts, such as the contracted Schrödinger equation and the anti-Hermitian contracted Schrödinger equation.
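As a rough sketch of how an occupancy-based approximation of this kind might be coded, the snippet below assumes a form that weights each natural-orbital triple by the cubic root of the occupancy product, (n_i n_j n_k)^(1/3), contracted with atomic overlap matrices S^A, S^B, S^C in the natural-orbital basis; this functional form is an illustrative assumption, not the paper's exact expression.

```python
import numpy as np

def approx_3c_esi(occ, S_A, S_B, S_C):
    """Schematic 3c-ESI estimate (assumed form, not the paper's formula):
    sum_{ijk} (n_i n_j n_k)**(1/3) * S^A_ij * S^B_jk * S^C_ki.
    Splitting each n**(1/3) as n**(1/6) * n**(1/6) turns the triple
    sum into a single matrix trace."""
    d = np.power(occ, 1.0 / 6.0)                      # n_i**(1/6) weights
    MA, MB, MC = (d[:, None] * S * d[None, :] for S in (S_A, S_B, S_C))
    return float(np.trace(MA @ MB @ MC))
```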
Abstract:
Assessing the impact of tillage practices on soil carbon losses requires describing the temporal variability of soil CO2 emission after tillage. It has been argued that the large amounts of CO2 emitted after tillage may serve as an indicator of longer-term changes in soil carbon stocks. Here we present a two-step function model based on soil temperature and soil moisture, including an exponential decay-in-time component, that is efficient in fitting intermediate-term emission after disk plow followed by a leveling harrow (conventional tillage) and after chisel plow coupled with a roller for clod breaking (reduced tillage). Emission after reduced tillage was described using a non-linear estimator with a determination coefficient (R²) as high as 0.98. The results indicate that when emission after tillage is addressed, it is important to consider an exponential decay in time in order to predict the impact of tillage on short-term emissions.
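A minimal Python sketch of fitting an emission model with an exponential decay-in-time term by non-linear estimation; the functional form F = (a + b·T + c·M)·exp(-k·t) and the data below are illustrative assumptions, not the paper's model or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model: baseline driven by soil temperature T and moisture M,
# damped by an exponential decay in days since tillage t.
def emission(X, a, b, c, k):
    t, T, M = X
    return (a + b * T + c * M) * np.exp(-k * t)

# Illustrative measurements (days since tillage, deg C, % moisture, CO2 flux).
t = np.array([1, 2, 4, 7, 10, 14, 21, 28], float)
T = np.array([24, 25, 23, 26, 24, 22, 23, 25], float)
M = np.array([18, 17, 16, 15, 14, 13, 12, 12], float)
F = np.array([5.1, 4.6, 3.8, 3.1, 2.6, 2.2, 1.7, 1.4])

popt, _ = curve_fit(emission, (t, T, M), F, p0=(1.0, 0.1, 0.1, 0.05))
resid = F - emission((t, T, M), *popt)
r2 = 1 - resid @ resid / ((F - F.mean()) @ (F - F.mean()))
print(f"fitted decay constant k = {popt[3]:.3f} /day, R^2 = {r2:.3f}")
```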
Abstract:
Lactofen is a diphenyl ether herbicide recommended to control broad-leaved weeds in soybean (Glycine max) fields. Its mechanism of action is the inhibition of protoporphyrinogen-IX oxidase (Protox), which acts in chlorophyll biosynthesis. This inhibition results in an accumulation of protoporphyrin-IX, which leads to the production of reactive oxygen species (ROS) that cause oxidative stress. Consequently, spots, wrinkling and leaf burn may occur, resulting in a transitory interruption of crop growth. Nitric oxide (NO), however, acts as an antioxidant through direct ROS scavenging. Thus, the aim of this work was to verify, through phytometric and biochemical evaluations, the protective effect of NO in soybean plants treated with the herbicide lactofen. Soybean plants were pre-treated with different levels of sodium nitroprusside (SNP), an NO-donor substance, and then sprayed with 168 g a.i. ha-1 lactofen. Pre-treatment with SNP was beneficial because NO decreased the injury symptoms caused by lactofen in young leaflets and kept soluble sugar levels low. Nevertheless, NO caused slower plant growth, which indicates that further studies are needed to elucidate the mechanisms by which NO signals the stress caused by lactofen in the soybean crop.
Abstract:
Cholecystokinin (CCK) influences gastrointestinal motility by acting on central and peripheral receptors. The aim of the present study was to determine whether CCK has any effect on isolated duodenum longitudinal muscle activity and to characterize the mechanisms involved. Isolated segments of the rat proximal duodenum were mounted for the recording of isometric contractions of the longitudinal muscle in the presence of atropine and guanethidine. CCK-8S (EC50: 39; 95% CI: 4.1-152 nM) and cerulein (EC50: 58; 95% CI: 18-281 nM) induced a concentration-dependent and tetrodotoxin-sensitive relaxation. Nω-nitro-L-arginine (L-NOARG) reduced CCK-8S- and cerulein-induced relaxation (IC50: 5.2; 95% CI: 2.5-18 µM) in a concentration-dependent manner. The magnitude of the relaxation induced by 300 nM CCK-8S was reduced by 100 µM L-NOARG from 73 ± 5.1 to 19 ± 3.5%, in a manner preventable by L-arginine but not by D-arginine. The CCK-1 receptor antagonists proglumide, lorglumide and devazepide, but not the CCK-2 receptor antagonist L-365,260, antagonized CCK-8S-induced relaxation in a concentration-dependent manner. These findings suggest that CCK-8S and cerulein act on CCK-1 receptors to activate intrinsic nitrergic nerves, causing relaxation of the rat duodenum longitudinal muscle.
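For context, a minimal Python sketch of how an EC50 of the kind reported above is estimated from concentration-response data with a Hill-type fit; the data points and parameters are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hill-type concentration-response curve (illustrative form).
def hill(conc_nM, emax, ec50, n):
    return emax * conc_nM**n / (ec50**n + conc_nM**n)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], float)   # agonist, nM
relax = np.array([4, 10, 24, 45, 62, 71, 74], float)     # % relaxation (hypothetical)

popt, _ = curve_fit(hill, conc, relax, p0=(75, 40, 1))
emax, ec50, n = popt
print(f"Emax = {emax:.0f} %, EC50 = {ec50:.0f} nM, Hill slope = {n:.2f}")
```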
Abstract:
The aim of this Bachelor's thesis is to identify means by which an ETO (engineer-to-order) company can develop its product and production towards mass customization. In addition, the factors affecting the placement of the customer order decoupling point in the transition to mass customization are examined. The work was carried out as a literature review; the information and results presented are based on the literature of the field and on published articles. Based on the work, the best means for an ETO company to pursue mass customization are developing production and products so that modularization and component standardization can be exploited, and reducing the time spent on product design by automating design work or by using standard designs. When an ETO company moves to mass customization, the positioning of the customer order decoupling point must take into account the production and design dimensions coupled with the customer's requirements.
Abstract:
Regulatory light chain (RLC) phosphorylation in fast-twitch muscle is catalyzed by skeletal myosin light chain kinase (skMLCK), a reaction known to increase muscle force, work, and power. The purpose of this study was to explore the contribution of RLC phosphorylation to the power of mouse fast muscle during high-frequency (100 Hz) concentric contractions. To determine peak power, shortening ramps (1.05 to 0.90 Lo) were applied to wild-type (WT) and skMLCK knockout (skMLCK-/-) EDL muscles at a range of shortening velocities between 0.05 and 0.65 of maximal shortening velocity (Vmax), before and after a conditioning stimulus (CS). As a result, mean power was increased to 1.28 ± 0.05 and 1.11 ± 0.05 of pre-CS values, when collapsed across shortening velocity, in WT and skMLCK-/-, respectively (n = 10). In addition, fitting each data set to a second-order polynomial revealed that WT mice had significantly higher peak power output (27.67 ± 1.12 W·kg⁻¹) than skMLCK-/- (25.97 ± 1.02 W·kg⁻¹) (p < 0.05). No significant differences in the optimal velocity for peak power were found between conditions and genotypes (p > 0.05). Analysis with urea-glycerol PAGE determined that RLC phosphate content was elevated in WT muscles from 8 to 63%, while only minimal changes were observed in skMLCK-/- muscles: 3 and 8%, respectively. Therefore, the lack of a stimulation-induced increase in RLC phosphate content resulted in a ~40% smaller enhancement of mean power in skMLCK-/-. The increase in power output in WT mice suggests that RLC phosphorylation is a major potentiating component required for achieving peak muscle performance during brief high-frequency concentric contractions.
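A minimal Python sketch of the second-order polynomial fit used to locate peak power and its optimal velocity; the power-velocity values below are illustrative, not the study's data.

```python
import numpy as np

# Illustrative power-velocity data (fraction of Vmax vs W/kg).
v = np.array([0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65])
p = np.array([9.5, 18.2, 24.1, 27.3, 27.0, 23.8, 17.9])

a, b, c = np.polyfit(v, p, 2)          # second-order polynomial fit
v_opt = -b / (2 * a)                   # vertex gives the optimal velocity
p_peak = np.polyval([a, b, c], v_opt)  # peak power at the vertex
print(f"optimal velocity = {v_opt:.2f} Vmax, peak power = {p_peak:.1f} W/kg")
```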
Abstract:
Research on transition-metal nanoalloy clusters composed of a few atoms is fascinating because of their unusual properties, which arise from the interplay among structure, chemical order and magnetism. Such nanoalloy clusters can be used to construct nanometre-scale devices for technological applications by manipulating their remarkable magnetic, chemical and optical properties. Determining the nanoscopic features exhibited by magnetic alloy clusters calls for a systematic global and local exploration of their potential-energy surface in order to identify all the relevant energetically low-lying magnetic isomers. In this thesis the sampling of the potential-energy surface has been performed by employing state-of-the-art spin-polarized density-functional theory in combination with graph theory and with basin-hopping global optimization techniques. This combination is vital for a quantitative analysis of the quantum mechanical energetics. The first approach, i.e., spin-polarized density-functional theory together with the graph theory method, is applied to study Fe$_m$Rh$_n$ and Co$_m$Pd$_n$ clusters having $N = m+n \leq 8$ atoms. We carried out a thorough and systematic sampling of the potential-energy surface by taking into account all possible initial cluster topologies, all different distributions of the two kinds of atoms within the cluster, the entire concentration range between the pure limits, and different initial magnetic configurations such as ferro- and anti-ferromagnetic coupling. The remarkable magnetic properties shown by FeRh and CoPd nanoclusters are attributed to the extremely reduced coordination number together with the charge transfer from 3$d$ to 4$d$ elements. The second approach, i.e., spin-polarized density-functional theory together with the basin-hopping method, is applied to study the small Fe$_6$, Fe$_3$Rh$_3$ and Rh$_6$ and the larger Fe$_{13}$, Fe$_6$Rh$_7$ and Rh$_{13}$ clusters as illustrative benchmark systems. This method is able to identify the true ground-state structures of Fe$_6$ and Fe$_3$Rh$_3$, which were not obtained by the first approach. However, both approaches predict a similar cluster for the ground state of Rh$_6$. Moreover, the computational time taken by this approach is found to be significantly lower than that of the first approach. The ground-state structure of the Fe$_{13}$ cluster is found to be icosahedral, whereas the Rh$_{13}$ and Fe$_6$Rh$_7$ isomers relax into cage-like and layered-like structures, respectively. All the clusters display a remarkable variety of structural and magnetic behaviours. It is observed that isomers having similar shapes, with only small distortions with respect to each other, can exhibit quite different magnetic moments. This has been interpreted as a probable artifact of the spin-rotational symmetry breaking introduced by the spin-polarized GGA. The possibility of combining spin-polarized density-functional theory with other global optimization techniques, such as the minima-hopping method, could be the next step in this direction. Such a combination is expected to be an ideal sampling approach, with the advantage of efficiently avoiding the search over irrelevant regions of the potential-energy surface.
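To illustrate the basin-hopping step in isolation, here is a minimal Python sketch using scipy on a toy Lennard-Jones cluster in place of the DFT energy surface; the cluster size, potential, and parameters are illustrative, not the thesis setup.

```python
import numpy as np
from scipy.optimize import basinhopping

N = 6  # toy cluster size

def lj_energy(flat_coords):
    """Total Lennard-Jones energy of an N-atom cluster in reduced units;
    a cheap stand-in for the DFT energies used in the thesis."""
    xyz = flat_coords.reshape(N, 3)
    e = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            r = np.linalg.norm(xyz[i] - xyz[j])
            e += 4.0 * (r**-12 - r**-6)
    return e

rng = np.random.default_rng(2)
x0 = rng.uniform(-1.0, 1.0, 3 * N)   # random starting geometry

# Basin hopping: random perturbation + local minimisation + Metropolis accept.
result = basinhopping(lj_energy, x0, niter=200, stepsize=0.5)
print(f"lowest energy found: {result.fun:.4f}")
# For reference, the known LJ-6 global minimum (octahedron) is -12.7121;
# a short run may occasionally stop in a higher local minimum.
```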
Abstract:
Simulations of the global atmosphere for weather and climate forecasting require fast and accurate solutions, so operational models use high-order finite differences on regular structured grids. This precludes the use of local refinement; techniques allowing local refinement are either expensive (e.g. high-order finite element techniques) or have reduced accuracy at changes in resolution (e.g. unstructured finite volume with linear differencing). We present solutions of the shallow-water equations for westerly flow over a mid-latitude mountain from a finite-volume model written using OpenFOAM. A second/third-order accurate differencing scheme is applied on arbitrarily unstructured meshes made up of various shapes and refinement patterns. The results are as accurate as those of equivalent-resolution spectral methods. Using lower-order differencing reduces accuracy at a refinement pattern, which allows errors from refinement of the mountain to accumulate and reduces the global accuracy over a 15-day simulation. We have therefore introduced a scheme which fits a 2D cubic polynomial approximately on a stencil around each cell. Using this scheme, refinement of the mountain improves the accuracy after a 15-day simulation. This is a more severe test of local mesh refinement for global simulations than has previously been presented, but a realistic one if these techniques are to be used operationally. These efficient, high-order schemes may make it possible for local mesh refinement to be used by weather and climate forecast models.
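A minimal Python sketch of the polynomial-fit ingredient: a full 2D cubic is fitted by least squares over a stencil of cell-centre values and evaluated at a face; the stencil geometry and test field are illustrative, not the OpenFOAM implementation.

```python
import numpy as np

def cubic_2d_basis(x, y):
    """Monomials of a full 2D cubic: 1, x, y, x^2, xy, y^2, x^3, x^2 y, x y^2, y^3."""
    return np.array([np.ones_like(x), x, y, x*x, x*y, y*y,
                     x**3, x*x*y, x*y*y, y**3]).T

# Illustrative stencil of 16 cell-centre coordinates around a face at the origin.
rng = np.random.default_rng(4)
xc = rng.uniform(-2, 2, 16)
yc = rng.uniform(-2, 2, 16)
phi = np.sin(xc) * np.cos(yc)            # cell values of a smooth test field

# Least-squares fit of the cubic to the stencil values: the fit is
# approximate, as in the paper, so the polynomial need not interpolate
# every cell value.
A = cubic_2d_basis(xc, yc)
coeffs, *_ = np.linalg.lstsq(A, phi, rcond=None)

# Evaluate the reconstruction at the face centre (0, 0): only the constant term.
phi_face = coeffs[0]
print(f"reconstructed face value {phi_face:.4f} vs exact {np.sin(0.0)*np.cos(0.0):.4f}")
```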