979 results for model base


Relevance: 30.00%

Abstract:

Mouse mammary tumor virus is known to infect newborn mice via the mother's milk. A proposed key step for viral spread to the mammary gland is the infection of lymphocytes. We show here that, although retroviral proteins are found in all epithelial cells of the gut in suckling mice, viral DNA is detectable exclusively in the Peyer's patches. As early as 5 days after birth, the infection leads to a superantigen response in the Peyer's patches, but not in other lymphoid organs draining the intestine. Viral DNA can be detected before the superantigen response and first becomes evident in the Peyer's patches, followed by the mesenteric lymph nodes and finally all lymphoid organs.

Relevance: 30.00%

Abstract:

Clinicians increasingly agree that it is important to assess patients' spirituality and to incorporate this dimension into the care of elderly persons, in order to enhance patient-centered care. However, models of integrative care that take into account the spiritual dimension of the patient are needed in order to promote a holistic approach to care. This research defines a concept of spirituality in the hospitalized elderly person and develops a model on which to base spirituality assessment in the hospital setting. The article presents in detail the different stages in the conceptualization of The Spiritual Needs Model.

Relevance: 30.00%

Abstract:

Intensity-modulated radiotherapy (IMRT) is a treatment technique that uses beams with modulated fluence. IMRT is now widespread in industrialized countries, owing to the improved dose homogeneity it achieves within the target volume and its ability to lower doses to organs at risk in complex clinical cases. One way to carry out beam modulation is to sum smaller beams (segments) with the same incidence; this technique is called step-and-shoot IMRT. In a clinical context, treatment plans must be verified before the first irradiation, and plan verification is still an open issue for this technique. An independent monitor unit calculation (representative of the weight of each segment) cannot be performed for step-and-shoot IMRT, because segment weights are not known a priori but are calculated by inverse planning. Besides, treatment plan verification by comparison with measured data is time-consuming and is performed in a simplified geometry, usually a cubic water phantom with all machine angles set to zero. In this work, an independent method for monitor unit calculation for step-and-shoot IMRT is described. The method is based on the Monte Carlo code EGSnrc/BEAMnrc; the Monte Carlo model of the linear accelerator head was validated by comparing simulated and measured dose distributions over a large range of situations. The segments of an IMRT treatment plan are simulated individually by Monte Carlo in the exact geometry of the treatment, and the resulting dose distributions are converted into absorbed dose to water per monitor unit. The total treatment dose in each volume element (voxel) of the patient can then be expressed as a linear matrix equation relating the monitor units to the dose per monitor unit of each segment. This equation is solved with a Non-Negative Least Squares fit (NNLS) algorithm. Because computational limits prevent using every voxel inside the patient volume, several voxel-selection strategies were tested; the best choice is to use the voxels inside the Planning Target Volume (PTV). The method was tested on eight clinical cases representative of usual radiotherapy treatments. The monitor units obtained lead to global dose distributions clinically equivalent to those of the treatment planning system. This independent monitor unit calculation method for step-and-shoot IMRT is therefore validated for clinical use. By analogy, a similar method could be applied to other treatment modalities, such as tomotherapy or volumetric modulated arc therapy.
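The matrix step described above can be sketched with SciPy's non-negative least-squares solver. This is a minimal illustration under stated assumptions: the dose-per-monitor-unit matrix below is synthetic random data standing in for the Monte Carlo per-segment dose distributions, not output of the actual method.

```python
# Total dose in the selected voxels is D = A @ mu, where A[i, j] is the dose
# per monitor unit of segment j in voxel i (here restricted, say, to the PTV)
# and mu holds the monitor units, solved under the constraint mu >= 0.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_voxels, n_segments = 200, 12
A = rng.uniform(0.0, 1.0, size=(n_voxels, n_segments))  # dose per MU (synthetic)
mu_true = rng.uniform(5.0, 50.0, size=n_segments)       # "planned" monitor units
D = A @ mu_true                                         # total dose per voxel

mu_fit, residual = nnls(A, D)   # non-negative least-squares fit of the MUs
```

With noiseless synthetic data the solver recovers the monitor units essentially exactly; with real measured or simulated doses, the residual quantifies how well the linear model explains the plan.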

Relevance: 30.00%

Abstract:

Tourism is a sector of sectors with which the tourist interacts directly, as one more citizen, and it is this complex value chain that ends up shaping the tourist experience. The 2020 vision and the 2016 strategic objectives form the basis on which the 2020 national guidelines and the 2013-2016 action plan have been proposed.

Relevance: 30.00%

Abstract:

This project aims to create a didactic working methodology so that students enrolled in agricultural technical engineering and agronomic engineering degrees have the teaching material needed to carry out critical analyses and solve practical cases. Its main objective is to structure and analyse data on the technical and economic management of pig farms, applied to case studies, in order to create a didactic learning model. The cases will be produced in a virtual (web) format so that students have a self-learning space and all the necessary and complementary material to carry out a technical and economic management analysis of a pig farm.

Relevance: 30.00%

Abstract:

In this thesis, a model for managing product data in a product transfer project was created for ABB Machines and then applied to an ongoing product transfer project during its planning phase. Detailed information about the demands and challenges of product transfer projects was acquired by analyzing previous projects in the participating organizations. This analysis and the ABB Gate Model were used as the basis for the model, which shows the main tasks in each phase of the project, their sub-tasks, and their relatedness at a general level; it also emphasizes the need for a detailed analysis of the situation during the project planning phase. The model was applied to the ongoing project in two main areas: manufacturing instructions and production item data. The results showed that the greatest challenge for the product transfer project in these areas is the current state of the product data. Based on the findings, process and resource proposals were given for both the ongoing product transfer project and BU Machines. For manufacturing instructions, detailed process instructions must be created for each department in the receiving organization's own language, so that the manufacturing instructions can be used as training material during training in the sending organization. For production item data, the English version of the bill of materials needs to be fully in English; in addition, it must be ensured that the bill of materials is updated and the changes implemented before training in the sending organization begins.

Relevance: 30.00%

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown steadily since its introduction in the early 1970s. Today it has become an extensively used modality thanks to its ability to produce high-quality diagnostic images. However, even though CT brings a clear benefit to patient care, the dramatic increase in the number of CT examinations performed has raised concerns about the potentially harmful effects of ionising radiation on the population. Among these effects, one of the major remaining risks is the induction of cancers associated with exposure to diagnostic X-ray procedures. To keep the benefit-risk ratio in favour of the patient, it is necessary to ensure that the delivered dose allows the correct diagnosis while avoiding images of unnecessarily high quality. This optimisation is already an important concern for adult patients, but it must become an even greater priority when children or young adults are examined, in particular in follow-up studies requiring several CT procedures over the patient's life. Children and young adults are more sensitive to radiation because of their faster metabolism, and harmful consequences are more likely to occur because of their longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in CT, but it has also created difficulties in assessing the quality of the images these algorithms produce. The goal of this work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic question. The major difficulty lies in having a clinically relevant way to estimate image quality; to ensure a pertinent choice of image-quality criteria, the work was carried out in close collaboration with radiologists. The work began by characterising image quality in musculoskeletal examinations, focusing on the behaviour of image noise and spatial resolution under iterative reconstruction. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. The work also addressed the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics were needed to assess image quality, we relied on mathematical model observers, with the experimental parameters determining the type of model to use. Ideal model observers were applied to characterise image quality when purely physical results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, taking advantage of the human-visual-system elements these models incorporate. This work confirmed that model observers make it possible to assess image quality with a task-based approach, which in turn establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstruction, model-based algorithms offer the greatest potential, since the images they produce can still lead to an accurate diagnosis even when acquired at very low dose. Finally, this work clarified the role of the medical physicist in CT imaging: standard metrics remain important for assessing the compliance of a unit with legal requirements, but model observers are the way forward for optimising imaging protocols.
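The task-based idea behind an ideal linear model observer can be sketched numerically. This is a generic prewhitening-observer example on synthetic data, not the specific observers or images used in the work: the signal profile, noise model, and sample sizes are all assumptions for illustration.

```python
# Signal-known-exactly detection task: the detectability index of an ideal
# linear (prewhitening) observer is d'^2 = s^T K^-1 s, where s is the known
# signal and K the noise covariance estimated from signal-absent images.
import numpy as np

rng = np.random.default_rng(1)
n_pix = 64                                   # images flattened to vectors

signal = np.zeros(n_pix)
signal[28:36] = 2.0                          # hypothetical low-contrast profile

# Noise-only realisations standing in for signal-absent reconstructions.
absent = rng.normal(0.0, 1.0, size=(500, n_pix))
K = np.cov(absent, rowvar=False)             # estimated noise covariance

w = np.linalg.solve(K, signal)               # prewhitening template K^-1 s
d_prime = float(np.sqrt(signal @ w))         # detectability index d'
```

Comparing d' for images reconstructed at different dose levels is what turns "image quality" into a number tied to the diagnostic task, rather than a Fourier-space property of the scanner.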

Relevance: 30.00%

Abstract:

As increasingly large molecular data sets are collected for phylogenomics, the conflicting phylogenetic signal among gene trees poses challenges to resolving some difficult nodes of the Tree of Life. Among these nodes, the phylogenetic position of the honey bees (Apini) within the corbiculate bee group remains controversial, despite its considerable importance for understanding the emergence and maintenance of eusociality. Here, we show that this controversy stems in part from pervasive phylogenetic conflicts among GC-rich gene trees. GC-rich genes typically have a high nucleotide heterogeneity among species, which can induce topological conflicts among gene trees. When retaining only the most GC-homogeneous genes or using a nonhomogeneous model of sequence evolution, our analyses reveal a monophyletic group of the three lineages with a eusocial lifestyle (honey bees, bumble bees, and stingless bees). These phylogenetic relationships strongly suggest a single origin of eusociality in the corbiculate bees, with no reversal to solitary living in this group. To accurately reconstruct other important evolutionary steps across the Tree of Life, we suggest removing GC-rich and GC-heterogeneous genes from large phylogenomic data sets. Interpreted as a consequence of genome-wide variations in recombination rates, this GC effect can affect all taxa featuring GC-biased gene conversion, which is common in eukaryotes.
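The filtering step the authors suggest, keeping only the most GC-homogeneous genes, can be sketched as follows. The sequences and cutoff are toy stand-ins, not the corbiculate-bee data set:

```python
# For each gene alignment, compute per-species GC content and retain only
# genes whose GC content varies little across species (low standard deviation).
def gc_content(seq):
    seq = seq.upper()
    acgt = sum(seq.count(b) for b in "ACGT")
    return (seq.count("G") + seq.count("C")) / acgt if acgt else 0.0

def gc_heterogeneity(alignment):
    """Standard deviation of GC content across the species in one gene."""
    values = [gc_content(s) for s in alignment.values()]
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

genes = {
    "gene1": {"sp1": "ATGCATGCAT", "sp2": "ATGCATGGAT", "sp3": "ATGCATGCTT"},
    "gene2": {"sp1": "GGGCGCGCGC", "sp2": "ATATATATAT", "sp3": "GCGCATATGC"},
}

# Keep the most GC-homogeneous genes (toy cutoff of 0.05).
kept = [name for name, aln in genes.items() if gc_heterogeneity(aln) < 0.05]
```

In a real pipeline the cutoff would be chosen from the distribution of heterogeneity values across all genes rather than fixed a priori.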

Relevance: 30.00%

Abstract:

The acid-base properties of a mixed-species culture of the microalga Spirulina were studied by potentiometric titration in 0.01 and 0.10 mol L-1 NaNO3 media at 25.0 ± 0.1 °C, using modified Gran functions or nonlinear regression techniques for data fitting. The discrete site distribution model was used, allowing the characterization of five classes of ionizable sites in both ionic media. This suggests that the chemical heterogeneity of the ionizable sites on the cell surface plays a greater role in the acid-base properties of the suspension than electrostatic effects due to charge-charge interactions. The total concentrations of ionizable sites were 1.75 ± 0.10 and 1.86 ± 0.20 mmol g-1 in 0.01 and 0.10 mol L-1 NaNO3, respectively. A major contribution of carboxylic groups was observed, with an average of 34 and 22% of the ionizable sites titrating with conditional pKa values of 4.0 and 5.4, respectively. The remaining 44% of the ionizable sites were divided into three classes with average conditional pKa values of 6.9, 8.7 and 10.12, which may be assigned respectively to imidazole, amine, and phenolic functionalities.
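The discrete site distribution model amounts to fitting the titration curve as a sum of independent site classes, each with its own concentration and conditional pKa. A minimal two-class sketch on synthetic data (the concentrations and pKa values below are illustrative, loosely echoing the reported ones, not the actual Spirulina data):

```python
# Protonated-site concentration as a sum of discrete classes, fitted by
# nonlinear regression: q(pH) = sum_i C_i / (1 + 10^(pH - pKa_i)).
import numpy as np
from scipy.optimize import curve_fit

def two_site_model(pH, c1, pk1, c2, pk2):
    """Protonated-site concentration (e.g. mmol/g) for two site classes."""
    return c1 / (1 + 10 ** (pH - pk1)) + c2 / (1 + 10 ** (pH - pk2))

pH = np.linspace(2.5, 11.0, 40)
true = (0.60, 4.0, 0.40, 8.7)          # hypothetical C_i (mmol/g) and pKa_i
rng = np.random.default_rng(2)
q_obs = two_site_model(pH, *true) + rng.normal(0.0, 0.002, pH.size)

popt, _ = curve_fit(two_site_model, pH, q_obs, p0=(0.5, 4.5, 0.5, 8.0))
```

The five-class model of the paper extends this by adding terms; resolving closely spaced pKa values then demands good data quality and sensible initial guesses.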

Relevance: 30.00%

Abstract:

Multicomponent (Al2O3, CaO, SiO2, MgO) calcium aluminate-based glasses containing Nd3+ were prepared in order to evaluate their potential as laser host materials. The refractive index, UV-visible-near-IR absorption spectrum, IR and visible luminescence spectra, and fluorescence decay time were measured. The Judd-Ofelt model was used to obtain the experimental intensity parameters (Ω2, Ω4 and Ω6), emission cross-section, radiative lifetimes, emission branching ratios and quantum efficiency.
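Once a Judd-Ofelt analysis yields the spontaneous-emission rates A_i for the transitions from an emitting level, the radiative lifetime and branching ratios follow directly, as a short sketch shows. The rates below are illustrative placeholders, not the measured Nd3+ values from this work:

```python
# Radiative lifetime tau = 1 / sum(A_i); branching ratio beta_i = A_i / sum(A_i).
rates_s = {                              # A_i in s^-1 (hypothetical values)
    "4F3/2 -> 4I9/2": 900.0,
    "4F3/2 -> 4I11/2": 1500.0,
    "4F3/2 -> 4I13/2": 300.0,
}

total = sum(rates_s.values())            # total radiative decay rate (s^-1)
tau_rad_us = 1e6 / total                 # radiative lifetime in microseconds
branching = {t: a / total for t, a in rates_s.items()}
```

Comparing the measured fluorescence decay time with the radiative lifetime computed this way gives the quantum efficiency quoted in such studies.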

Relevance: 30.00%

Abstract:

This thesis concentrates on developing a practical local-approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach have been studied in detail: the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration, which greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed and conventional algorithms was assessed systematically in order to identify the best algorithm for this study, and the accurate and efficient performance of the finite element implementation of the proposed algorithms was demonstrated on various numerical examples. It was found that the true mid-point algorithm (a = 0.5) is the most accurate when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, the consistency of current local failure criteria for ductile fracture was assessed: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit-load failure criterion.
Significant differences in the ductility predicted by the three criteria were found. By assuming that voids grow spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion was modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new criterion is that a mechanism for void coalescence is incorporated into the constitutive model, so that material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two contributions was built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, the fracture behaviour of the welded T-joints was well predicted by the present methodology. This application shows how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained step by step, and how useful and capable the local-approach methodology is in analysing fracture behaviour and crack development, as well as in structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for possible engineering application of the methodology is suggested and discussed.
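The family of generalized mid-point algorithms can be illustrated on a scalar analogue of the constitutive update; this is emphatically not the full Gurson-Tvergaard return mapping, just the time-integration idea, with a toy evolution law chosen so the exact solution is known:

```python
# Generalized mid-point update for x' = f(x):
#   x_{n+1} = x_n + dt * f((1 - a) * x_n + a * x_{n+1}),
# where a = 0 is Euler forward, a = 1 Euler backward, a = 0.5 the true
# mid-point rule. The implicit equation is solved by Newton iteration with
# the exact derivative, mirroring the role of the consistent tangent moduli.
def f(x):
    return -x * x            # toy relaxation law with exact solution x0/(1+x0*t)

def df(x):
    return -2.0 * x

def midpoint_step(x_n, dt, a, tol=1e-12, max_iter=50):
    x = x_n                                      # initial guess
    for _ in range(max_iter):
        x_mid = (1.0 - a) * x_n + a * x
        r = x - x_n - dt * f(x_mid)              # residual of the implicit update
        if abs(r) < tol:
            break
        x -= r / (1.0 - dt * a * df(x_mid))      # Newton step with exact tangent
    return x

def integrate(a, dt=0.1, steps=10, x0=1.0):
    x = x0
    for _ in range(steps):
        x = midpoint_step(x, dt, a)
    return x

exact = 1.0 / (1.0 + 1.0)                        # exact value at t = 1
errors = {a: abs(integrate(a) - exact) for a in (0.0, 0.5, 1.0)}
```

The a = 0.5 rule is second-order accurate, so its error is markedly smaller than either Euler variant, consistent with the thesis's finding that the true mid-point algorithm is the most accurate of the family.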

Relevance: 30.00%

Abstract:

The purpose of this work was to determine the safe shelf life of single-base propellants. The kinetic parameters for the consumption of the stabilizer diphenylamine (DPA) added to the propellant were determined as a function of storage and ageing time. High-performance liquid chromatography (HPLC) with spectrophotometric detection was used to determine the DPA percentage before and after artificial ageing at 60, 70 and 80 °C. The experimental data were very well fitted by a pseudo-first-order kinetic model, and the respective kinetic constants are 8.0 × 10-3 day-1 (60 °C), 1.9 × 10-2 day-1 (70 °C) and 1.2 × 10-1 day-1 (80 °C). The activation energy was calculated as 130 kJ mol-1, and the half-life for depletion of the DPA at a hypothetical storage temperature of 40 °C was estimated at 6 years.
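The Arrhenius extrapolation described above can be reproduced directly from the three quoted rate constants; this sketch assumes nothing beyond the reported data and standard kinetics:

```python
# Fit ln k = ln A - Ea / (R T) to the three rate constants, extrapolate the
# rate to 40 C, and take the half-life as t_1/2 = ln 2 / k.
import math

R = 8.314                                                 # J mol^-1 K^-1
data = {333.15: 8.0e-3, 343.15: 1.9e-2, 353.15: 1.2e-1}   # T (K): k (day^-1)

inv_T = [1.0 / T for T in data]
ln_k = [math.log(k) for k in data.values()]

# Least-squares line ln k = intercept + slope * (1/T), with slope = -Ea / R.
n = len(inv_T)
x_mean = sum(inv_T) / n
y_mean = sum(ln_k) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(inv_T, ln_k)) \
        / sum((x - x_mean) ** 2 for x in inv_T)
intercept = y_mean - slope * x_mean

Ea_kJ = -slope * R / 1000.0                    # ~130 kJ/mol, as reported
k_40 = math.exp(intercept + slope / 313.15)    # rate constant at 40 C (day^-1)
half_life_years = math.log(2) / k_40 / 365.25  # ~6 years, as reported
```

Running this recovers the paper's headline numbers to within rounding, which is a useful sanity check on the quoted rate constants.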

Relevance: 30.00%

Abstract:

Mental models play an important role in the evolution of an individual's so-called knowledge. Using such representations, students can explain, foresee, and attribute causality to observed phenomena. In the case of Chemistry, the ability to work mentally with models assumes great importance, due to the microscopic component that is characteristic of this science. With the objective of exploring students' ability to work with models, 27 students of the Chemistry Institute of UNESP were asked to describe the mechanisms of dissolution, in water, of NaCl, HCl and HCN, as well as the partial dissolution of I2. Due to difficulties of access to complex descriptors of these processes, each student was asked to explain the phenomena using words and drawings. The results of these investigations were analyzed, and enabled construction of a framework representing the Chemistry students' theoretical training, especially with respect to their most important transferred skill: an ability to model the physical world.

Relevance: 30.00%

Abstract:

The purpose of this study was to develop co-operation between the business units of a company operating in the graphic industry. The development was carried out by searching for synergy opportunities between these business units, with the final aim of forming a business model based on their co-operation. The literature review examines synergies, especially the process of searching for and implementing them, as well as the concept of the business model and its components. The research used a qualitative method; the main data-collection method for the empirical part was thematic interviews, and the data were analyzed using thematisation and content analysis. The results include seven identified synergy opportunities and a business model based on the co-operation of the business units. The synergy opportunities are evaluated and an implementation order is suggested; the presented synergies form the basis of the proposed business model.

Relevance: 30.00%

Abstract:

In this paper a computer program to model and support product design is presented. The product is represented through a hierarchical structure that allows the user to navigate across the product's components, aiming to facilitate each step of the detail design process. A graphical interface was also developed that visually shows the user the contents of the product structure. Features are used as building blocks for the parts that compose the product, and an object-oriented methodology was used to implement the product structure. Finally, an expert system was implemented whose knowledge-base rules help the user design a product that meets design and manufacturing requirements.
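The combination described, a navigable hierarchical product structure, features as building blocks, and knowledge-base rules, can be sketched in a few object-oriented classes. The class names, the example product, and the single rule below are all hypothetical illustrations, not taken from the actual program:

```python
# Components nest inside each other, parts carry features, and a toy rule
# (standing in for an expert-system knowledge base) flags parts violating a
# manufacturing requirement.
class Feature:
    def __init__(self, name, **params):
        self.name = name
        self.params = params

class Component:
    def __init__(self, name, features=(), children=()):
        self.name = name
        self.features = list(features)
        self.children = list(children)

    def walk(self):
        """Navigate the product tree depth-first, as the interface would."""
        yield self
        for child in self.children:
            yield from child.walk()

def check_min_hole_diameter(product, min_d=2.0):
    """Toy knowledge-base rule: holes below min_d are hard to machine."""
    return [c.name for c in product.walk()
            for f in c.features
            if f.name == "hole" and f.params.get("diameter", min_d) < min_d]

shaft = Component("shaft", features=[Feature("hole", diameter=1.5)])
housing = Component("housing", features=[Feature("hole", diameter=6.0)])
product = Component("gearbox", children=[shaft, housing])

violations = check_min_hole_diameter(product)   # flags "shaft" only
```

Keeping the rules as plain functions over the traversable structure is what lets the expert system grow independently of the geometry model, which appears to be the design intent the abstract describes.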