996 results for JD-R Model

Relevance: 30.00%

Abstract:

For the past 10 years, mini-host models, and in particular the greater wax moth Galleria mellonella, have tended to become surrogates for murine models of fungal infection, mainly owing to cost, ethical constraints and ease of use. Methods to better assess fungal pathogenesis in G. mellonella therefore need to be developed. In this study, we implemented the detection of Candida albicans cells expressing the Gaussia princeps luciferase in their cell wall in infected larvae of G. mellonella. We demonstrated that detection and quantification of luminescence in the pulp of infected larvae is a reliable alternative to the fungal burden assay for drug efficacy and C. albicans virulence studies. Because the bioluminescent signal is linear with respect to CFU counts (R² = 0.62) and the method is twice as fast and less labour-intensive than classical fungal burden assays, it could be applied to large-scale studies. We next visualised and followed C. albicans infection in living G. mellonella larvae using a non-toxic, water-soluble coelenterazine formulation and a CCD camera of the type commonly used for chemiluminescence detection. This allowed us to follow, for the first time, the course of C. albicans infection in G. mellonella over 4 days.
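The reported linearity check boils down to fitting a straight line to paired luminescence and CFU measurements and computing the coefficient of determination. A minimal sketch with simulated (hypothetical) data, not the study's measurements:

```python
import numpy as np

# Hypothetical data: log10 CFU counts and matched luminescence readings
# for a set of infected larvae (simulated, for illustration only).
rng = np.random.default_rng(0)
log_cfu = rng.uniform(3, 7, size=30)               # log10 CFU per larva
lum = 1.8 * log_cfu + rng.normal(0, 1.5, size=30)  # noisy linear response

# Least-squares line and coefficient of determination R^2, the statistic
# the authors report (R^2 = 0.62) for bioluminescence vs. CFU linearity.
slope, intercept = np.polyfit(log_cfu, lum, 1)
pred = slope * log_cfu + intercept
ss_res = np.sum((lum - pred) ** 2)
ss_tot = np.sum((lum - lum.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))
```

An R² close to 1 indicates that luminescence tracks fungal burden closely; the paper's 0.62 was judged sufficient given the method's speed advantage.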


A version of cascaded systems analysis was developed specifically to study quantum noise propagation in x-ray detectors. Signal and quantum noise propagation was then modelled in four types of x-ray detector used for digital mammography: four flat-panel (FP) systems, one computed radiography system and one slot-scan silicon-wafer-based photon-counting device. As inputs to the model, the two-dimensional (2D) modulation transfer function (MTF), noise power spectrum (NPS) and detective quantum efficiency (DQE) were measured for six mammography systems using these detectors. A new method is described for reconstructing anisotropic 2D presampling MTF matrices from 1D radial MTFs measured along different angular directions across the detector; an image of a sharp circular disc was used for this purpose. The effective pixel fill factor for the FP systems was determined from the axial 1D presampling MTFs measured with a sharp square edge along the two orthogonal directions of the pixel lattice. Expectation MTFs (EMTFs) were then calculated by averaging the radial MTFs over all possible phases, and the 2D EMTF was formed with the same reconstruction technique used for the 2D presampling MTF. The quantum NPS was established by noise decomposition of homogeneous images acquired as a function of detector air kerma, and was further decomposed into correlated and uncorrelated quantum components by fitting the radially averaged quantum NPS with the radially averaged EMTF². This procedure allowed a detailed analysis of the influence of aliasing, signal and noise decorrelation, x-ray capture efficiency and global secondary gain on the NPS and detector DQE. The influence of noise statistics, pixel fill factor, and additional electronic and fixed-pattern noise on the DQE was also studied. The 2D cascaded model and the decompositions performed on the acquired images also explained the observed quantum NPS and DQE anisotropy.
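The fitting step relies on radially averaging a 2D NPS: every frequency sample is binned by its radial distance from DC and averaged within each annulus. A minimal sketch on a synthetic (Lorentzian-shaped, hypothetical) 2D NPS rather than measured data:

```python
import numpy as np

# Synthetic 2D NPS on a 256x256 grid with an assumed 0.085 mm pixel pitch.
n, pitch = 256, 0.085
f = np.fft.fftshift(np.fft.fftfreq(n, d=pitch))   # cycles/mm, DC centred
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)                             # radial frequency per sample
nps2d = 1.0 / (1.0 + (fr / 3.0) ** 2)             # placeholder NPS shape

# Bin every sample by radius and average within each annulus.
nbins = 64
edges = np.linspace(0.0, fr.max(), nbins + 1)
idx = np.clip(np.digitize(fr.ravel(), edges) - 1, 0, nbins - 1)
counts = np.bincount(idx, minlength=nbins)
sums = np.bincount(idx, weights=nps2d.ravel(), minlength=nbins)
nps1d = sums / np.maximum(counts, 1)              # radially averaged 1D NPS
centers = 0.5 * (edges[:-1] + edges[1:])
print(round(nps1d[0], 3), round(nps1d[-1], 3))
```

The same binning applies to the EMTF², so the 1D curves can be fitted against each other to split the quantum NPS into correlated and uncorrelated parts.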


Computed tomography (CT) is an imaging technique in which interest has grown rapidly since its introduction in the 1970s. Today it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even though CT brings a clear benefit to patient healthcare, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among these, one of the major risks remains the development of cancers associated with exposure to diagnostic X-ray procedures. To ensure that the benefit-risk ratio remains in the patient's favour, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing images of unnecessarily high quality. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when children or young adults are examined, in particular in follow-up studies requiring several CT procedures over the patient's life. Children and young adults are indeed more sensitive to radiation because of their faster metabolism, and harmful consequences are more likely to occur because of their longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in CT, but it has also created difficulties in assessing the quality of the images those algorithms produce. The goal of the present work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic question. 
The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure a pertinent choice of image quality criteria, this work was performed in close collaboration with radiologists. The work began by characterising image quality in musculoskeletal examinations, focusing in particular on image noise and spatial resolution when iterative reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on mathematical model observers; our experimental parameters determined the type of model to use. 
Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, taking advantage of their incorporation of elements of the human visual system. This work confirmed that model observers make it possible to assess image quality with a task-based approach, which in turn establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstruction, model-based methods offer the greatest potential, since images produced in this way can still lead to an accurate diagnosis even when acquired at very low dose. Finally, this work has clarified the role of the medical physicist in CT imaging: standard metrics remain important for assessing unit compliance with legal requirements, but model observers are the way to go when optimising imaging protocols.
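A task-based figure of merit of the kind ideal model observers compute is the prewhitening detectability index, d'² = ∫ |S(f)·MTF(f)|² / NPS(f) df. The sketch below uses synthetic placeholder curves (assumed MTF, NPS and task function, not measurements from the systems studied) to show how d' responds to a dose change:

```python
import numpy as np

# Synthetic placeholder curves (assumptions, for illustration only).
f = np.linspace(0.01, 5.0, 500)        # spatial frequency (cycles/mm)
df = f[1] - f[0]
mtf = np.exp(-f / 2.0)                 # assumed system MTF
nps = 0.05 / (1.0 + f)                 # assumed noise power spectrum
task = np.exp(-(1.5 * f) ** 2)         # Fourier weighting of a low-contrast task

# Prewhitening (ideal linear) observer detectability index.
d2 = np.sum((task * mtf) ** 2 / nps) * df
d_prime = np.sqrt(d2)

# Halving the dose roughly doubles the quantum NPS, lowering d' by sqrt(2).
d_prime_half = np.sqrt(np.sum((task * mtf) ** 2 / (2.0 * nps)) * df)
print(round(d_prime, 2), round(d_prime_half, 2))
```

Anthropomorphic observers modify this recipe with channel responses and internal noise so that the predicted detectability tracks human performance.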


We investigate which processes may underlie heterogeneity in social preferences by examining participants' decisions and associated response times across 12 mini-ultimatum games. Using a finite mixture model, and cross-validating its classification with a response-time analysis, we identified four groups of responders: one group takes little to no account of the proposed split or the foregone allocation and swiftly accepts any positive offer; two groups process primarily the objective properties of the allocations (fairness and kindness) and take more time as more properties must be examined; and a fourth group, which takes more time than the others, appears to consider what they would have proposed had they been in the role of the proposer. We discuss the implications of this joint decision-response-time analysis.
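The classification idea behind a finite mixture model can be illustrated with a tiny two-component Gaussian EM fit on simulated (hypothetical) log response times; the paper's actual model classifies response patterns across 12 games, so this is a simplified stand-in for the mechanism only:

```python
import numpy as np

# Simulated log response times: a fast "accept anything" cluster and a
# slower deliberating cluster (hypothetical data, for illustration).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.3, 60), rng.normal(1.5, 0.3, 40)])

mu = np.array([x.min(), x.max()])      # crude initialisation
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(200):                   # EM iterations
    # E-step: responsibility of each component for each observation
    # (the shared 1/sqrt(2*pi) constant cancels in the normalisation).
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixture weights, means and standard deviations
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

labels = resp.argmax(axis=1)           # hard assignment to a "group"
print(np.round(np.sort(mu), 2))
```

Assigning each participant to the component with the highest posterior responsibility is the mixture-model analogue of the group classification the paper cross-validates against response times.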


Seventy-five percent of breast cancers are estrogen receptor α positive (ER(+)). Research on these tumors is hampered by the lack of adequate in vivo models: cell line xenografts require non-physiological hormone supplements, and patient-derived xenografts (PDXs) are hard to establish. We show that the traditional grafting of ER(+) tumor cells into mammary fat pads induces TGFβ/SLUG signaling and basal differentiation, although the cells require low SLUG levels to grow in vivo. Grafting into the milk ducts suppresses SLUG, and ER(+) tumor cells develop, like their clinical counterparts, in the presence of physiological hormone levels. Intraductal ER(+) PDXs are retransplantable, predictive, and appear genomically stable. The model provides opportunities for translational research and for the study of physiologically relevant hormone action in breast carcinogenesis.


This paper analyses R&D cooperation in the automobile industry in Spain. It first examines to what extent firms cooperate with external actors in the field of technological innovation and, if so, with what type of partner, paying special attention to differentiation according to firm size. Second, it studies how firm size may affect not only the decision to cooperate but also the choice of partner type, while controlling for other determinants that the literature considers main drivers of collaborative R&D activities. We use data provided by the Technological Innovation Panel for firms in the automotive sector over the 2006-2008 period. We estimate a bivariate probit model that takes into account the two types of cooperation most prevalent in the automotive industry, vertical and institutional, explicitly considering the interdependencies that may arise in the simultaneous choice of both.
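A bivariate probit pairs two binary-choice equations through correlated normal errors. As a minimal, self-contained sketch of its probit building block, the snippet below fits a single cooperation equation by maximum likelihood with gradient ascent on simulated data (a hypothetical "firm size" regressor, not the paper's Technological Innovation Panel estimates); the full bivariate model adds a second equation and a correlation ρ between the error terms:

```python
import numpy as np
from math import erf, sqrt, pi

# Standard normal CDF and PDF for the probit link.
Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))
phi = lambda z: np.exp(-0.5 * z * z) / sqrt(2.0 * pi)

# Simulated data: P(cooperate) = Phi(b0 + b1 * size), latent-error N(0, 1).
rng = np.random.default_rng(2)
n = 2000
size = rng.normal(0.0, 1.0, n)                   # standardised firm size
X = np.column_stack([np.ones(n), size])
beta_true = np.array([-0.5, 0.8])
y = (X @ beta_true + rng.normal(0, 1, n) > 0).astype(float)

# Gradient ascent on the (concave) probit log-likelihood.
beta = np.zeros(2)
for _ in range(500):
    z = X @ beta
    p = np.clip(Phi(z), 1e-9, 1 - 1e-9)
    score = X.T @ (phi(z) * (y - p) / (p * (1 - p)))
    beta += 0.5 * score / n
print(np.round(beta, 2))
```

The recovered coefficients approach the simulated truth (-0.5, 0.8); in the bivariate case, the likelihood instead involves the bivariate normal CDF evaluated at both linear indices.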


The immune system is involved in the development of neuropathic pain. In particular, infiltration of T-lymphocytes into the spinal cord following peripheral nerve injury has been described as a contributor to sensory hypersensitivity. We used the spared nerve injury (SNI) model of neuropathic pain in adult male Sprague Dawley rats to assess proliferation and protein/gene expression levels for microglia (Iba1), T-lymphocytes (CD2) and cytotoxic T-lymphocytes (CD8). In the dorsal horn ipsilateral to SNI, Iba1 and BrdU staining revealed microglial reactivity and proliferation, respectively, with different durations. Iba1 expression peaked at D4 at the mRNA level and at D7 at the protein level, and was long-lasting. Proliferation occurred almost exclusively in Iba1-positive cells and peaked at D2. Gene expression analysis by RT-qPCR array suggested that T-lymphocyte-attracting chemokines were upregulated after SNI in the rat spinal cord, but only a few CD2/CD8-positive cells were found. A pronounced infiltration of CD2/CD8-positive T-cells was seen in the spinal cord injury (SCI) model used as a positive control for lymphocyte infiltration. Under these experimental conditions, we show early and long-lasting microglial reactivity in the spinal cord after SNI, but no lymphocyte infiltration.



Classical Monte Carlo simulations were carried out in the NPT ensemble at 25 °C and 1 atm to investigate the ability of the TIP4P water model [Jorgensen, Chandrasekhar, Madura, Impey and Klein; J. Chem. Phys., 79 (1983) 926] to reproduce the newest structural picture of liquid water. The results were compared with recent neutron diffraction data [Soper, Bruni and Ricci; J. Chem. Phys., 106 (1997) 247]. The influence of the computational conditions on the thermodynamic and structural results obtained with this model was also analysed, and the findings were compared with the original ones of Jorgensen et al. [above-cited reference plus Mol. Phys., 56 (1985) 1381]. We note that the thermodynamic results depend on the boundary conditions used, whereas the usual radial distribution functions g_OO(r) and g_OH(r) do not.
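The structural comparison rests on radial distribution functions such as g_OO(r): histogram all O-O pair distances under minimum-image periodic boundaries and normalise by the ideal-gas expectation. A minimal sketch with random placeholder coordinates (so g(r) should come out flat near 1, not liquid-water-like):

```python
import numpy as np

# Random oxygen positions in a periodic cubic box (placeholder coordinates;
# a real calculation would use configurations sampled by the NPT Monte Carlo).
rng = np.random.default_rng(3)
n, box = 216, 18.6                         # molecules, box edge in angstroms
pos = rng.uniform(0.0, box, size=(n, 3))

d = pos[:, None, :] - pos[None, :, :]
d -= box * np.round(d / box)               # minimum-image convention
r = np.sqrt((d ** 2).sum(axis=-1))
r = r[np.triu_indices(n, k=1)]             # unique O-O pairs

# Histogram pair distances and divide by the ideal-gas shell expectation.
nbins, rmax = 60, box / 2
hist, edges = np.histogram(r, bins=nbins, range=(0.0, rmax))
shell = (4.0 / 3.0) * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
ideal = (n * (n - 1) / 2) * shell / box ** 3
g = hist / ideal                           # g_OO(r) on the bin centres
print(round(g[20:].mean(), 2))
```

With sampled liquid configurations, the same code yields the familiar first O-O peak near 2.8 Å that is compared against neutron-diffraction curves.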


Stora Enso has research centres in Finland, Sweden and Germany, which use PIMS as a research project invoicing and monitoring system. A possible reorganisation and a new financing model for the R&D functions have been considered, and the project management system should be developed to support the new operating model. The objective of this thesis is to find a model for R&D project management, to present Stora Enso's project management system, and to discuss whether the current system could be developed to meet the new needs or should be replaced with another system. The theoretical part of the study describes challenges in R&D project management and presents different project characteristics and methods for managing an R&D project portfolio. It also describes how a project management system can support project monitoring and control, and how inter-project learning can be enhanced. The empirical part presents the current project management system of Stora Enso and how it should be developed to better support Stora Enso's R&D functions. In conclusion, there is no compelling reason to replace PIMS with another system, because PIMS can be developed into a hybrid system that supports R&D project management better. It is also suggested that the new financing model should not be implemented before a more comprehensive analysis of its effects is conducted.


The article describes some concrete problems encountered when writing a two-level model of Mari morphology. Mari is an agglutinative Finno-Ugric language spoken in Russia by about 600 000 people. The work was begun in the 1980s on the basis of K. Koskenniemi's Two-Level Morphology (1983), but in the latest stage R. Beesley's and L. Karttunen's Finite State Morphology (2003) was used. Many of the problems described in the article concern the inexplicitness of the rules in Mari grammars and the lack of information about the exact distribution of some suffixes, e.g. enclitics. Mari grammars usually give complete paradigms for a few unproblematic verb stems, whereas the difficult or unclear forms of certain verbs are discussed only superficially. Another example of phenomena poorly described in grammars is the way suffixes with an initial sibilant combine with stems ending in a sibilant. The help of informants and searches in electronic corpora were used to overcome such difficulties in developing the two-level model of Mari. Variation in the order of plural markers, case suffixes and possessive suffixes is a typical feature of Mari. The morphotactic rules constructed for Mari declensional forms tend to be recursive, and their productivity must be limited by some technical device, such as filters; in the present model, certain plural markers were treated like nouns. The positional and functional versatility of the possessive suffixes can be regarded as the most challenging phenomenon in attempts to formalise Mari morphology. The Cyrillic orthography used in the model also caused problems. For instance, a Cyrillic letter may represent a sequence of two sounds, the first being part of the word stem while the other belongs to a suffix, and in some cases letters for voiced consonants are also generalised to represent voiceless consonants. 
Such orthographical conventions distance a morphological model based on orthography from the actual (morpho)phonological processes in the language.
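The two-level idea maps a lexical form (stem plus suffix, separated by a morpheme boundary) to a surface form via context-sensitive rules. Since the article's Mari sibilant rules are not reproduced here, the toy below stands in with an English plural alternation to illustrate the rule mechanism only, not actual Mari morphology:

```python
# Toy two-level-style generator: a lexical form "stem+suffix" is mapped to
# a surface form by a context rule, here English e-epenthesis before the
# plural suffix after a sibilant-final stem (bus+s -> buses).
def ends_in_sibilant(stem: str) -> bool:
    return stem[-1] in "szx" or stem.endswith(("sh", "ch"))

def surface(lexical: str) -> str:
    stem, _, suffix = lexical.partition("+")
    if suffix == "s" and ends_in_sibilant(stem):
        return stem + "es"          # epenthesis between the two sibilants
    return stem + suffix            # default: plain concatenation

print(surface("bus+s"), surface("cat+s"), surface("dish+s"))
```

A real two-level implementation compiles such rules into finite-state transducers that apply in parallel over the whole lexical/surface correspondence, which is what makes recursive morphotactics and filters practical.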


The topological solitons of two classical field theories, the Faddeev-Skyrme model and the Ginzburg-Landau model, are studied numerically and analytically in this work. The aim is to gain information on the existence and properties of these topological solitons, their structure, and their behaviour under relaxation. First, the conditions and mechanisms that make topological solitons possible are explored from the field-theoretical point of view, which leads one to consider continuous deformations of the solutions of the equations of motion. The results of algebraic topology necessary for the systematic treatment of such deformations are reviewed, and methods for determining the homotopy classes of topological solitons are presented. The Faddeev-Skyrme and Ginzburg-Landau models are presented, some earlier results are reviewed, and the numerical methods used in this work are described. The topological solitons of the Faddeev-Skyrme model, Hopfions, are found to follow the same relaxation mechanisms in three different domains with three different topological classifications; for two of the domains, the necessary but unusual topological classification is presented. Finite-size topological solitons are not found in the Ginzburg-Landau model, and a scaling argument is used to suggest that there are indeed none unless a certain modification to the model, due to R. S. Ward, is made. In that case, the Hopfions of the Faddeev-Skyrme model are seen to be present for some parameter values. A boundary in parameter space separating the region where Hopfions exist from the region where they do not is found, and the behaviour of the Hopfion energy on this boundary is studied.
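The numerical relaxation of a soliton within a fixed topological sector can be illustrated in a far simpler setting than the 3D Hopfion computations described above: a 1D φ⁴ kink relaxed by gradient flow, φ_t = φ_xx - φ(φ² - 1), with boundary values -1 and +1 pinning the field to the nontrivial sector. This toy is an illustration of the relaxation idea, not the thesis's method:

```python
import numpy as np

# Grid and a rough initial configuration in the kink sector.
L_half, nx = 10.0, 201
x = np.linspace(-L_half, L_half, nx)
dx = x[1] - x[0]
phi = np.sign(x)                   # step function; boundaries fixed at -1, +1
dt = 0.2 * dx * dx                 # stable explicit time step for diffusion

def energy(p):
    # E = integral of (1/2) phi'^2 + (1/4)(phi^2 - 1)^2
    dp = np.gradient(p, dx)
    return np.sum(0.5 * dp ** 2 + 0.25 * (p ** 2 - 1) ** 2) * dx

# Gradient flow: each step moves phi downhill in energy.
for _ in range(20000):
    lap = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx ** 2
    phi[1:-1] += dt * (lap - phi[1:-1] * (phi[1:-1] ** 2 - 1))

# The relaxed profile approaches tanh(x/sqrt(2)), energy 2*sqrt(2)/3 ~ 0.943.
print(round(energy(phi), 3))
```

The topology (the -1/+1 boundary values) prevents the flow from reaching the trivial vacuum, so the energy settles at the kink mass; the Hopfion relaxations are the 3D analogue, with the Hopf charge playing the role of the boundary conditions.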


In this paper we describe the synthesis of 2′,4′-dimethoxy-8-(propyl-2-one)-deoxybenzoin, a new compound employed as a model for comparison with the spectral data of 6′,4-dihydroxy-3′-(3,3-dimethylallyl)-2″,2″-dimethylchromene(5″,6″:5′,4′)-2′-methoxy-8-(propyl-2-one)deoxybenzoin, recently isolated from Deguelia hatschbachii A.M.G. Azevedo. Both compounds bear a "propyl-2-one" group attached to C-8 of the deoxybenzoin skeleton, for which there is no precedent in the literature. The Friedel-Crafts reaction of 1,3-dimethoxybenzene with phenylacetyl chloride furnished 2′,4′-dimethoxydeoxybenzoin, which, after reaction with allyl bromide, gave 2′,4′-dimethoxy-8-(allyl)-deoxybenzoin. Wacker oxidation then gave the desired model compound in 15% overall yield. The corresponding spectral data reinforced the structure previously determined for the natural product.