893 results for Development models
Abstract:
Radiolabelled somatostatin-based antagonists show a higher uptake in tumour-bearing mouse models than agonists of similar or even distinctly higher receptor affinity. Very similar results were obtained with another family of G protein-coupled receptor ligands, the bombesin family. We describe a new conjugate, RM2, with the chelator DOTA coupled to D-Phe-Gln-Trp-Ala-Val-Gly-His-Sta-Leu-NH(2) via the cationic spacer 4-amino-1-carboxymethyl-piperidine for labelling with radiometals such as (111)In and (68)Ga.
Abstract:
Transgenic mouse models of human cancers represent one of the most promising approaches to elucidate clinically relevant mechanisms of action and provide insights into the treatment efficacy of new antitumor drugs. The use of Trp53 transgenic mice (Trp53 knockout [Trp53(-/-)] mice) for such studies has so far been restricted by limitations in detecting developing tumors and by the lack of noninvasive tools for monitoring tumor growth, progression, and treatment response.
Abstract:
Lymphocytic choriomeningitis virus (LCMV) exhibits natural tropism for dendritic cells and represents the prototypic infection that elicits protective CD8(+) T cell (cytotoxic T lymphocyte, CTL) immunity. Here we have harnessed the immunobiology of this arenavirus for vaccine delivery. By using producer cells constitutively synthesizing the viral glycoprotein (GP), it was possible to replace the gene encoding LCMV GP with vaccine antigens, creating replication-defective vaccine vectors (rLCMV). These rLCMV vaccines elicited CTL responses equivalent to or greater than those elicited by recombinant adenovirus 5 or recombinant vaccinia virus in magnitude and cytokine profile, and they exhibited more effective protection in several models. In contrast to recombinant adenovirus 5, rLCMV failed to elicit vector-specific antibody immunity, which facilitated re-administration of the same vector for booster vaccination. In addition, rLCMV elicited T helper type 1 CD4(+) T cell responses and protective neutralizing antibodies to vaccine antigens. These features, together with low seroprevalence in humans, suggest that rLCMV may show utility as a vaccine platform against infectious diseases and cancer.
Abstract:
This paper summarises the discussions that took place at the Workshop on Methodology in Erosion Research, held in Zürich in 2010, and aims, where possible, to offer guidance for the development and application of both in vitro and in situ models for erosion research. The prospects for clinical trials are also discussed. All models in erosion research require a number of choices regarding experimental conditions, study design and measurement techniques, and these general aspects are discussed first. Among in vitro models, simple (single- or multiple-exposure) models can be used for screening products for their erosive potential, while more elaborate pH cycling models can be used to simulate erosion in vivo. However, in vitro models provide limited information on intra-oral erosion. In situ models allow the effect of an erosive challenge to be evaluated under intra-oral conditions and are currently the method of choice for short-term testing of low-erosive products or preventive therapeutic products. In the future, clinical trials will allow longer-term testing. Possible methodologies for such trials are discussed.
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, driven primarily by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and their associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
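The transport-delay and sensor-lag processing described above can be illustrated with a minimal sketch. The snippet below aligns a measured emission signal to its commanded signal by a cross-correlation lag and then inverts a first-order sensor model (tau * dy/dt + y = x); the function names, the use of cross-correlation, and the fixed time constant are illustrative assumptions, not the authors' actual procedure.

```python
# Minimal sketch (assumed approach, not the paper's exact method): align a
# delayed, slow sensor signal with the engine command that produced it.
import numpy as np

def remove_transport_delay(command: np.ndarray, measured: np.ndarray) -> np.ndarray:
    """Shift `measured` earlier by the lag that maximizes its correlation
    with `command` (a stand-in for a physically derived transport delay)."""
    lags = np.arange(1, len(measured) // 2)
    corr = [np.corrcoef(command[:-k], measured[k:])[0, 1] for k in lags]
    delay = int(lags[int(np.argmax(corr))])
    return measured[delay:]  # now aligned to command[:len(command) - delay]

def invert_first_order_lag(y: np.ndarray, tau: float, dt: float) -> np.ndarray:
    """Recover the fast signal x from a lagged sensor reading y, assuming
    the sensor obeys tau * dy/dt + y = x, so x = y + tau * dy/dt."""
    return y + tau * np.gradient(y, dt)
```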
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing needed for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, so as to prevent extrapolation during the optimization process, has been proposed and demonstrated. Separate from the issue of extrapolation is that of preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but that are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
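A second-order linear dynamic constraint model of the kind described can be sketched as follows; the natural frequency, damping ratio, and explicit-Euler discretization are illustrative assumptions rather than the study's identified model.

```python
# Hedged sketch of a second-order linear dynamic constraint model: it maps a
# commanded parameter trace to the trajectory a real actuator or air-handling
# path could plausibly achieve. wn, zeta and dt are assumed values.
import numpy as np

def achieved_from_commanded(commanded: np.ndarray, wn: float = 2.0,
                            zeta: float = 0.9, dt: float = 0.1) -> np.ndarray:
    """Integrate x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u with explicit Euler."""
    x, v = float(commanded[0]), 0.0
    achieved = np.empty(len(commanded))
    for k, u in enumerate(commanded):
        a = wn ** 2 * (u - x) - 2.0 * zeta * wn * v  # acceleration term
        v += a * dt
        x += v * dt
        achieved[k] = x
    return achieved
```

In such a scheme it is the achieved trace, not the commanded one, that would feed the transient emission and torque models during optimization.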
Abstract:
Introduction: Small animal models are widely used in basic research. However, experimental systems requiring extracorporeal circuits are frequently confronted with limitations related to equipment size. This is particularly true for oxygenators in systems with limited volumes. We therefore aimed to develop and validate an ultra mini-oxygenator for low-volume, buffer-perfused systems. Methods: We manufactured a series of ultra mini-oxygenators, each with approximately 175 aligned, microporous, polypropylene hollow fibers contained inside a shell sealed at both extremities to isolate the perfusate and gas compartments. With this construction, gas passes through the hollow fibers, while perfusate circulates around them. Performance of the ultra mini-oxygenators (oxygen partial pressure (PO2), gas and perfusate flow, perfusate pressure and temperature drop) was assessed with modified Krebs-Henseleit buffer in an in vitro perfusion circuit and an ex vivo rat heart preparation. Results: Mean priming volume of the ultra mini-oxygenators was 1.2±0.5 mL and, on average, 86±6% of fibers were open (n=17). In vitro, effective oxygenation (PO2=400-500 mmHg) was achieved at all flow rates up to 50 mL/min and remained stable for at least 2 hours (n=5). Oxygenation was also effective and stable (PO2=456±40 mmHg) in the isolated heart preparation for at least 60 minutes ("venous" PO2=151±11 mmHg; n=5). Conclusions: We have established a reproducible procedure for the fabrication of ultra mini-oxygenators, which provide reliable and stable oxygenation for at least 60-120 min. These oxygenators are especially attractive for pre-clinical protocols using small, rather than large, animals.
Abstract:
This study investigates the effect of cell phones on economic development and growth by performing an econometric analysis using data from the International Telecommunication Union and the Penn World Table. It discusses the various ways cell phones can make markets more efficient and how the diffusion of information and knowledge plays into development. Several approaches (OLS, fixed effects, 2SLS) were used to test over 20 econometric models. Overall, the mobile cellular subscription rate was found to have a positive and significant impact on countries' level of real per capita GDP and on their GDP growth rate. Furthermore, the study provides policy implications for the use of technology to promote global growth.
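The simplest of the specifications mentioned (pooled OLS) might look like the sketch below; the column names and placeholder data are hypothetical, and the fixed-effects and 2SLS estimators would require additional panel structure and instruments.

```python
# Hedged sketch of a pooled OLS growth regression with robust standard
# errors; variables and data are stand-ins, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "log_gdp_per_capita": rng.normal(size=120),       # placeholder outcome
    "mobile_subs_per_100": rng.uniform(0, 100, 120),  # key regressor
    "investment_share": rng.uniform(0, 1, 120),       # example control
})
X = sm.add_constant(df[["mobile_subs_per_100", "investment_share"]])
fit = sm.OLS(df["log_gdp_per_capita"], X).fit(cov_type="HC1")
print(fit.params, fit.pvalues, sep="\n")
```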
Abstract:
This research tests the hypothesis that knowledge of derivational morphology facilitates vocabulary acquisition in beginning adult second language learners. Participants were monolingual English-speaking college students aged 18 years and older enrolled in introductory Spanish courses. Knowledge of Spanish derivational morphology was tested through the use of a forced-choice translation task. Spanish lexical knowledge was measured by a translation task using direct translation (English word) primes and conceptual (picture) primes. A 2×2×2 mixed-factor ANOVA examined the relationships between morphological knowledge (strong, moderate), error type (form-based, conceptual), and prime type (direct translation, picture). The results are consistent with the existence of a relationship between knowledge of derivational morphology and acquisition of second language vocabulary. Participants made more conceptually based errors than form-based errors, F(1, 22) = 7.744, p = .011. This result is consistent with Clahsen & Felser's (2006) and Ullman's (2004) models of second language processing. Additionally, participants with strong morphological knowledge made fewer errors on the lexical knowledge task than participants with moderate morphological knowledge, t(23) = -2.656, p = .014. I suggest future directions to clarify the relationship between morphological knowledge and lexical development in adult second language learners.
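For illustration, the reported group comparison, t(23) = -2.656, has the form of an independent-samples t-test, which could be run as sketched below; the error counts are invented placeholders, not the study's data.

```python
# Hedged sketch of the between-group comparison; data are hypothetical.
from scipy import stats

strong_errors = [4, 6, 5, 3, 7, 5, 4, 6, 5, 4, 6, 5]        # placeholder counts
moderate_errors = [8, 7, 9, 6, 10, 8, 7, 9, 8, 7, 9, 8, 6]  # placeholder counts
t, p = stats.ttest_ind(strong_errors, moderate_errors)
dof = len(strong_errors) + len(moderate_errors) - 2          # here: 23
print(f"t({dof}) = {t:.3f}, p = {p:.3f}")
```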
Abstract:
Development of novel implants in orthopaedic trauma surgery is based on limited datasets from cadaver trials or artificial bone models. A method has been developed whereby implants can be constructed in an evidence-based manner, founded on a large anatomic database consisting of more than 2,000 bone datasets extracted from CT scans. The aim of this study was the development and clinical application of an anatomically pre-contoured plate for the treatment of distal fibular fractures based on this anatomical database. 48 Caucasian and Asian bone models (left and right) from the database were used for the preliminary optimization process and validation of the fibula plate. The implant was constructed to fit bilaterally in a lateral position on the fibula. A biomechanical comparison of the designed implant to the current gold standard in the treatment of distal fibular fractures (locking 1/3 tubular plate) was then conducted. Finally, a clinical surveillance study was performed to evaluate the grade of implant fit achieved. The results showed that, with a virtual anatomic database, it was possible to design a fibula plate with an optimized fit for a large proportion of the population. Biomechanical testing showed the novel fibula plate to be superior to 1/3 tubular plates in 4-point bending tests. The clinical application showed a very high degree of primary implant fit; further intra-operative implant bending was necessary in only a small minority of cases. The goal of developing an implant for the treatment of distal fibular fractures based on the evidence of a large anatomical database was therefore attained. Biomechanical testing showed good results regarding stability, and the clinical application confirmed the high grade of anatomical fit.
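One plausible way to score implant fit against the database's bone models is a nearest-surface-distance metric, sketched below; the point-cloud representation and RMS criterion are assumptions for illustration, not the authors' published fit measure.

```python
# Hedged sketch: score how well a pre-contoured plate fits a bone model by
# the RMS gap between their surface point clouds; names are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def fit_rms_gap(plate_pts: np.ndarray, bone_pts: np.ndarray) -> float:
    """RMS distance from each plate surface point to the nearest bone point."""
    dists, _ = cKDTree(bone_pts).query(plate_pts)
    return float(np.sqrt(np.mean(dists ** 2)))

# e.g. count how many database bones a candidate plate shape fits:
# fits = sum(fit_rms_gap(plate, bone) < tol_mm for bone in bone_models)
```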
Abstract:
Over the last decades, considerable efforts have been undertaken in the development of animal models mimicking the pathogenesis of allergic diseases occurring in humans. The mouse has rapidly emerged as the animal model of choice, due to considerations of handling and costs and, importantly, due to the availability of a large and increasing arsenal of genetically modified mouse strains and molecular tools facilitating the analysis of complex disease models. Here, we review the latest developments in allergy research that have arisen from in vivo experimentation in the mouse, with a focus on models of food allergy and allergic asthma, which constitute major health problems with increasing incidence in industrialized countries. We highlight recent novel findings and controversies in the field, most of which were obtained through the use of gene-deficient or germ-free mice, and discuss new potential therapeutic approaches that have emerged from animal studies and that aim at attenuating allergic reactions in human patients.
Abstract:
Among the cestodes, Echinococcus granulosus, Echinococcus multilocularis and Taenia solium represent the most dangerous parasites. Their larval stages cause the diseases cystic echinococcosis (CE), alveolar echinococcosis (AE) and cysticercosis, respectively, which raise considerable medical and veterinary health concerns and have a profound economic impact. Diseases caused by other cestodes, such as species of the genera Mesocestoides and Hymenolepis, are relatively rare in humans. In this review, we focus on E. granulosus and E. multilocularis metacestode laboratory models and review the use of these models in the search for novel drugs that could be employed for the chemotherapeutic treatment of echinococcosis. Clearly, improved therapeutic drugs are needed for the treatment of AE and CE, and this can only be achieved through the development of medium-to-high-throughput screening approaches. The most recent achievements in the in vitro culture and genetic manipulation of E. multilocularis cells and metacestodes, and the accessibility of the E. multilocularis genome and EST sequence information, have rendered the E. multilocularis model uniquely suited for studies on drug efficacy and drug target identification. This could lead to the development of novel compounds for use in chemotherapy against echinococcosis, possibly against diseases caused by other cestodes, and potentially also against those caused by trematodes.
Abstract:
With the advent of cheaper and faster DNA sequencing technologies, assembly methods have changed greatly. Instead of outputting reads that are thousands of base pairs long, new sequencers parallelize the task by producing read lengths between 35 and 400 base pairs. Reconstructing an organism's genome from these millions of reads is a computationally expensive task. Our algorithm solves this problem by organizing and indexing the reads using n-grams, which are short, fixed-length DNA sequences of length n. These n-grams are used to efficiently locate putative read joins, thereby eliminating the need to perform an exhaustive search over all possible read pairs. Our goal was to develop a novel n-gram method for the assembly of genomes from next-generation sequencers. Specifically, a probabilistic, iterative approach was utilized to determine the most likely reads to join, through the development of a new metric that models the probability of any two arbitrary reads being joined together. Tests were run using simulated short-read data based on randomly created genomes ranging in length from 10,000 to 100,000 nucleotides with 16× to 20× coverage. We were able to successfully reassemble entire genomes up to 100,000 nucleotides in length.
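The core indexing step can be sketched as follows: reads are hashed by every length-n substring, so only reads sharing an n-gram are considered as join candidates. The simple candidate enumeration below stands in for the paper's probabilistic join metric, which it does not attempt to reproduce.

```python
# Hedged sketch of n-gram read indexing to restrict the join search space.
from collections import defaultdict

def build_ngram_index(reads, n=12):
    """Map each n-gram to the set of read ids containing it."""
    index = defaultdict(set)
    for rid, read in enumerate(reads):
        for i in range(len(read) - n + 1):
            index[read[i:i + n]].add(rid)
    return index

def candidate_joins(reads, index, n=12):
    """Enumerate read pairs that share at least one n-gram."""
    pairs = set()
    for rid, read in enumerate(reads):
        for i in range(len(read) - n + 1):
            for other in index[read[i:i + n]]:
                if other != rid:
                    pairs.add((min(rid, other), max(rid, other)))
    return pairs

reads = ["ACGTACGTGGTA", "CGTGGTATTCGA"]   # toy reads sharing a 6-gram
index = build_ngram_index(reads, n=6)
print(candidate_joins(reads, index, n=6))  # -> {(0, 1)}
```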
Abstract:
Success in any field depends on a complex interplay among environmental and personal factors. A key set of personal factors for success in academic settings are those associated with self-regulated learning (SRL). Self-regulated learners choose their own goals, select and organize their learning strategies, and self-monitor their effectiveness. Behaviors and attitudes consistent with self-regulated learning also contribute to self-confidence, which may be important for members of underrepresented groups such as women in engineering. This exploratory study, drawing on the concept of "critical mass", examines the relationship between the personal factors that identify a self-regulated learner and the environmental factors related to the gender composition of engineering classrooms. Results indicate that a relatively gender-balanced student body in the classroom and a gender match between students and their instructors support the development of many adaptive SRL behaviors and attitudes.
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: multivariate regression, neural networks and the k-nearest neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that used averaged predictions from a set of 10 pre-selected input spaces chosen on the basis of the training data, and a "Minimum Variance Committee" technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (the "Best Combination" technique), the Simple Committee technique and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-nearest neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or in the calibration of the underlying GT-Power model.
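The two committee ideas can be sketched in a few lines; the `predict` interface and the transformed-input-space callables below are hypothetical stand-ins for the trained models and the GT-Power-derived transformations.

```python
# Hedged sketch of the two committee techniques; `models` and `spaces` are
# assumed objects: trained predictors with .predict(x) and input-space
# transformations callable as s(x).
import numpy as np

def simple_committee(models, spaces, x):
    """Average predictions over a fixed set of pre-selected input spaces
    (one trained model per space)."""
    return float(np.mean([m.predict(s(x)) for m, s in zip(models, spaces)]))

def min_variance_committee(method_models, spaces, x):
    """Pick, per prediction, the input space where the three modeling
    methods disagree least, then average their predictions there."""
    best = min(spaces,
               key=lambda s: np.var([m.predict(s(x)) for m in method_models]))
    return float(np.mean([m.predict(best(x)) for m in method_models]))
```

The minimum-variance rule encodes the abstract's selection criterion directly: low spread across the three methods in a given transformed space is taken as evidence that the space supports a trustworthy prediction.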