902 results for Hemerythrin Model Complex
Abstract:
The three alpha2-adrenoceptor (alpha2-AR) subtypes belong to the G protein-coupled receptor superfamily and represent potential drug targets. These receptors have many vital physiological functions, but their actions are complex and often oppose each other. Current research is therefore driven towards discovering drugs that selectively interact with a specific subtype. Cell model systems can be used to evaluate a chemical compound's activity in complex biological systems. The aim of this thesis was to optimize and validate cell-based model systems and assays to investigate alpha2-ARs as drug targets. The use of immortalized cell lines as model systems is firmly established but poses several problems, since the protein of interest is expressed in a foreign environment, and thus essential components of receptor regulation or signaling cascades might be missing. Careful cell model validation is thus required; this was exemplified by three different approaches. In cells heterologously expressing alpha2A-ARs, it was noted that the transfection technique affected the test outcome; false negative adenylyl cyclase test results were produced unless a cell population expressing receptors in a homogeneous fashion was used. Recombinant alpha2C-ARs in non-neuronal cells were retained inside the cells, and not expressed in the cell membrane, complicating investigation of this receptor subtype. Receptor expression enhancing proteins (REEPs) were found to be neuronal-specific adapter proteins that regulate the processing of the alpha2C-AR, resulting in an increased level of total receptor expression. Current trends call for the use of primary cells endogenously expressing the receptor of interest; therefore, primary human vascular smooth muscle cells (SMC) expressing alpha2-ARs were tested in a functional assay monitoring contractility with a myosin light chain phosphorylation assay. However, these cells were not compatible with this assay due to the loss of differentiation.
A rat aortic SMC cell line transfected to express the human alpha2B-AR was adapted for the assay, and it was found that the alpha2-AR agonist, dexmedetomidine, evoked myosin light chain phosphorylation in this model.
Abstract:
The development of carbon capture and storage (CCS) has raised interest towards novel fluidised bed (FB) energy applications. In these applications, limestone can be utilized for SO2 and/or CO2 capture. The conditions in the new applications differ from the traditional atmospheric and pressurised circulating fluidised bed (CFB) combustion conditions in which limestone is successfully used for SO2 capture. In this work, a detailed physical single-particle model for limestone, with a description of the mass and energy transfer inside the particle, was developed. The novelty of this model was to take into account the simultaneous reactions, changing conditions, and the effect of advection. In particular, the capability to study the cyclic behaviour of limestone on both sides of the calcination-carbonation equilibrium curve is important in the novel conditions. The significance of including advection, or of assuming diffusion control, was studied for calcination. In particular, the effect of advection on the calcination reaction in the novel combustion atmosphere was shown. The model was tested against experimental data; sulphur capture was studied in a laboratory reactor in different fluidised bed conditions. Different conversion levels and sulphation patterns were examined in different atmospheres for one limestone type. The conversion curves were well predicted with the model, and the mechanisms leading to the conversion patterns were explained with the model simulations. In this work, it was also evaluated whether the transient environment has an effect on the limestone behaviour compared to the averaged conditions, and in which conditions the effect is the largest. The difference between the averaged and transient conditions was notable only in conditions close to the calcination-carbonation equilibrium curve.
The results of this study suggest that the development of a simplified particle model requires a proper understanding of the physical and chemical processes taking place in the particle during the reactions. The results of the study will be required when analysing complex limestone reaction phenomena or when developing the description of limestone behaviour in comprehensive 3D process models. In order to transfer the experimental observations to furnace conditions, the relevant mechanisms need to be understood before the important ones can be selected for the 3D process model. This study revealed the sulphur capture behaviour under transient oxy-fuel conditions, which is important when the oxy-fuel CFB process and process model are developed.
Abstract:
An augmented reality (AR) device must know the observer's location and orientation, i.e. the observer's pose, to correctly register the virtual content to the observer's view. One possible way to determine and continuously track the pose is model-based visual tracking. It assumes that a 3D model of the surroundings is known and that a video camera is fixed to the device. The pose is tracked by comparing the video camera image to the model. Each new pose estimate is usually based on the previous estimate. However, the first estimate must be found without a prior estimate, i.e. the tracking must be initialized, which in practice means that some features must be identified in the image and matched to model features. This is known in the literature as the model-to-image registration problem or the simultaneous pose and correspondence problem. This report reviews visual tracking initialization methods that are suitable for visual tracking in a shipbuilding environment when the ship CAD model is available. The environment is complex, which makes the initialization non-trivial. The report was produced as part of the MARIN project.
Abstract:
The open innovation paradigm states that the boundaries of the firm have become permeable, allowing knowledge to flow inwards and outwards, accelerating internal innovation and taking unused knowledge to the external environment, respectively. The successful implementation of open innovation practices in firms like Procter & Gamble, IBM, and Xerox, among others, suggests that it is a sustainable trend which could provide a basis for achieving competitive advantage. However, implementing open innovation can be a complex process which involves several domains of management, and whose terminology, classification, and practices have not been fully agreed upon. Thus, with many possible ways to address open innovation, the following research question was formulated: How could Ericsson LMF assess which open innovation mode to select depending on the attributes of the project at hand? The research followed the constructive research approach, which has the following steps: find a practically relevant problem, obtain a general understanding of the topic, innovate the solution, demonstrate that the solution works, show the theoretical contributions, and examine the scope of applicability of the solution. The research involved three phases of data collection and analysis: an extensive literature review of open innovation, strategy, business models, innovation, and knowledge management; direct observation of the environment of the case company through participative observation; and semi-structured interviews based on six cases involving multiple and heterogeneous open innovation initiatives. Results from the cases suggest that the selection of modes depends on multiple factors, with a stronger influence of those related to strategy, business models, and resource gaps. Based on these and other factors found in the literature review and observations, it was possible to construct a model that supports approaching open innovation.
The model integrates perspectives from multiple domains of the literature review, observations inside the case company, and factors from the six open innovation cases. It provides steps, guidelines, and tools to approach open innovation and to assess the selection of modes. Measuring the impact of open innovation can take years; thus, implementing and testing the model in its entirety was not possible due to time limitations. Nevertheless, it was possible to validate the core elements of the model with empirical data gathered from the cases. In addition to constructing the model, this research contributed to the literature by increasing the understanding of open innovation, providing suggestions to the case company, and proposing future steps.
Abstract:
Despite extensive genetic and immunological research, the complex etiology and pathogenesis of type I diabetes remains unresolved. During the last few years, our attention has been focused on factors such as abnormalities of islet function and/or microenvironment, that could interact with immune partners in the spontaneous model of the disease, the non-obese diabetic (NOD) mouse. Intriguingly, the first anomalies that we noted in NOD mice, compared to control strains, are already present at birth and consist of 1) higher numbers of paradoxically hyperactive β cells, assessed by in situ preproinsulin II expression; 2) high percentages of immature islets, representing islet neogenesis related to neonatal β-cell hyperactivity and suggestive of in utero β-cell stimulation; 3) elevated levels of some types of antigen-presenting cells and FasL+ cells, and 4) abnormalities of extracellular matrix (ECM) protein expression. However, the colocalization in all control mouse strains studied of fibroblast-like cells (anti-TR-7 labeling), some ECM proteins (particularly, fibronectin and collagen I), antigen-presenting cells and a few FasL+ cells at the periphery of islets undergoing neogenesis suggests that remodeling phenomena that normally take place during postnatal pancreas development could be disturbed in NOD mice. These data show that from birth onwards there is an intricate relationship between endocrine and immune events in the NOD mouse. They also suggest that tissue-specific autoimmune reactions could arise from developmental phenomena taking place during fetal life in which ECM-immune cell interaction(s) may play a key role.
Abstract:
Wind energy has attracted outstanding expectations due to the risks of global warming and nuclear power plant accidents. Nowadays, wind farms are often constructed in areas of complex terrain. A potential wind farm location must have the site thoroughly surveyed and the wind climatology analyzed before installing any hardware. Therefore, modeling of Atmospheric Boundary Layer (ABL) flows over complex terrains containing, e.g., hills, forest, and lakes is of great interest in wind energy applications, as it can help in locating and optimizing wind farms. Numerical modeling of wind flows using Computational Fluid Dynamics (CFD) has become a popular technique during the last few decades. Due to the inherent flow variability and large-scale unsteadiness typical of ABL flows in general, and especially over complex terrains, the flow can be difficult to predict accurately enough using the Reynolds-Averaged Navier-Stokes (RANS) equations. Large-Eddy Simulation (LES) resolves the largest, and thus most important, turbulent eddies and models only the small-scale motions, which are more universal than the large eddies and thus easier to model. Therefore, LES is expected to be more suitable for this kind of simulation, although it is computationally more expensive than the RANS approach. With the fast development of computers and open-source CFD software in recent years, the application of LES to atmospheric flows is becoming increasingly common. The aim of this work is to simulate atmospheric flows over realistic and complex terrains by means of LES. Evaluation of potential in-land wind park locations will be the main application for these simulations. Development of the LES methodology to simulate atmospheric flows over realistic terrains is reported in the thesis. The work also aims at validating the LES methodology at a real scale.
In the thesis, LES are carried out for flow problems ranging from basic channel flows to real atmospheric flows over one of the most recent real-life complex terrain problems, the Bolund hill. All the simulations reported in the thesis are carried out using a new OpenFOAM®-based LES solver. The solver uses the 4th-order time-accurate Runge-Kutta scheme and a fractional step method. Moreover, development of the LES methodology includes special attention to two boundary conditions: the upstream (inflow) and wall boundary conditions. The upstream boundary condition is generated by using the so-called recycling technique, in which the instantaneous flow properties are sampled on a plane downstream of the inlet and mapped back to the inlet at each time step. This technique develops the upstream boundary-layer flow together with the inflow turbulence without using any precursor simulation, and thus within a single computational domain. The roughness of the terrain surface is modeled by implementing a new wall function into OpenFOAM® during the thesis work. Both the recycling method and the newly implemented wall function are validated for channel flows at relatively high Reynolds number before applying them to the atmospheric flow applications. After validating the LES model for simple flows, the simulations are carried out for atmospheric boundary-layer flows over two types of hills: first, two-dimensional wind-tunnel hill profiles, and second, the Bolund hill located in Roskilde Fjord, Denmark. For the two-dimensional wind-tunnel hills, the study focuses on the overall flow behavior as a function of the hill slope. Moreover, the simulations are repeated using another wall function suitable for smooth surfaces, which already existed in OpenFOAM®, in order to study the sensitivity of the flow to the surface roughness in ABL flows. The simulated results obtained using the two wall functions are compared against the wind-tunnel measurements.
It is shown that LES using the implemented wall function produces overall satisfactory results for the turbulent flow over the two-dimensional hills. The prediction of the flow separation and reattachment length for the steeper hill is closer to the measurements than the other numerical studies reported in the past for the same hill geometry. The field measurement campaign performed over the Bolund hill provides the most recent field-experiment dataset for the mean flow and the turbulence properties. A number of research groups have simulated the wind flows over the Bolund hill. Due to the challenging features of the hill, such as the almost vertical hill slope, it is considered an ideal experimental test case for validating micro-scale CFD models for wind energy applications. In this work, the simulated results obtained for two wind directions are compared against the field measurements. It is shown that the present LES can reproduce the complex turbulent wind flow structures over a complicated terrain such as the Bolund hill. In particular, the present LES results show the best prediction of the turbulent kinetic energy, with an average error of 24.1%, which is 43% smaller than any other model results reported in the past for the Bolund case. Finally, the validated LES methodology is demonstrated by simulating the wind flow over the existing Muukko wind farm located in South-Eastern Finland. The simulation is carried out for only one wind direction, and the results for the instantaneous and time-averaged wind speeds are briefly reported. The demonstration case is followed by discussions on the practical aspects of LES for wind resource assessment over a realistic inland wind farm.
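The recycling inflow idea described in this abstract can be sketched in a few lines. The following is a minimal illustrative sketch only, not the thesis' OpenFOAM® implementation; it assumes the inlet and recycling planes are stored as 2D arrays, and all names are hypothetical:

```python
import numpy as np

def recycle_inflow(u_plane, u_target_mean):
    """Map the instantaneous velocity sampled on a downstream plane back to
    the inlet, rescaled so the plane-averaged profile matches a target.

    u_plane: (ny, nz) instantaneous streamwise velocity on the recycling plane.
    u_target_mean: (ny,) prescribed mean inlet profile.
    """
    # Spanwise-averaged profile on the recycling plane.
    mean_profile = u_plane.mean(axis=1)            # shape (ny,)
    # Fluctuations about that mean carry the inflow turbulence.
    fluctuations = u_plane - mean_profile[:, None]  # shape (ny, nz)
    # Impose the target mean while keeping the instantaneous fluctuations.
    return u_target_mean[:, None] + fluctuations
```

In a solver loop, a function like this would be called once per time step with the freshly sampled recycling plane, and the returned field would be imposed as the inlet boundary condition, so the boundary layer and its turbulence develop within a single domain.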
Abstract:
We investigated the anti-inflammatory, antinociceptive and ulcerogenic activity of a zinc-diclofenac complex (5.5 or 11 mg/kg) in male Wistar rats (180-300 g, N = 6) and compared it to free diclofenac (5 or 10 mg/kg) and to the combination of diclofenac (5 or 10 mg/kg) and zinc acetate (1.68 or 3.5 mg/kg). The carrageenin-induced paw edema and the cotton pellet-induced granulomatous tissue formation models were used to assess the anti-inflammatory activity, and the Hargreaves model of thermal hyperalgesia was used to assess the antinociceptive activity. To investigate the effect of orally or intraperitoneally (ip) administered drugs on cold-induced gastric lesions, single doses were administered before exposing the animals to a freezer (-18°C) for 45 min in individual cages. We also evaluated the gastric lesions induced by multiple doses of the drugs. Diclofenac plus zinc complex had the same anti-inflammatory and antinociceptive effects as diclofenac alone. Gastric lesions induced by a single dose administered per os and ip were reduced in the group treated with zinc-diclofenac when compared to the groups treated with free diclofenac or diclofenac plus zinc acetate. In the multiple dose treatment, the complex induced a lower number of the most severe lesions when compared to free diclofenac and diclofenac plus zinc acetate. In conclusion, the present study demonstrates that the zinc-diclofenac complex may represent an important therapeutic alternative for the treatment of rheumatic and inflammatory conditions, as its use may be associated with a reduced incidence of gastric lesions.
Abstract:
We describe the behavior of the snail Megalobulimus abbreviatus upon receiving thermal stimuli and the effects of pretreatment with morphine and naloxone on behavior after a thermal stimulus, in order to establish a useful model for nociceptive experiments. Snails submitted to non-functional (22°C) and non-thermal hot-plate stress (30°C) only displayed exploratory behavior. However, the animals submitted to a thermal stimulus (50°C) displayed biphasic avoidance behavior. Latency was measured from the time the animal was placed on the hot plate to the time when the animal lifted the head-foot complex 1 cm from the substrate, indicating aversive thermal behavior. Other animals were pretreated with morphine (5, 10, 20 mg/kg) or naloxone (2.5, 5.0, 7.5 mg/kg) 15 min prior to receiving a thermal stimulus (50°C; N = 9 in each group). The results (means ± SD) showed an extremely significant difference in response latency between the group treated with 20 mg/kg morphine (63.18 ± 14.47 s) and the other experimental groups (P < 0.001). With 2.5 mg/kg (16.26 ± 3.19 s), 5.0 mg/kg (11.53 ± 1.64 s) and 7.5 mg/kg naloxone (7.38 ± 1.6 s), there was a significant, not dose-dependent decrease in latency compared to the control (33.44 ± 8.53 s) and saline groups (29.1 ± 9.91 s). No statistically significant difference was found between the naloxone-treated groups. With naloxone plus morphine, there was a significant decrease in latency when compared to all other groups (minimum 64% in the saline group and maximum 83.2% decrease in the morphine group). These results provide evidence of the involvement of endogenous opioid peptides in the control of thermal withdrawal behavior in this snail, and reveal a stereotyped and reproducible avoidance behavior for this snail species, which could be studied in other pharmacological and neurophysiological studies.
Abstract:
Renal ischemia-reperfusion (IR) injury is the major cause of acute renal failure in native and transplanted kidneys. Mononuclear leukocytes have been reported in renal tissue as part of the innate and adaptive responses triggered by IR. We investigated the participation of CD4+ T lymphocytes in the pathogenesis of renal IR injury. Male mice (C57BL/6, 8 to 12 weeks old) were submitted to 45 min of ischemia by renal pedicle clamping followed by reperfusion. We evaluated the role of CD4+ T cells using a monoclonal depleting antibody against CD4 (GK1.5, 50 µ, ip), and class II-major histocompatibility complex molecule knockout mice. Both CD4-depleted groups showed a marked improvement in renal function compared to the ischemic group, despite the fact that GK1.5 mAb treatment promoted a profound CD4 depletion (to less than 5% compared to normal controls) only within the first 24 h after IR. CD4-depleted groups presented a significant improvement in 5-day survival (84 vs 80 vs 39%; antibody treated, knockout mice and non-depleted groups, respectively) and also a significant reduction in the tubular necrosis area with an early tubular regeneration pattern. The peak of CD4-positive cell infiltration occurred on day 2, coinciding with the high expression of βC mRNA and increased urea levels. CD4 depletion did not alter the CD11b infiltrate or the IFN-γ and granzyme-B mRNA expression in renal tissue. These data indicate that a CD4+ subset of T lymphocytes may be implicated as key mediators of very early inflammatory responses after renal IR injury and that targeting CD4+ T lymphocytes may yield novel therapies.
Abstract:
SRY-related high-mobility-group box 9 (Sox9) gene is a cartilage-specific transcription factor that plays essential roles in chondrocyte differentiation and cartilage formation. The aim of this study was to investigate the feasibility of genetic delivery of Sox9 to enhance chondrogenic differentiation of human umbilical cord blood-derived mesenchymal stem cells (hUC-MSCs). After they were isolated from human umbilical cord blood within 24 h after delivery of neonates, hUC-MSCs were untreated or transfected with a human Sox9-expressing plasmid or an empty vector. The cells were assessed for morphology and chondrogenic differentiation. The isolated cells with a fibroblast-like morphology in monolayer culture were positive for the MSC markers CD44, CD105, CD73, and CD90, but negative for the differentiation markers CD34, CD45, CD19, CD14, or major histocompatibility complex class II. Sox9 overexpression induced accumulation of sulfated proteoglycans, without altering the cellular morphology. Immunocytochemistry demonstrated that genetic delivery of Sox9 markedly enhanced the expression of aggrecan and type II collagen in hUC-MSCs compared with empty vector-transfected counterparts. Reverse transcription-polymerase chain reaction analysis further confirmed the elevation of aggrecan and type II collagen at the mRNA level in Sox9-transfected cells. Taken together, short-term Sox9 overexpression facilitates chondrogenesis of hUC-MSCs and may thus have potential implications in cartilage tissue engineering.
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining this data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper, and embedded algorithms. The examined machine learning algorithms were demonstrated to be not only effective at predicting the disease phenotypes, but also efficient, through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended so that it could be implemented on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm.
It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype–phenotype relationships and biological insights from genetic data sets.
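As a concrete illustration of the nested cross-validation mentioned above, here is a minimal self-contained sketch. It is illustrative only: the toy threshold "model" and all names are hypothetical, not the algorithms studied in the thesis. The outer loop estimates prediction accuracy while the inner loop selects a hyperparameter, so the held-out outer fold never influences model selection:

```python
import random
from statistics import mean

def k_fold_indices(n, k, seed=0):
    # Shuffle sample indices once, then deal them into k disjoint folds.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def accuracy(fold, threshold, data):
    # Toy rule-based "model": predict class 1 when the feature sum
    # exceeds the threshold (no fitting step, for brevity).
    return mean(1.0 if (sum(x) > threshold) == bool(y) else 0.0
                for x, y in (data[i] for i in fold))

def nested_cv(data, thresholds, outer_k=5, inner_k=3):
    outer_folds = k_fold_indices(len(data), outer_k)
    outer_scores = []
    for i, test_fold in enumerate(outer_folds):
        train_idx = [j for f in outer_folds[:i] + outer_folds[i + 1:] for j in f]
        # Inner CV on the training portion only: select the hyperparameter.
        inner_folds = [train_idx[j::inner_k] for j in range(inner_k)]
        best_t = max(thresholds, key=lambda t: mean(
            accuracy(fold, t, data) for fold in inner_folds))
        # Evaluate the selected model on the untouched outer test fold.
        outer_scores.append(accuracy(test_fold, best_t, data))
    return mean(outer_scores)
```

The key point is that `best_t` is chosen using only `train_idx`; reusing the same folds for both selection and evaluation is exactly what produces the overly optimistic accuracies described above.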
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing.
We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability, rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor tool support or the lack of it. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
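To make the idea of generating tests from behavioral models concrete, the sketch below enumerates bounded event sequences from a toy state machine. This is a simplified stand-in for the UML models used in the thesis; the dictionary encoding and all names are hypothetical:

```python
from collections import deque

def generate_test_sequences(transitions, start, max_depth=3):
    """Enumerate event sequences of bounded length from a behavioral model
    given as {state: {event: next_state}}. Each sequence is a candidate
    abstract test case to be concretized against the implementation."""
    tests, queue = [], deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if path:
            tests.append(path)
        if len(path) < max_depth:
            # Breadth-first expansion: extend the path by every enabled event.
            for event, next_state in transitions.get(state, {}).items():
                queue.append((next_state, path + [event]))
    return tests
```

Real model-based testing tools add coverage criteria (e.g. all transitions) and oracles derived from the model; for a performance test in the spirit described above, many such sequences would be executed concurrently while response times are observed.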
Abstract:
By increasing the efficiency of a wind turbine gearbox, more power can be transferred from the rotor blades to the generator, and less power is lost to wear and heating in the gearbox. By using a simulation model, the behavior of the gearbox can be studied before creating expensive prototypes. The objective of the thesis is to model a wind turbine gearbox and its lubrication system in order to study power losses and heat transfer inside the gearbox, and to study the simulation methods of the software used. The software used to create the simulation model is Siemens LMS Imagine.Lab AMESim, which can be used to create one-dimensional mechatronic system simulation models from different fields of engineering. By combining components from different libraries, it is possible to create a simulation model which includes mechanical, thermal, and hydraulic models of the gearbox. Results for the mechanical, thermal, and hydraulic simulations are presented in the thesis. Due to the large scale of the wind turbine gearbox and the amount of power transmitted, power loss calculations from the AMESim software are inaccurate, and power losses are therefore modelled as a constant efficiency for each gear mesh. Starting values for the thermal and hydraulic simulations were chosen from test measurements and from empirical study, as the compact and complex design of the gearbox prevents accurate test measurements. To increase the accuracy of the simulation model in further studies, the components used for power loss calculations need to be modified, and values for unknown variables need to be determined through accurate test measurements.
Abstract:
Building a computational model for complex biological systems is an iterative process. It starts from an abstraction of the process and then incorporates more details regarding the specific biochemical reactions, which changes the model fit. Meanwhile, the model's numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. There is an alternative approach, known as quantitative model refinement, which ensures that the model fit is preserved without the need to refit the model after each refinement iteration. The aim of this thesis is to develop and implement a tool called ModelRef which performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool successfully reduces the time and resources needed, as well as the errors generated, by the traditional reiteration of the whole model to perform the fitting procedure.
Abstract:
Dr. Gibson and others look at a model of the Brock Campus that includes the proposed Academic Staging Building (Mackenzie Chown Complex).