975 results for Test systems
Abstract:
BACKGROUND: In contrast to RIA, recently available ELISAs provide the potential for fully automated analysis of adiponectin. To date, studies reporting the diagnostic characteristics of ELISAs and investigating the relationship between ELISA- and RIA-based methods are rare. METHODS: We therefore established and evaluated a fully automated platform (BEP 2000; Dade-Behring, Switzerland) for determination of adiponectin levels in serum by two different ELISA methods (competitive human adiponectin ELISA and high-sensitivity human adiponectin sandwich ELISA; both Biovendor, Czech Republic). As a reference method, we also employed a human adiponectin RIA (Linco Research, USA). Samples from 150 patients routinely presenting to our cardiology unit were tested. RESULTS: ELISA measurements could be accomplished in less than 3 h, whereas the RIA required 24 h. The ELISAs were evaluated for precision, analytical sensitivity and specificity, linearity on dilution and spiking recovery. In the investigated patients, type 2 diabetes, higher age and male gender were significantly associated with lower serum adiponectin concentrations. Correlations between the ELISA methods and the RIA were strong (competitive ELISA, r=0.82; sandwich ELISA, r=0.92; both p<0.001). However, Deming regression and Bland-Altman analysis indicated a lack of agreement among the three methods, preventing direct comparison of results. The equations of the regression lines are: competitive ELISA = 1.48 x RIA - 0.88; high-sensitivity sandwich ELISA = 0.77 x RIA + 1.01. CONCLUSIONS: Fully automated measurement of adiponectin by ELISA is feasible and substantially more rapid than RIA. The investigated ELISA test systems appear to exhibit analytical characteristics that allow clinical application, and they correlate strongly with RIA. These findings might promote a more widespread use of adiponectin measurements in clinical research.
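As a rough illustration of how the reported regression equations could be applied, the following Python sketch converts RIA values into the readings expected from each ELISA and computes simple Bland-Altman statistics; the input values are hypothetical, not study data.

```python
import numpy as np

# Regression equations reported in the abstract:
#   competitive ELISA           = 1.48 * RIA - 0.88
#   high-sensitivity sandwich ELISA = 0.77 * RIA + 1.01
def ria_to_competitive_elisa(ria):
    """Predicted competitive-ELISA reading for a given RIA value."""
    return 1.48 * np.asarray(ria) - 0.88

def ria_to_sandwich_elisa(ria):
    """Predicted sandwich-ELISA reading for a given RIA value."""
    return 0.77 * np.asarray(ria) + 1.01

# Hypothetical paired measurements (illustrative only)
ria = np.array([4.0, 8.0, 12.0, 20.0])
elisa = ria_to_sandwich_elisa(ria)

# Bland-Altman statistics: mean difference (bias) and 95% limits of agreement
diff = elisa - ria
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"bias={bias:.2f}, limits of agreement={loa[0]:.2f} to {loa[1]:.2f}")
```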
Abstract:
In rapidly evolving domains such as Computer Assisted Orthopaedic Surgery (CAOS), emphasis is often put first on innovation and new functionality rather than on developing the common infrastructure needed to support integration and reuse of these innovations. In fact, developing such an infrastructure is often considered a high-risk venture given the volatility of such a domain. We present CompAS, a method that exploits the very evolution of innovations in the domain to carry out the necessary quantitative and qualitative commonality and variability analysis, especially when system documentation is scarce. We show how our technique applies to the CAOS domain by using conference proceedings as a key source of information about the evolution of features in CAOS systems over a period of several years. We detect and classify evolution patterns to determine functional commonality and variability, and we identify non-functional requirements to help capture domain variability. We have validated our approach by evaluating the degree to which representative test systems can be covered by the common and variable features produced by our analysis.
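A minimal Python sketch of the kind of commonality/variability split and coverage check described above, assuming a hypothetical record of feature mentions per proceedings year; the feature names, the 80% commonality threshold and the coverage measure are illustrative assumptions, not CompAS itself.

```python
from collections import Counter

# Hypothetical feature mentions per proceedings year (illustrative only)
feature_mentions = {
    2003: {"bone_registration", "optical_tracking", "intraop_imaging"},
    2004: {"bone_registration", "optical_tracking", "navigated_cutting"},
    2005: {"bone_registration", "optical_tracking", "navigated_cutting", "robot_assistance"},
}

def split_common_variable(mentions, common_ratio=0.8):
    """Features present in >= common_ratio of the years are treated as common,
    the rest as variable (a crude commonality/variability split)."""
    counts = Counter(f for feats in mentions.values() for f in feats)
    n_years = len(mentions)
    common = {f for f, c in counts.items() if c / n_years >= common_ratio}
    variable = set(counts) - common
    return common, variable

def coverage(test_system_features, common, variable):
    """Fraction of a test system's features covered by the derived feature model."""
    model = common | variable
    return len(test_system_features & model) / len(test_system_features)

common, variable = split_common_variable(feature_mentions)
print(coverage({"bone_registration", "optical_tracking", "fluoro_navigation"}, common, variable))
```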
Abstract:
One of the current advances in functional biodiversity research is the move away from short-lived test systems towards the exploration of diversity-ecosystem functioning relationships in structurally more complex ecosystems. In forests, assumptions about the functional significance of tree species diversity have only recently produced a new generation of research on ecosystem processes and services. Novel experimental designs have now replaced traditional forestry trials, but these comparatively young experimental plots suffer from specific difficulties mainly related to tree size and longevity. Tree species diversity experiments therefore need to be complemented with comparative observational studies in existing forests. Here we present the design and implementation of a new network of forest plots along tree species diversity gradients in six major European forest types: the FunDivEUROPE Exploratory Platform. Based on a review of the deficiencies of existing observational approaches and of unresolved research questions and hypotheses, we discuss the fundamental criteria that shaped the design of our platform. Key features include the extent of the species diversity gradient with mixtures of up to five species, strict avoidance of a dilution gradient, special attention to community evenness and minimal covariation with other environmental factors. The new European research platform permits the most comprehensive assessment of tree species diversity effects on forest ecosystem functioning to date, since it offers a common set of research plots to groups of researchers from very different disciplines and uses the same methodological approach in contrasting forest types along an extensive environmental gradient. (C) 2013 Elsevier GmbH. All rights reserved.
Abstract:
Inhalation of ambient air particles or of engineered nanoparticles (NP) handled as powders, dispersions or sprays in industrial processes and contained in consumer products poses a potential and largely unknown risk of incidental exposure. For an efficient, economical and ethically sound evaluation of the health hazards of inhaled nanomaterials, animal-free and realistic in vitro test systems are desirable. The new Nano Aerosol Chamber for in-vitro Toxicity studies (NACIVT) has been developed and fully characterized regarding its performance. NACIVT features computer-controlled temperature and humidity conditioning, preventing cellular stress during exposure and allowing long-term exposures. Airborne NP are deposited out of a continuous air stream simultaneously on up to 24 cell cultures on Transwell® inserts, allowing high-throughput screening. In NACIVT, polystyrene as well as silver particles were deposited uniformly and efficiently on all 24 Transwell® inserts. Particle-cell interaction studies confirmed that deposited particles reach the cell surface and can be taken up by cells. As demonstrated in control experiments, there was no evidence of adverse effects on human bronchial epithelial cells (BEAS-2B) due to the exposure treatment in NACIVT. The new, fully integrated and transportable deposition chamber NACIVT provides a promising tool for reliable acute and sub-acute dose-response studies of (nano)particles in air-exposed tissues cultured at the air-liquid interface.
Abstract:
11beta-Hydroxysteroid dehydrogenase type 1 (11beta-HSD1), which catalyzes the intracellular activation of cortisone to cortisol, is currently considered a promising target for treating patients with metabolic syndrome; hence, there is considerable interest in the development of selective inhibitors. For preclinical tests of such inhibitors, the characteristics of 11beta-HSD1 from the commonly used species have to be known. We therefore determined differences in substrate affinity and inhibitor effects for 11beta-HSD1 from six species. The differences in catalytic activities with cortisone and 11-dehydrocorticosterone were rather modest. Human, hamster and guinea-pig 11beta-HSD1 displayed the highest catalytic efficiency in the oxoreduction of cortisone, while mouse and rat showed intermediate and dog the lowest activity. Murine 11beta-HSD1 most efficiently reduced 11-dehydrocorticosterone, while the enzyme from dog showed lower activity than those from the other species. 7-Ketocholesterol (7KC) was stereospecifically converted to 7beta-hydroxycholesterol by recombinant 11beta-HSD1 from all species analyzed except hamster, which showed a slight preference for the formation of 7alpha-hydroxycholesterol. Importantly, guinea-pig and canine 11beta-HSD1 displayed very low 7-oxoreductase activities. Furthermore, we demonstrate significant species-specific variability in the potency of various 11beta-HSD1 inhibitors, including endogenous compounds, natural chemicals and pharmaceutical compounds. The results suggest significant differences in the three-dimensional organization of the hydrophobic substrate-binding pocket of 11beta-HSD1, and they emphasize that species-specific variability must be considered in the interpretation of results obtained from different animal experiments. The assessment of such differences using cell-based test systems may help to choose the appropriate animal model for safety and efficacy studies of novel potential drug candidates.
Abstract:
Among all classes of nanomaterials, silver nanoparticles (AgNPs) have potentially an important ecotoxicological impact, especially in freshwater environments. Fish are particularly susceptible to the toxic effects of silver ions and, with knowledge gaps regarding the contribution of dissolution and unique particle effects to AgNP toxicity, they represent a group of vulnerable organisms. Using cell lines (RTL-W1, RTH-149, RTG-2) and primary hepatocytes of rainbow trout (Oncorhynchus mykiss) as in vitro test systems, we assessed the cytotoxicity of the representative AgNP, NM-300K, and AgNO3 as an Ag+ ion source. Lack of AgNP interference with the cytotoxicity assays (AlamarBlue, CFDA-AM, NRU assay) and their simultaneous application point to the compatibility and usefulness of such a battery of assays. The RTH-149 and RTL-W1 liver cell lines exhibited similar sensitivity as primary hepatocytes towards AgNP toxicity. Leibovitz's L-15 culture medium composition (high amino acid content) had an important influence on the behaviour and toxicity of AgNPs towards the RTL-W1 cell line. The obtained results demonstrate that, with careful consideration, such an in vitro approach can provide valuable toxicological data to be used in an integrated testing strategy for NM-300K risk assessment.
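For illustration, a minimal Python sketch of how viability data from an assay such as AlamarBlue could be fitted with a four-parameter logistic curve to estimate an EC50; the concentrations and viability values are hypothetical, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve (viability vs concentration)."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** hill)

# Hypothetical AgNP concentrations (mg Ag/L) and normalised viability (illustrative only)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
viability = np.array([0.98, 0.95, 0.85, 0.55, 0.20, 0.08])

# Fit and report the estimated EC50 (third fitted parameter)
popt, _ = curve_fit(four_param_logistic, conc, viability,
                    p0=[0.0, 1.0, 2.0, 1.0], maxfev=10000)
print(f"estimated EC50 ~ {popt[2]:.2f} mg/L")
```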
Abstract:
The conversion of prothrombin (FII) to the serine protease, thrombin (FIIa), is a key step in the coagulation cascade because FIIa triggers platelet activation, converts fibrinogen to fibrin, and activates regulatory pathways that both promote and ultimately suppress coagulation. However, several observations suggest that FII may serve a broader physiological role than simply stemming blood loss, including the identification of multiple G protein-coupled, thrombin-activated receptors, and the well-documented mitogenic activity of FIIa in in vitro test systems. To explore in greater detail the physiological roles of FII in vivo, FII-deficient (FII−/−) mice were generated. Inactivation of the FII gene leads to partial embryonic lethality with more than one-half of the FII−/− embryos dying between embryonic days 9.5 and 11.5. Bleeding into the yolk sac cavity and varying degrees of tissue necrosis were observed in many FII−/− embryos within this gestational time frame. However, at least one-quarter of the FII−/− mice survived to term, but ultimately they, too, developed fatal hemorrhagic events and died within a few days of birth. This study directly demonstrates that FII is important in maintaining vascular integrity during development as well as postnatal life.
Abstract:
The phytochemical resveratrol, which is found in grapes and wine, has been reported to have a variety of anti-inflammatory, anti-platelet, and anti-carcinogenic effects. Based on its structural similarity to diethylstilbestrol, a synthetic estrogen, we examined whether resveratrol might be a phytoestrogen. At concentrations (≈3–10 μM) comparable to those required for its other biological effects, resveratrol inhibited the binding of labeled estradiol to the estrogen receptor and it activated transcription of estrogen-responsive reporter genes transfected into human breast cancer cells. This transcriptional activation was estrogen receptor-dependent, required an estrogen response element in the reporter gene, and was inhibited by specific estrogen antagonists. In some cell types (e.g., MCF-7 cells), resveratrol functioned as a superagonist (i.e., produced a greater maximal transcriptional response than estradiol) whereas in others it produced activation equal to or less than that of estradiol. Resveratrol also increased the expression of native estrogen-regulated genes, and it stimulated the proliferation of estrogen-dependent T47D breast cancer cells. We conclude that resveratrol is a phytoestrogen and that it exhibits variable degrees of estrogen receptor agonism in different test systems. The estrogenic actions of resveratrol broaden the spectrum of its biological actions and may be relevant to the reported cardiovascular benefits of drinking wine.
Abstract:
The Distribution System Expansion Planning (PESD) problem aims to determine guidelines for expanding the network in view of growing consumer demand. In this context, electric power distribution utilities are responsible for proposing actions in the distribution system so that the energy supply meets the standards required by regulatory agencies. Traditionally, only the minimization of the global investment cost of expansion plans is considered, neglecting reliability and robustness of the system. As a consequence, the resulting expansion plans drive the distribution system towards configurations that are vulnerable to large load curtailments when contingencies occur in the network. This work develops a methodology to incorporate reliability and risk considerations into the traditional PESD problem, in order to choose expansion plans that maximize network robustness and, consequently, mitigate the damage caused by contingencies in the system. A multiobjective model of the PESD problem was formulated in which two objectives are minimized: the global cost (which incorporates investment, maintenance, operation and energy production costs) and the implementation risk of expansion plans. For both objectives, mixed-integer linear models are formulated and solved with the CPLEX solver through the GAMS software. To manage the search for optimal solutions, two evolutionary algorithms were implemented in C++: the Non-dominated Sorting Genetic Algorithm-2 (NSGA2) and the Strength Pareto Evolutionary Algorithm-2 (SPEA2). These algorithms proved effective in this search, as verified through expansion planning simulations of two test systems adapted from the literature. The set of solutions found in the simulations contains expansion plans with different levels of global cost and implementation risk, highlighting the diversity of the proposed solutions. Some of these topologies are illustrated to show their differences.
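A minimal Python sketch of the Pareto-dominance test underlying NSGA2 and SPEA2, applied to hypothetical expansion plans described by (global cost, implementation risk) pairs; the numbers are illustrative only.

```python
def dominates(a, b):
    """True if plan a dominates plan b when both objectives (cost, risk) are minimised."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(plans):
    """Return the first non-dominated front, the core ranking step of NSGA2."""
    return [p for p in plans if not any(dominates(q, p) for q in plans if q is not p)]

# Hypothetical expansion plans as (global cost, implementation risk) pairs
plans = [(120.0, 0.30), (100.0, 0.45), (140.0, 0.20), (110.0, 0.50), (130.0, 0.25)]
print(non_dominated_front(plans))
# -> [(120.0, 0.30), (100.0, 0.45), (140.0, 0.20), (130.0, 0.25)]
```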
Abstract:
The objective of this work is the investigation and development of continuous and discrete optimization strategies for Optimal Power Flow (OPF) problems in which the control variables associated with in-phase transformer taps and the switching of shunt capacitor and reactor banks must be treated as discrete variables, and in which the number of control actions must be limited or even minimized. In this work the OPF problem is addressed through three strategies. In the first, the OPF problem is modeled as a nonlinear programming problem with continuous and discrete variables (PNLCD) for the minimization of active transmission losses; three approaches using discretization functions are proposed to handle the discrete variables. In the second, the OPF problem, with discrete transformer taps and fixed shunt capacitor and reactor banks, is subject to a limit on the number of control actions; binary variables associated with the number of control actions are handled by a quadratic function. In the third, the OPF problem is modeled as a multiobjective optimization problem; the weighted-sum and ε-constraint methods are used to convert the proposed multiobjective problems into single-objective ones, and the binary variables associated with control actions are handled by two functions, one sigmoidal and one polynomial. To verify the effectiveness and robustness of the developed models and algorithms, tests are carried out with the IEEE 14-, 30-, 57-, 118- and 300-bus systems. All algorithms and models were implemented in the General Algebraic Modeling System (GAMS), and the CONOPT, IPOPT, KNITRO and DICOPT solvers were used to solve the problems. The results confirm that the discretization strategies are efficient and that the proposed models for the binary variables make it possible to find feasible solutions for the problems involving control actions, whereas the DICOPT and KNITRO solvers, when used to model the binary variables directly, do not find solutions.
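A minimal Python sketch of penalty functions of the kind mentioned above for handling binary control variables in a relaxed continuous formulation; the exact polynomial and sigmoidal forms used in the work are not given in the abstract, so the functions below are illustrative assumptions.

```python
import numpy as np

def polynomial_penalty(u):
    """Quadratic-type penalty: zero only at u = 0 or u = 1, maximal at u = 0.5."""
    return u * (1.0 - u)

def sigmoidal_penalty(u, k=20.0):
    """Smooth sigmoid-based penalty: approximately zero near 0 and 1, large in between."""
    s = 1.0 / (1.0 + np.exp(-k * (u - 0.5)))   # ~0 for u << 0.5, ~1 for u >> 0.5
    return 4.0 * s * (1.0 - s)                  # peaks at u = 0.5, vanishes at the extremes

# Adding a weighted sum of such penalties to the objective discourages fractional
# values of the relaxed binary variables u in a continuous NLP solve.
u = np.linspace(0.0, 1.0, 5)
print(polynomial_penalty(u))   # -> 0, 0.1875, 0.25, 0.1875, 0
print(sigmoidal_penalty(u))
```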
Abstract:
The Introduction gives a brief résumé of the biologically important aspects of 5-aminoimidazole-4-carboxamide (1) and explores, in depth, the synthetic routes to this imidazole. All documented reactions of 5-aminoimidazole-4-carboxamide are reviewed in detail, with particular emphasis on the preparation and subsequent coupling reactions of 5-diazoimidazole-4-carboxamide (6). A series of thirteen novel 5-amino-2-arylazoimidazole-4-carboxamide derivatives (117-129) were prepared by the coupling of aryldiazonium salts with 5-aminoimidazole-4-carboxamide. Chemical modification of these azo-dyes resulted in the preparation of eight previously unknown acyl derivatives (136-143). Interaction of 5-amino-2-arylazoimidazole-4-carboxamides with ethyl formate in sodium ethoxide effected pyrimidine ring closure to the novel 8-arylazohypoxanthines (144 and 145). Several reductive techniques were employed in an effort to obtain the elusive 2,5-diaminoimidazole-4-carboxamide (71), a candidate chemotherapeutic agent, from the arylazoimidazoles. No success can be reported, although 5-amino-2-(3-aminoindazol-2-yl)imidazole-4-carboxamide (151) was isolated owing to partial reduction and intramolecular cyclisation of 5-amino-2-(2-cyanophenylazo)imidazole-4-carboxamide (122). Further possible synthetic approaches to the diaminoimidazole are discussed in Chapter 4. An interesting degradation of a known unstable nitrohydrazone is described in Chapter 5. This resulted in formation of 1,1-bis(pyrazol-3-ylazo)-1-nitroethane (164) instead of the expected cyclisation to a bicyclic tetrazine N-oxide. An improved preparation of 5-diazoimidazole-4-carboxamide has been achieved, and the diazo-azole formed cycloadducts with isocyanates to yield the hitherto unknown imidazo[5,1-d][1,2,3,5]tetrazin-7(6H)-ones. Eleven derivatives (167-177) of this new ring-system were prepared and characterised. Chemical and spectroscopic investigation showed this ring-system to be unstable under certain conditions, and a comparative study of stability within the group has been made. "Retro-cycloaddition" under protic and photolytic conditions was an unexpected property of 6-substituted imidazo[5,1-d][1,2,3,5]tetrazin-7(6H)-ones. Selected examples of the imidazotetrazinone ring-system were tested for antitumour activity. The results of biological evaluation are given in Chapter 7, and have culminated in a patent application by the collaborating body, May and Baker Ltd. One compound, 3-carbamoyl-6-(2-chloroethyl)imidazo[5,1-d][1,2,3,5]tetrazin-7(6H)-one (175), shows striking antitumour activity in rodent test systems.
Abstract:
In deregulated power markets it is necessary to have an appropriate transmission pricing methodology that also takes congestion and reliability into account, in order to ensure economically viable, equitable and congestion-free power transfer capability with high reliability and security. This thesis presents results of research on the development of a Decision Making Framework (DMF) of concepts, data-analytic and modelling methods for reliability-benefit-reflective optimal evaluation of transmission cost for composite power systems, using probabilistic methods. The methodology within the DMF utilises a full AC Newton-Raphson load flow and a Monte-Carlo approach to determine reliability indices, which are then used in the proposed Meta-Analytical Probabilistic Approach (MAPA) for the evaluation and calculation of the Reliability-benefit Reflective Optimal Transmission Cost (ROTC) of a transmission system. The DMF includes methods for allocating embedded transmission line costs among transmission transactions, accounting for line capacity use, as well as congestion costing that can be used for pricing, applying Power Transfer Distribution Factors (PTDF) and Bialek's tracing method; together these constitute the series of methods and procedures, explained in detail in the thesis, that form the proposed MAPA for the ROTC. The MAPA utilises bus data, generator data, line data, reliability data and Customer Damage Function (CDF) data for congestion, transmission and reliability costing studies, using the proposed PTDF application and other established methods, which are compared, analysed and selected according to the area/state requirements and then integrated to develop the ROTC. Case studies involving the standard 7-bus, IEEE 30-bus and 146-bus Indian utility test systems are conducted and reported in the relevant sections of the dissertation. There is close correlation between results obtained through the proposed PTDF application and those of Bialek's method and different MW-Mile methods. The novel contributions of this research are: first, the application of the PTDF method developed for determining transmission and congestion costs, compared against other proven methods; the viability of the developed method is explained in the methodology, discussion and conclusion chapters. Second, the development of a comprehensive DMF which helps decision makers analyse and select a costing approach according to their requirements, since all the costing approaches in the DMF have been integrated to achieve the ROTC. Third, the composite methodology for calculating the ROTC has been implemented as a suite of algorithms and MATLAB programs for each part of the DMF, described further in the methodology section. Finally, the dissertation concludes with suggestions for future work.
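As a rough illustration of PTDF-based usage allocation in an MW-Mile-style charge, the following Python sketch computes the charge for one transaction on a hypothetical three-line system; the PTDFs, line lengths and cost rates are invented for illustration and do not correspond to the test systems studied.

```python
import numpy as np

# Hypothetical 3-line system: per-line PTDFs for one transaction (injection at one
# bus, withdrawal at another), line lengths (miles) and per-MW-mile cost rates.
ptdf = np.array([0.6, 0.3, 0.1])        # fraction of the transacted MW flowing on each line
length = np.array([50.0, 80.0, 120.0])  # miles
rate = np.array([1.2, 1.0, 0.8])        # $/MW-mile (illustrative)

def mw_mile_charge(transaction_mw, ptdf, length, rate):
    """MW-Mile-style charge: flow caused on each line (via PTDF) x length x rate."""
    line_flows = np.abs(ptdf) * transaction_mw   # MW imposed on each line
    return float(np.sum(line_flows * length * rate))

print(mw_mile_charge(100.0, ptdf, length, rate))  # charge in $ for a 100 MW transaction
```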
Abstract:
We present ideas about creating a next-generation Intrusion Detection System (IDS) based on the latest immunological theories. The central challenge with computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats, in conjunction with ever larger IT systems, urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems (AIS): the Human Immune System (HIS) can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System for our computers? Presumably, such systems would then have the same beneficial properties as the HIS, such as error tolerance, adaptation and self-monitoring. Current AIS have been successful on test systems, but the algorithms rely on self-nonself discrimination, as stipulated in classical immunology. However, immunologists are increasingly finding fault with traditional self-nonself thinking, and a new 'Danger Theory' (DT) is emerging. This new theory suggests that the immune system reacts to threats based on the correlation of various (danger) signals, and it provides a method of 'grounding' the immune response, i.e. linking it directly to the attacker. Little is currently understood of the precise nature and correlation of these signals, and the theory is a topic of hot debate. It is the aim of this research to investigate this correlation and to translate the DT into the realm of computer security, thereby creating AIS that are no longer limited by self-nonself discrimination. It should be noted that we do not intend to defend this controversial theory per se, although as a deliverable this project will add to the body of knowledge in this area. Rather, we are interested in its merits for scaling up AIS applications by overcoming self-nonself discrimination problems.
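A minimal Python sketch of the danger-signal correlation idea: an event is flagged only when several weighted danger signals co-occur, rather than by matching the event against a self/non-self profile. The signal names, weights and threshold are illustrative assumptions, not the project's algorithm.

```python
# Illustrative danger signals and weights (e.g. CPU spikes, error bursts,
# unusual outbound traffic); values are assumptions, not from the project.
DANGER_SIGNALS = {"cpu_spike": 0.3, "error_burst": 0.3, "unusual_egress": 0.4}

def danger_score(observed_signals):
    """Weighted combination of the danger signals co-occurring in one event window."""
    return sum(w for name, w in DANGER_SIGNALS.items() if name in observed_signals)

def classify(event, threshold=0.6):
    """Flag the event as an attack candidate only when enough signals coincide."""
    return "alert" if danger_score(event["signals"]) >= threshold else "ignore"

print(classify({"source": "10.0.0.7", "signals": {"cpu_spike", "unusual_egress"}}))  # alert
```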
Abstract:
The intestinal mucosa is the first biological barrier encountered by natural toxins and could be exposed to high amounts of dietary mycotoxins. Patulin (PAT), a mycotoxin produced by Penicillium spp. during fruit spoilage, is one of the best known enteropathogenic mycotoxins able to alter functions of the intestine (Maresca et al., 2008). This study evaluated the effects of PAT on barrier function of the gut mucosa using the intestinal epithelial cell model Caco-2, and examined immunomodulatory effects using human peripheral blood mononuclear cells (PBMC) and human blood monocyte-derived dendritic cells (moDCs) as test systems. PAT exposure reduced Caco-2 cell viability at concentrations above 12 μM. As expected, the integrity of a polarized Caco-2 monolayer was affected by PAT exposure, as demonstrated by a decrease in TER values, which became more pronounced at 50 μM. No effects were detected on the expression levels of the tight junction proteins occludin, claudin-1 and claudin-3 at 50 μM; however, the expression of zonula occludens-1 (ZO-1) and myosin light chain 2 (MLC2) declined, and levels of phospho-MLC2 (p-MLC2) increased after 24 h of exposure to 50 μM PAT. T cell proliferation was highly sensitive to PAT, with major effects at concentrations above 10 nM. The same conditions did not affect the maturation of moDCs. PAT causes a reduction in Caco-2 barrier function mainly by perturbation of ZO-1 levels and the phosphorylation of MLC. Low doses of PAT strongly inhibited T cell proliferation induced by a polyclonal activator but had no effect on the maturation of moDCs. These results provide new information that strengthens the concept that the epithelium and immune cells of the intestinal mucosa are important targets for the toxic effects of food contaminants such as mycotoxins.