104 results for Matrix of complex negotiation


Relevance:

100.00%

Publisher:

Abstract:

Treatment of (R,R)-N,N-salicylidene cyclohexane 1,2-diamine (H(2)L(1)) in methanol with aqueous NH(4)VO(3) solution in perchloric acid medium affords the mononuclear oxovanadium(V) complex [VOL(1)(MeOH)]ClO(4) (1) as a deep blue solid, while treatment of the same solution of H(2)L(1) with an aqueous solution of VOSO(4) leads to the formation of the di-(mu-oxo)-bridged vanadium(V) complex [VO(2)L(2)](2) (2) as a green solid, where HL(2) = (R,R)-N-salicylidene cyclohexane 1,2-diamine. The ligand HL(2) is generated in situ by hydrolysis of one of the imine bonds of the H(2)L(1) ligand during the formation of complex [VO(2)L(2)](2) (2). Both compounds have been characterized by single-crystal X-ray diffraction as well as by spectroscopic methods. Compounds 1 and 2 act as catalysts for bromide oxidation and C-H bond oxidation in the presence of hydrogen peroxide. The representative substrates 2,4-dimethoxybenzoic acid and para-hydroxybenzoic acid are brominated in the presence of H(2)O(2) and KBr in acid medium using the above compounds as catalysts. The complexes are also used as catalysts for C-H bond activation of the representative hydrocarbons toluene, ethylbenzene and cyclohexane, with hydrogen peroxide acting as the terminal oxidant. The percentage yields and turnover numbers for these catalytic reactions are quite good. The oxidized hydrocarbon products have been characterized by GC analysis, while the brominated products have been characterized by (1)H NMR spectroscopy.
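The turnover numbers reported above follow the standard definition (moles of product formed per mole of catalyst). A minimal sketch, with purely hypothetical quantities for illustration:

```python
def turnover_number(mol_product: float, mol_catalyst: float) -> float:
    """Turnover number (TON): moles of product formed per mole of catalyst."""
    return mol_product / mol_catalyst

# Hypothetical figures: 0.9 mmol brominated product from 0.002 mmol catalyst.
print(turnover_number(0.9e-3, 0.002e-3))  # roughly 450 turnovers
```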


In addition to the expression of recombinant proteins, baculoviruses have been developed as a platform for the display of complex eukaryotic proteins on the surface of virus particles or infected insect cells. Surface display has been used extensively for antigen presentation and targeted gene delivery but is also a candidate for the display of protein libraries for molecular screening. However, although baculovirus gene libraries can be efficiently expressed and displayed on the surface of insect cells, target gene selection is inefficient, probably because superinfection gives rise to cells expressing more than one protein. In this report, baculovirus superinfection of Sf9 cells has been investigated using two recombinant multiple nucleopolyhedroviruses carrying green or red fluorescent proteins under the control of both early and late promoters (vAcBacGFP and vAcBacDsRed). Reporter gene expression was detected 8 hours after infection with vAcBacGFP, and cells in the early and late phases of infection could be distinguished by the fluorescence intensity of the expressed protein. Simultaneous infection with the vAcBacGFP and vAcBacDsRed viruses, each at an MOI of 0.5, resulted in 80% of infected cells coexpressing the two fluorescent proteins at 48 hours post infection (hpi), and subsequent infection with the two viruses resulted in a similar co-infection rate. Most Sf9 cells could be re-infected within the first several hours post infection, but the reinfection rate then decreased to a very low level by 16 hpi. Our data demonstrate that Sf9 cells are readily superinfectable during baculovirus infection, and that superinfection can occur simultaneously with the primary infection or subsequently during secondary infection by progeny viruses. The efficiency of superinfection may explain the difficulties of baculovirus display library screening but would benefit the production of complex proteins requiring co-expression of multiple polypeptides.
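As a rough illustration of why an 80% co-expression rate points to superinfection, a simple Poisson infection model (an assumption for illustration, not a model from the study) predicts far less co-infection from a single synchronous infection at these MOIs:

```python
from math import exp

def coinfection_fraction(moi_a: float, moi_b: float) -> float:
    """Fraction of infected cells hit by both viruses, assuming independent
    Poisson-distributed infection events and no superinfection."""
    p_a = 1.0 - exp(-moi_a)              # P(>= 1 particle of virus A)
    p_b = 1.0 - exp(-moi_b)
    p_any = 1.0 - exp(-(moi_a + moi_b))  # P(infected by at least one virus)
    return (p_a * p_b) / p_any

# At an MOI of 0.5 each, a single synchronous infection would give only
# ~24% co-infected cells among infected cells -- far below the ~80%
# observed at 48 hpi, consistent with superinfection by progeny virus.
print(round(coinfection_fraction(0.5, 0.5), 3))  # → 0.245
```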


This letter presents an effective approach for selection of appropriate terrain modeling methods in forming a digital elevation model (DEM). This approach achieves a balance between modeling accuracy and modeling speed. A terrain complexity index is defined to represent a terrain's complexity. A support vector machine (SVM) classifies terrain surfaces into either complex or moderate based on this index associated with the terrain elevation range. The classification result recommends a terrain modeling method for a given data set in accordance with its required modeling accuracy. Sample terrain data from the lunar surface are used in constructing an experimental data set. The results have shown that the terrain complexity index properly reflects the terrain complexity, and the SVM classifier derived from both the terrain complexity index and the terrain elevation range is more effective and generic than that designed from either the terrain complexity index or the terrain elevation range only. The statistical results have shown that the average classification accuracy of SVMs is about 84.3% ± 0.9% for terrain types (complex or moderate). For various ratios of complex and moderate terrain types in a selected data set, the DEM modeling speed increases up to 19.5% with given DEM accuracy.
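A minimal sketch of the two-feature SVM classification described above, using scikit-learn on synthetic data; the feature values and the labelling rule are invented here for illustration, not taken from the letter's lunar data set:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two classifier inputs: a terrain complexity
# index and an elevation range per terrain tile.
n = 200
complexity = rng.uniform(0.0, 1.0, n)
elev_range = rng.uniform(0.0, 1.0, n)
X = np.column_stack([complexity, elev_range])
# Hypothetical labelling rule: "complex" (1) when the combined score is high.
y = (0.6 * complexity + 0.4 * elev_range > 0.5).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy on the synthetic set
```

Using both features together mirrors the letter's finding that the combined classifier is more effective than one built on either feature alone.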


Accelerated climate change affects components of complex biological interactions differentially, often causing changes that are difficult to predict. Crop yield and quality are affected by climate change directly, and indirectly, through diseases that themselves will change but remain important. These effects are difficult to dissect and model as their mechanistic bases are generally poorly understood. Nevertheless, a combination of integrated modelling from different disciplines and multi-factorial experimentation will advance our understanding and prioritisation of the challenges. Food security brings in additional socio-economic, geographical and political factors. Enhancing resilience to the effects of climate change is important for all these systems and functional diversity is one of the most effective targets for improved sustainability.


Reaction Injection Moulding is a technology that enables the rapid production of complex plastic parts directly from a mixture of two reactive materials of low viscosity. The reactants are mixed in specific quantities and injected into a mould. This process allows large complex parts to be produced without the need for high clamping pressures. This chapter explores the simulation of the complex processes involved in reaction injection moulding. Because of the reaction processes, the dynamics of the material in the mould are in constant evolution; an effective model that takes full account of these changing dynamics is introduced and incorporated into finite element procedures, which provide a complete simulation of the cycle of mould filling and subsequent curing.


In this paper it is shown that a number of theoretical models of the acoustical properties of rigid frame porous media, especially those involving ratios of Bessel functions of complex argument, can be accurately approximated and greatly simplified by the use of Padé approximation techniques. In the case of the model of Attenborough [J. Acoust. Soc. Am. 81, 93–102 (1987)] rational approximations are produced for the characteristic impedance, propagation constant, dynamic compressibility, and dynamic density, as a function of frequency and the material parameters. The model proposed by Stinson and Champoux
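A small sketch of the Padé technique on a stand-in function: the [2/2] approximant of exp(x) computed with SciPy from its Taylor coefficients, rather than the actual Bessel-function ratios appearing in the acoustical models:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0 -- a simple stand-in for the
# Bessel-function ratios in the rigid-frame porous media models.
taylor = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]
p, q = pade(taylor, 2)        # [2/2] Pade approximant: p(x) / q(x)

x = 0.5
print(p(x) / q(x), np.exp(x))  # the rational form tracks exp closely near 0
```

The appeal for the acoustical models is the same as here: a low-order rational function in frequency is far cheaper to evaluate than the special functions it replaces.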


It is predicted that non-communicable diseases will account for over 73% of global mortality in 2020. Given that the majority of these deaths occur in developed countries such as the UK, and that up to 80% of chronic disease could be prevented through improvements in diet and lifestyle, it is imperative that dietary guidelines and disease prevention strategies are reviewed in order to improve their efficacy. Since the completion of the human genome project, our understanding of complex interactions between environmental factors such as diet and genes has progressed considerably, as has the potential to individualise diets using dietary, phenotypic and genotypic data. Thus, there is an ambition for dietary interventions to move away from population-based guidance towards 'personalised nutrition'. The present paper reviews current evidence for the public acceptance of genetic testing and personalised nutrition in disease prevention. Health and clear consumer benefits have been identified as key motivators in the uptake of genetic testing, with individuals reporting personal experience of disease, such as those with specific symptoms, being more willing to undergo genetic testing for the purpose of personalised nutrition. This greater perceived susceptibility to disease may also improve motivation to change behaviour, which is a key barrier to the success of any nutrition intervention. Several consumer concerns have been identified in the literature that should be addressed before the introduction of a nutrigenomic-based personalised nutrition service. Future research should focus on the efficacy and implementation of nutrigenomic-based personalised nutrition.


In this paper we consider the structure of dynamically evolving networks modelling information and activity moving across a large set of vertices. We adopt the communicability concept, which generalizes the notion of centrality defined for static networks. We define the primary network structure within the whole as comprising the most influential vertices (both as senders and receivers of dynamically sequenced activity). We present a methodology based on successive vertex knockouts, up to a very small fraction of the whole primary network, that can characterize the nature of the primary network as either relatively robust and lattice-like (with redundancies built in) or relatively fragile and tree-like (with sensitivities and few redundancies). We apply these ideas to the analysis of evolving networks derived from fMRI scans of resting human brains. We show that the estimation of performance parameters via the structure tests of the corresponding primary networks is subject to less variability than that observed across a very large population of such scans. Hence the differences within the population are significant.
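A minimal numerical sketch of the knockout methodology, using exp(A) as a static communicability matrix (an assumption: the paper's dynamic communicability is more elaborate). The star and ring graphs are toy stand-ins for fragile tree-like and robust lattice-like primary networks:

```python
import numpy as np

def communicability_matrix(A):
    """exp(A) via eigendecomposition of a symmetric adjacency matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def knockout_curve(A, k):
    """Knock out the most influential vertex (largest row sum of exp(A))
    k times, recording the surviving total communicability after each."""
    A = A.copy()
    curve = []
    for _ in range(k):
        C = communicability_matrix(A)
        v = int(np.argmax(C.sum(axis=1)))          # most influential vertex
        A = np.delete(np.delete(A, v, axis=0), v, axis=1)
        curve.append(float(communicability_matrix(A).sum()))
    return curve

n = 16
# Star graph: tree-like and fragile -- communicability collapses at the hub.
star = np.zeros((n, n))
star[0, 1:] = star[1:, 0] = 1.0
# Ring graph: lattice-like and robust -- it degrades gracefully.
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0

print(knockout_curve(star, 3))   # drops to isolated vertices immediately
print(knockout_curve(ring, 3))
```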


This paper presents the development of a rapid method with ultraperformance liquid chromatography–tandem mass spectrometry (UPLC-MS/MS) for the qualitative and quantitative analyses of plant proanthocyanidins directly from crude plant extracts. The method utilizes a range of cone voltages to achieve the depolymerization step in the ion source of both smaller oligomers and larger polymers. The formed depolymerization products are further fragmented in the collision cell to enable their selective detection. This UPLC-MS/MS method is able to separately quantitate the terminal and extension units of the most common proanthocyanidin subclasses, that is, procyanidins and prodelphinidins. The resulting data enable (1) quantitation of the total proanthocyanidin content, (2) quantitation of total procyanidins and prodelphinidins including the procyanidin/prodelphinidin ratio, (3) estimation of the mean degree of polymerization for the oligomers and polymers, and (4) estimation of how the different procyanidin and prodelphinidin types are distributed along the chromatographic hump typically produced by large proanthocyanidins. All of this is achieved within the 10 min period of analysis, which makes the presented method a significant addition to the chemistry tools currently available for the qualitative and quantitative analyses of complex proanthocyanidin mixtures from plant extracts.
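The mean degree of polymerization estimate in point (3) follows from the standard relation between terminal and extension units (each proanthocyanidin chain carries exactly one terminal unit); the quantities below are hypothetical:

```python
def mean_dp(terminal: float, extension: float) -> float:
    """Mean degree of polymerization: each chain carries exactly one
    terminal unit, so mDP = (terminal + extension) / terminal."""
    return (terminal + extension) / terminal

# Hypothetical quantities (e.g. mg/g extract) of procyanidin units:
pc_terminal, pc_extension = 2.0, 10.0
print(mean_dp(pc_terminal, pc_extension))  # → 6.0
```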


Automatic generation of classification rules has become an increasingly popular technique in commercial applications such as Big Data analytics, rule based expert systems and decision making systems. However, a principal problem that arises with most methods for generation of classification rules is the overfitting of training data. When Big Data is dealt with, this may result in the generation of a large number of complex rules. This may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the necessity of developing pruning methods for the simplification of rules. In addition, classification rules are used to make predictions after the completion of their generation. Where efficiency is concerned, it is desirable to find the first rule that fires as quickly as possible by searching through a rule set. Thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for the construction of rule based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations. They introduce some novel methods and techniques they have developed recently. These methods and techniques are also discussed in comparison to existing ones with respect to efficient processing of Big Data.
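The "first rule that fires" search described above can be sketched as a linear scan over an ordered rule list; this is a deliberately naive representation (the chapter's own rule representation structures aim to do better), with attribute names and rules invented for illustration:

```python
def first_fire(rules, instance, default="unclassified"):
    """Return the class of the first rule whose conditions all hold for
    the instance; fall back to a default class when no rule fires."""
    for conditions, label in rules:
        if all(instance.get(attr) == val for attr, val in conditions):
            return label
    return default

# Each rule: (list of (attribute, value) conditions, predicted class).
rules = [
    ([("outlook", "sunny"), ("humidity", "high")], "no"),
    ([("outlook", "overcast")], "yes"),
    ([], "yes"),   # catch-all rule with no conditions
]
print(first_fire(rules, {"outlook": "sunny", "humidity": "high"}))  # → no
print(first_fire(rules, {"outlook": "rain"}))                       # → yes
```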


The human ROCO proteins are a family of multi-domain proteins sharing a conserved ROC-COR supra-domain. The family has four members: leucine-rich repeat kinase 1 (LRRK1), leucine-rich repeat kinase 2 (LRRK2), death-associated protein kinase 1 (DAPK1) and malignant fibrous histiocytoma amplified sequences with leucine-rich tandem repeats 1 (MASL1). Previous studies of LRRK1/2 and DAPK1 have shown that the ROC (Ras of complex proteins) domain can bind and hydrolyse GTP, but the cellular consequences of this activity are still unclear. Here, the first biochemical characterization of MASL1 and the impact of GTP binding on MASL1 complex formation are reported. The results demonstrate that MASL1, similar to other ROCO proteins, can bind guanosine nucleotides via its ROC domain. Furthermore, MASL1 exists in two distinct cellular complexes associated with heat shock protein 60, and the formation of a low molecular weight pool of MASL1 is modulated by GTP binding. Finally, loss of GTP enhances MASL1 toxicity in cells. Taken together, these data point to a central role for the ROC/GTPase domain of MASL1 in the regulation of its cellular function.


Although Ca transport in plants is highly complex, the overexpression of vacuolar Ca2+ transporters in crops is a promising new technology to improve dietary Ca supplies through biofortification. Here, we sought to identify novel targets for increasing plant Ca accumulation using genetical and comparative genomics. Expression quantitative trait loci (eQTLs) mapping to 1895 cis- and 8015 trans-loci were identified in shoots of an inbred mapping population of Brassica rapa (IMB211 × R500); 23 cis- and 948 trans-eQTLs responded specifically to altered Ca supply. The eQTLs were screened for functional significance using a large database of shoot Ca concentration phenotypes of Arabidopsis thaliana. Of 31 Arabidopsis gene identifiers tagged to robust shoot Ca concentration phenotypes, 21 mapped to 27 B. rapa eQTLs, including orthologs of the Ca2+ transporters At-CAX1 and At-ACA8. Two of three independent missense mutants of BraA.cax1a, isolated previously by targeting induced local lesions in genomes, have allele-specific shoot Ca concentration phenotypes compared with their segregating wild types. BraA.CAX1a is a promising target for altering the Ca composition of Brassica, consistent with prior knowledge from Arabidopsis. We conclude that multiple-environment eQTL analysis of complex crop genomes combined with comparative genomics is a powerful technique for novel gene identification and prioritization.


Skillful and timely streamflow forecasts are critically important to water managers and emergency protection services. To provide these forecasts, hydrologists must predict the behavior of complex coupled human–natural systems using incomplete and uncertain information and imperfect models. Moreover, operational predictions often integrate anecdotal information and unmodeled factors. Forecasting agencies face four key challenges: 1) making the most of available data, 2) making accurate predictions using models, 3) turning hydrometeorological forecasts into effective warnings, and 4) administering an operational service. Each challenge presents a variety of research opportunities, including the development of automated quality-control algorithms for the myriad of data used in operational streamflow forecasts, data assimilation, and ensemble forecasting techniques that allow for forecaster input, methods for using human-generated weather forecasts quantitatively, and quantification of human interference in the hydrologic cycle. Furthermore, much can be done to improve the communication of probabilistic forecasts and to design a forecasting paradigm that effectively combines increasingly sophisticated forecasting technology with subjective forecaster expertise. These areas are described in detail to share a real-world perspective and focus for ongoing research endeavors.


This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem involving spectra from six different powder samples that, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for the classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required within a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis.
Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
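A toy sketch of a complex-valued extreme learning machine: random complex hidden weights and biases followed by a single least-squares solve for the output layer, with no iterative training. The "spectra" here are synthetic amplitude/phase features with an invented labelling rule, not the paper's terahertz data:

```python
import numpy as np

rng = np.random.default_rng(1)

def celm_train(X, T, hidden=50):
    """Complex-valued ELM: random complex hidden weights and biases, then
    one Moore-Penrose least-squares solve for the output weights."""
    d = X.shape[1]
    W = rng.standard_normal((d, hidden)) + 1j * rng.standard_normal((d, hidden))
    b = rng.standard_normal(hidden) + 1j * rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)            # complex hidden-layer activations
    beta = np.linalg.pinv(H) @ T      # single solve, no iterative training
    return W, b, beta

def celm_predict(X, W, b, beta):
    return np.real(np.tanh(X @ W + b) @ beta)

# Synthetic "spectra": amplitude as the real part, phase as the imaginary
# part, with a hypothetical labelling rule based on total amplitude.
n = 120
amp = rng.uniform(0.0, 1.0, (n, 4))
phase = rng.uniform(-np.pi, np.pi, (n, 4))
X = amp + 1j * phase
y = (amp.sum(axis=1) > 2.0).astype(float)

W, b, beta = celm_train(X, y)
acc = float(np.mean((celm_predict(X, W, b, beta) > 0.5) == y))
print(acc)   # training accuracy on the synthetic set
```

Encoding amplitude and phase in a single complex feature vector is what lets one network weigh both signatures at once, which is the motivation the paper gives for the complex-valued formulation.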


Incomplete understanding of three aspects of the climate system (equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing) and of the physical processes underlying them leads to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century1,2. Explorations of these uncertainties have so far relied on scaling approaches3,4, large ensembles of simplified climate models1,2, or small ensembles of complex coupled atmosphere–ocean general circulation models5,6, which under-represent uncertainties in key climate system properties derived from independent sources7–9. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report10, but extends towards larger warming than observed in ensembles-of-opportunity5 typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range 'no mitigation' scenario for greenhouse-gas emissions.