883 results for Complex polymerization method
Abstract:
In this paper, my aim is to address the twin concerns raised in this session - models of practice and geographies or spaces of practice - by considering a selection of works and processes that have arisen from my recent research. To set up this discussion, I first present a short critique of the idea of models of creative practice, recognising possible problems with the attempt to generalise or abstract its complexities. Working through a series of portraits of my working environment, I draw on Lefebvre’s Rhythmanalysis as a way of understanding an art practice both spatially and temporally, suggesting that changes and adjustments can occur through attending to both intuitions and observations of the complex of rhythmic layers constantly at play in any event. Reflecting on my recent studio practice, I explore these rhythms through the evocation of a twin axis - the horizontal and the vertical - and the arcs of difference or change that occur between them, in both spatial and temporal senses. This analysis suggests that understanding does not only emerge from the construction of general principles derived from observation of the particular; rather, the study of rhythms allows us to maintain the primacy of the particular. This makes it well suited to a study of creative methods and objects, since it is to the encounter with and expression of the particular that art practices, most certainly my own, are frequently directed.
Abstract:
This paper presents an efficient noniterative method for distribution state estimation using the conditional multivariate complex Gaussian distribution (CMCGD). In the proposed method, the mean and standard deviation (SD) of the state variables are obtained in one step, considering load uncertainties, measurement errors, and load correlations. First, the bus voltages, branch currents, and injection currents are represented by an MCGD using direct load flow and a linear transformation. Then, the mean and SD of the bus voltages, or of other states, are calculated using the CMCGD and an estimation-of-variance method. The mean and SD of pseudo measurements, as well as spatial correlations between pseudo measurements, are modeled from historical data for different levels of the load duration curve. The proposed method can handle load uncertainties without resorting to time-consuming approaches such as Monte Carlo simulation. Simulation results for two case studies, a six-bus network and a realistic 747-bus distribution network, show the effectiveness of the proposed method in terms of speed, accuracy, and quality against the conventional approach.
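The conditioning step at the heart of such an estimator can be illustrated with the standard Gaussian conditioning formulas. The sketch below is a minimal toy example, not the paper's full CMCGD estimator: the numbers, dimensions, and function name are illustrative assumptions.

```python
import numpy as np

# Conditioning a joint (complex) Gaussian: given the joint mean/covariance of
# states x and measurements y, the conditional mean and covariance of x | y
# follow the standard Gaussian conditioning formulas. Illustrative sketch only.
def condition_gaussian(mu_x, mu_y, S_xx, S_xy, S_yy, y_obs):
    K = S_xy @ np.linalg.inv(S_yy)           # gain matrix
    mu_post = mu_x + K @ (y_obs - mu_y)      # conditional mean of the states
    S_post = S_xx - K @ S_xy.conj().T        # conditional covariance
    return mu_post, S_post

# Toy example: two correlated complex bus-voltage states, one measurement.
mu_x = np.array([1.0 + 0.1j, 0.98 + 0.05j])
mu_y = np.array([0.5 + 0.0j])
S_xx = np.eye(2) * 0.04
S_xy = np.array([[0.01], [0.005]])
S_yy = np.array([[0.02]])
mean, cov = condition_gaussian(mu_x, mu_y, S_xx, S_xy, S_yy, np.array([0.55 + 0.02j]))
sd = np.sqrt(np.real(np.diag(cov)))          # per-state standard deviation
```

The mean and SD of every state come out of this single linear-algebra step, which is what makes the approach noniterative.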
Abstract:
For wind farm optimizations with lands belonging to different owners, the traditional penalty method is highly dependent on the type of wind farm land division, and its application can be cumbersome if the divisions are complex. To overcome this disadvantage, a new method is proposed in this paper for the first time. Unlike the penalty method, which requires adding a penalizing term when evaluating the fitness function, the new method repairs infeasible solutions before fitness evaluation. To assess its effectiveness, the optimization results of the different methods are compared for three different types of wind farm division. Different wind scenarios are also incorporated during optimization: (i) constant wind speed and wind direction; (ii) varying wind speed and wind direction; and (iii) the more realistic Weibull distribution. Results show that the performance of the new method varies across land plots in the tested cases. Nevertheless, optimum or at least close-to-optimum results can be obtained with a sequential land plot study using the new method in all cases. It is concluded that satisfactory results can be achieved using the proposed method. In addition, it offers flexibility in managing the wind farm design: it not only frees users from defining the penalty parameter but also imposes no limitations on the wind farm division.
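The core idea of repair-based constraint handling can be sketched in a few lines: instead of penalising layouts that place turbines outside the permitted land plots, infeasible positions are projected into the nearest feasible plot before the fitness (wake) model runs. The rectangular plot geometry and the clamping rule below are illustrative assumptions, not the paper's actual repair operator.

```python
# Repair-based constraint handling (sketch): snap an infeasible turbine
# position to the closest point of any allowed rectangular land plot.
PLOTS = [(0.0, 0.0, 50.0, 100.0), (60.0, 0.0, 100.0, 100.0)]  # (xmin, ymin, xmax, ymax)

def repair(x, y):
    """Project a turbine position into the closest rectangular plot."""
    best = None
    for xmin, ymin, xmax, ymax in PLOTS:
        cx = min(max(x, xmin), xmax)         # clamp into the rectangle
        cy = min(max(y, ymin), ymax)
        d2 = (cx - x) ** 2 + (cy - y) ** 2   # squared distance moved
        if best is None or d2 < best[0]:
            best = (d2, cx, cy)
    return best[1], best[2]

# A turbine dropped in the forbidden strip (50 < x < 60) is moved to the
# nearest plot boundary before fitness evaluation.
fixed = repair(54.0, 30.0)
```

Because every candidate reaching the fitness function is feasible by construction, no penalty parameter needs to be tuned, which is the flexibility the abstract refers to.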
Abstract:
Purified proteins are mandatory for molecular, immunological and cellular studies. However, purification of proteins from complex mixtures requires specialised chromatography methods (i.e., gel filtration, ion exchange, etc.) using fast protein liquid chromatography (FPLC) or high-performance liquid chromatography (HPLC) systems. Such systems are expensive, certain proteins require two or more different steps to reach sufficient purity, and recovery is generally low. The aim of this study was to develop a rapid, inexpensive and efficient gel-electrophoresis-based protein purification method using basic and readily available laboratory equipment. We used crude rye grass pollen extract to purify the major allergens Lol p 1 and Lol p 5 as the model protein candidates. Total proteins were resolved on a large primary gel, and Coomassie Brilliant Blue (CBB)-stained Lol p 1/5 allergens were excised and purified on a secondary "mini"-gel. Purified proteins were extracted from unstained separating gels and subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and immunoblot analyses. Silver-stained SDS-PAGE gels resolved pure proteins (i.e., 875 μg of Lol p 1 recovered from 8 mg of crude starting material), while immunoblot analysis confirmed the immunological reactivity of the purified proteins. Such a purification method is rapid, inexpensive, and efficient in generating proteins of sufficient purity for use in monoclonal antibody (mAb) production, protein sequencing and general molecular, immunological, and cellular studies.
Abstract:
Objective: To illustrate a new method for simplifying patient recruitment for advanced prostate cancer clinical trials using natural language processing techniques. Background: The identification of eligible participants for clinical trials is a critical factor in increasing patient recruitment rates and an important issue for the discovery of new treatment interventions. The current practice of identifying eligible participants is highly constrained by the manual processing of disparate sources of unstructured patient data. Informatics-based approaches can simplify the complex task of evaluating a patient’s eligibility for clinical trials. We show that an ontology-based approach can address the challenge of matching patients to suitable clinical trials. Methods: The free-text descriptions of clinical trial criteria as well as patient data were analysed. A set of common inclusion and exclusion criteria was identified through consultations with expert clinical trial coordinators. A research prototype was developed using the Unstructured Information Management Architecture (UIMA) that identified SNOMED CT concepts in the patient data and clinical trial descriptions. The SNOMED CT concepts model the standard clinical terminology that can be used to represent and evaluate a patient’s inclusion/exclusion criteria for a clinical trial. Results: Our experimental research prototype describes a semi-automated method for filtering patient records using common clinical trial criteria, which simplified the patient recruitment process. Discussions with clinical trial coordinators indicated that the efficiency of the patient recruitment process, measured in terms of information processing time, could be improved by 25%. Conclusion: A UIMA-based approach can resolve complexities in patient recruitment for advanced prostate cancer clinical trials.
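Once patient notes and trial criteria have been mapped to concept identifiers, the filtering step reduces to set operations. The sketch below assumes that mapping has already happened; the concept codes and the two-set eligibility rule are hypothetical simplifications of the prototype described above.

```python
# Semi-automated eligibility filtering (sketch): with patient records and
# trial criteria both expressed as sets of concept identifiers (hypothetical
# codes here), matching is simple set algebra.
def eligible(patient_concepts, inclusion, exclusion):
    """Patient must carry every inclusion concept and no exclusion concept."""
    return inclusion <= patient_concepts and not (exclusion & patient_concepts)

patient = {"254900004", "428262008"}   # hypothetical concept codes from notes
trial_inclusion = {"254900004"}        # must be present
trial_exclusion = {"77386006"}         # must be absent
ok = eligible(patient, trial_inclusion, trial_exclusion)
```

In practice the hard work is the concept extraction itself; the point of the ontology-based approach is that once extraction is done, eligibility screening becomes this cheap.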
Abstract:
A combined data matrix consisting of high performance liquid chromatography–diode array detector (HPLC–DAD) and inductively coupled plasma-mass spectrometry (ICP-MS) measurements of samples from the plant roots of Cortex moutan (CM) produced much better classification and prediction results than either of the individual data sets. The HPLC peaks (organic components) of the CM samples and the ICP-MS measurements (trace metal elements) were investigated using principal component analysis (PCA) and linear discriminant analysis (LDA); these essentially qualitative results suggested that discrimination of the CM samples from three different provinces was possible, with the combined matrix producing the best results. Three further methods, K-nearest neighbor (KNN), back-propagation artificial neural network (BP-ANN) and least squares support vector machines (LS-SVM), were applied for the classification and prediction of the samples. Again, the combined data matrix analyzed by the KNN method produced the best results (100% correct on the prediction set). Additionally, multiple linear regression (MLR) was used to explore relationships between the organic constituents and the metal elements of the CM samples; the extracted linear regression equations showed that the essential metals, as well as some metallic pollutants, were related to the organic compounds on the basis of their concentrations.
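The data-fusion-plus-KNN idea can be sketched with synthetic stand-ins: autoscale each instrument's block separately, concatenate the blocks column-wise into the combined matrix, and classify with a nearest-neighbour rule. The dimensions, offsets, and leave-one-out protocol below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Low-level data fusion (sketch): autoscale the HPLC-DAD and ICP-MS blocks
# separately, stack them column-wise, then run 1-nearest-neighbour
# classification. Data are synthetic: 3 "provinces" x 4 samples.
rng = np.random.default_rng(0)
offsets = np.repeat([0.0, 5.0, 10.0], 4)[:, None]     # class separation
hplc = rng.normal(size=(12, 20)) + offsets            # organic-peak block
icp = rng.normal(size=(12, 8)) + offsets              # trace-element block
labels = np.repeat([0, 1, 2], 4)

def autoscale(X):
    return (X - X.mean(0)) / X.std(0)                 # per-column standardise

fused = np.hstack([autoscale(hplc), autoscale(icp)])  # combined data matrix

def knn1(train, train_labels, x):
    d = np.linalg.norm(train - x, axis=1)
    return train_labels[np.argmin(d)]

# Leave-one-out prediction over the fused matrix.
pred = [knn1(np.delete(fused, i, 0), np.delete(labels, i), fused[i])
        for i in range(len(labels))]
accuracy = np.mean(np.array(pred) == labels)
```

Autoscaling each block before concatenation matters: without it, whichever instrument reports larger raw numbers would dominate the distance metric.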
Abstract:
Genome-wide association studies (GWASs) have been successful at identifying single-nucleotide polymorphisms (SNPs) highly associated with common traits; however, a great deal of the heritable variation associated with common traits remains unaccounted for within the genome. Genome-wide complex trait analysis (GCTA) is a statistical method that applies a linear mixed model to estimate the phenotypic variance of complex traits explained by genome-wide SNPs, including those not associated with the trait in a GWAS. We applied GCTA to 8 cohorts containing 7096 case and 19 455 control individuals of European ancestry in order to examine the missing heritability present in Parkinson's disease (PD). We meta-analyzed our initial results to produce robust heritability estimates for PD types across cohorts. Our results identify 27% (95% CI 17-38, P = 8.08E-08) of phenotypic variance associated with all types of PD, 15% (95% CI -0.2 to 33, P = 0.09) with early-onset PD and 31% (95% CI 17-44, P = 1.34E-05) with late-onset PD. This is a substantial increase over the genetic variance identified by top GWAS hits alone (between 3 and 5%) and indicates that there are substantially more risk loci to be identified. Our results suggest that although GWASs are a useful tool for identifying the most common variants associated with complex disease, a great many common variants of small effect remain to be discovered. © Published by Oxford University Press 2012.
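GCTA-style analyses start from a genetic relationship matrix (GRM) built from standardised genotypes; the variance components are then estimated by REML on that matrix. The sketch below shows only the GRM construction on simulated genotypes (the simulated sizes and frequencies are assumptions, and the REML step that yields the actual heritability estimates is omitted).

```python
import numpy as np

# Genetic relationship matrix (sketch): A = Z Z^T / m, where Z holds
# genotype counts standardised per SNP by allele frequency. Simulated data.
rng = np.random.default_rng(1)
n, m = 100, 500                                     # individuals, SNPs
p = rng.uniform(0.05, 0.5, size=m)                  # per-SNP allele frequency
G = rng.binomial(2, p, size=(n, m)).astype(float)   # 0/1/2 genotype counts
Z = (G - 2 * p) / np.sqrt(2 * p * (1 - p))          # standardise each SNP
A = Z @ Z.T / m                                     # n x n relationship matrix
```

The diagonal of A averages to about 1 for unrelated individuals; off-diagonal entries capture the genome-wide similarity that the linear mixed model uses to partition phenotypic variance.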
Abstract:
We present a generalization of the finite volume evolution Galerkin scheme [M. Lukacova-Medvid'ova, J. Saibertova, G. Warnecke, Finite volume evolution Galerkin methods for nonlinear hyperbolic systems, J. Comp. Phys. 183 (2002) 533-562; M. Lukacova-Medvid'ova, K.W. Morton, G. Warnecke, Finite volume evolution Galerkin (FVEG) methods for hyperbolic problems, SIAM J. Sci. Comput. 26 (2004) 1-30] for hyperbolic systems with spatially varying flux functions. Our goal is to develop a genuinely multi-dimensional numerical scheme for wave propagation problems in heterogeneous media. We illustrate our methodology for acoustic waves in a heterogeneous medium, but the results can be generalized to more complex systems. The finite volume evolution Galerkin (FVEG) method is a predictor-corrector method combining a finite volume corrector step with an evolutionary predictor step. In order to evolve fluxes along the cell interfaces we use a multi-dimensional approximate evolution operator. The latter is constructed using the theory of bicharacteristics under the assumption of spatially dependent wave speeds. To approximate the heterogeneous medium a staggered grid approach is used. Several numerical experiments for wave propagation with continuous as well as discontinuous wave speeds confirm the robustness and reliability of the new FVEG scheme.
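The corrector step of any finite volume method updates a cell average from the difference of interface fluxes. As a minimal stand-in for the FVEG evolution-operator fluxes, the sketch below uses a simple local Lax-Friedrichs flux for 1D linear advection with a spatially varying, discontinuous speed c(x); the grid sizes and the flux choice are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

# Finite volume corrector step (sketch): u_i^{n+1} = u_i^n - dt/dx *
# (F_{i+1/2} - F_{i-1/2}), with a local Lax-Friedrichs interface flux for
# u_t + (c(x) u)_x = 0 on a periodic grid with a discontinuous speed.
nx, dt = 200, 0.002
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
c = np.where(x < 0.5, 1.0, 2.0)                 # discontinuous wave speed
u = np.exp(-200.0 * (x - 0.25) ** 2)            # initial pulse
mass0 = u.sum()                                 # conserved by construction

def step(u, c, dt, dx):
    f = c * u                                   # physical flux c(x) u
    fp, up = np.roll(f, -1), np.roll(u, -1)     # right-neighbour values
    a = np.maximum(c, np.roll(c, -1))           # local wave-speed bound
    flux = 0.5 * (f + fp) - 0.5 * a * (up - u)  # interface flux F_{i+1/2}
    return u - dt / dx * (flux - np.roll(flux, 1))

for _ in range(100):                            # CFL: c_max * dt / dx = 0.8
    u = step(u, c, dt, dx)
```

Because the update is written in flux-difference form, the discrete total of u is conserved exactly even across the jump in c, which is the basic property the corrector step provides regardless of how the interface fluxes are predicted.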
Abstract:
A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1 and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)2HP2O7]·4H2O and [Co(en)2H2P3O10]·2H2O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving a 99.5% gravimetric yield in both cases. There are no interferences from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining the pyrophosphate and triphosphate contents of various matrices. In the case of orthophosphate, the proposed method differs from the available methods, such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.
Abstract:
This article deals with the kinetics and mechanism of acrylonitrile (AN) polymerization initiated by Cu(II)-4-anilino 3-pentene 2-one [Cu(II)ANIPO], Cu(II)-4-p-toluedeno 3-pentene 2-one [Cu(II)TPO], and Cu(II)-4-p-nitroanilino 3-pentene 2-one [Cu(II)NAPO] in bulk at 60°C. The polymerization is free radical in nature. The exponent of the initiator (I) is 0.5. The initiation step is complex formation between the chelate and the monomer, followed by decomposition of the intermediate complex giving rise to a free radical and Cu(I). This is substantiated by ultraviolet (UV) and electron spin resonance (ESR) studies. The activation energies and the kinetic and chain transfer constants have also been evaluated.
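An initiator exponent like the 0.5 reported here is read off as the slope of log Rp versus log [I] at fixed monomer concentration. The sketch below demonstrates the fit on synthetic rate data generated with a square-root dependence; the concentrations and the lumped rate constant are hypothetical.

```python
import numpy as np

# Estimating the initiator exponent (sketch): for a rate law
# Rp = k [I]^x [M]^n, x is the slope of log Rp vs log [I] at fixed [M].
# Synthetic data below are generated with x = 0.5.
I = np.array([1e-3, 2e-3, 4e-3, 8e-3])   # initiator concentration, mol/L
k_app = 3.2e-2                           # hypothetical lumped constant
Rp = k_app * np.sqrt(I)                  # synthetic polymerization rates

slope, intercept = np.polyfit(np.log10(I), np.log10(Rp), 1)
```

A slope of 0.5 is the classical signature of bimolecular radical termination, consistent with the free-radical mechanism the abstract describes.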
Abstract:
Space-time codes from complex orthogonal designs (CODs) with no zero entries offer a low peak-to-average power ratio (PAPR) and avoid the problem of switching off antennas. However, square CODs for 2^a antennas with a + 1 complex variables and no zero entries were known only for a <= 3 and for a + 1 = 2^k, k >= 4. In this paper, a method of obtaining no-zero-entry (NZE) square designs, called complex partial-orthogonal designs (CPODs), for 2^(a+1) antennas whenever a certain type of NZE code exists for 2^a antennas is presented. Then, starting from a so-constructed NZE CPOD for n = 2^(a+1) antennas, a construction procedure is given to obtain NZE CPODs for 2n antennas, successively. Compared to CODs, CPODs have slightly higher ML decoding complexity for rectangular QAM constellations and the same ML decoding complexity for other complex constellations. Using the recently constructed NZE CODs for 8 antennas, our method leads to NZE CPODs for 16 antennas. The class of CPODs does not offer full diversity for all complex constellations; for the NZE CPODs presented in the paper, conditions on the signal sets which guarantee full diversity are identified. Simulation results show that the bit error performance of our codes is the same as that of CODs under an average power constraint and superior to CODs under a peak power constraint.
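Why zero entries hurt can be seen by computing the per-antenna PAPR over a code block. The sketch below compares the 2x2 Alamouti COD (no zero entries) with a toy diagonal design containing zeros; the toy matrix is an illustrative assumption, not a design from the paper.

```python
import numpy as np

# Per-antenna PAPR (sketch): peak symbol power over average symbol power,
# maximised over antennas. Rows are time slots, columns are antennas.
def papr(codeword):
    power = np.abs(codeword) ** 2
    return np.max(power.max(axis=0) / power.mean(axis=0))

s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)    # unit-power QPSK
alamouti = np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])  # NZE design
with_zero = np.array([[s1, 0.0], [0.0, s2]])             # toy design with zeros

papr_nze = papr(alamouti)    # constant envelope on each antenna
papr_zero = papr(with_zero)  # each antenna idles half the time
```

With zero entries each antenna must transmit its energy in fewer slots (or switch off), doubling the peak power here; NZE designs keep every antenna active in every slot.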
Abstract:
Since 1997 the Finnish Jabal Haroun Project (FJHP) has studied the ruins of the monastery and pilgrimage complex (Gr. oikos) of Aaron, located on a plateau of the Mountain of Prophet Aaron, Jabal an-Nabi Harûn, ca. 5 km south-west of the UNESCO World Heritage site of Petra in Jordan. This M.A. thesis studies the state of conservation and the damaging processes affecting the stone structures of the site. The chapel was chosen as an example, as it represents the phasing and building materials of the entire site. The aim of this work is to act as a preliminary study for the planning of long-term conservation at the site. The research is empirical in nature. The condition of the stones in the chapel walls was mapped using the Illustrated Glossary on Stone Deterioration by the ICOMOS International Scientific Committee for Stone, which combines several standards and systems of damage mapping used in the field. Climatic conditions (temperature and RH %) were monitored for one year (9/2005-8/2006) using a HOBO Microstation datalogger, and the measurements were compared with contemporary measurements from the nearest weather station, in Wadi Musa. Salts in the stones were studied by taking samples from the stone surfaces by scraping and with the “paper pulp” method, i.e. a poultice of wet cellulose fibre (Arbocel BC1000), and analysing which main types of salts were present in the samples. The climatic conditions on the mountain were expected to change rapidly and to differ clearly from conditions in the neighbouring areas. The rapid changes were confirmed, but the values did not differ as much as expected from those nearby: the 12 months monitored had average temperatures and were somewhat drier than average. Earlier research in the area has shown that the geological properties of the stone material influence its deterioration. The damage mapping showed clearly that salts are also a major cause of stone weathering.
The salt samples contained several salt combinations, whose behavior in the extremely unstable climatic conditions is difficult to predict. Detailed mapping and regular monitoring, especially of the structures that are going to remain exposed, is recommended in this work.
Abstract:
The free radical polymerization of acrylonitrile (AN) initiated by Cu(II) 4-anilino 3-pentene 2-one [Cu(II) ANIPO], Cu(II) 4-p-toluedeno 3-pentene 2-one [Cu(II) TPO], and Cu(II) 4-p-nitroanilino 3-pentene 2-one [Cu(II) NAPO] was studied in benzene at 50 and 60°C and in carbon tetrachloride (CCl4), dimethyl sulfoxide (DMSO), and methanol (MeOH) at 60°C. Although the polymerization proceeded in a heterogeneous phase, it followed the kinetics of a homogeneous process. The monomer exponents were 2 at the two temperatures and in the different solvents. The square-root dependence of Rp on initiator concentration and the higher monomer exponents accounted for a 1:2 complex formation between the chelate and the monomer. The complex formation was shown by ultraviolet (UV) study. The activation energies, kinetic constants, and chain transfer constants were also evaluated.
Abstract:
We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations, such as planning and control.
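One standard way to convert count-style enterprise data (surveys, incident logs) into usable Bayesian probability form is a Dirichlet-multinomial posterior, which yields well-defined probabilities even for sparse or incomplete data. The categories, counts, and prior below are hypothetical illustrations, not the paper's actual conversion scheme.

```python
# Converting survey counts to probability form (sketch): posterior means of
# a Dirichlet-multinomial model with a symmetric prior. Zero-count
# categories still receive nonzero probability, which matters for
# incomplete data.
def posterior_probs(counts, alpha=1.0):
    """Dirichlet posterior mean: (count + alpha) / (total + alpha * K)."""
    total = sum(counts.values()) + alpha * len(counts)
    return {k: (v + alpha) / total for k, v in counts.items()}

# e.g. a hypothetical stakeholder survey on service reliability
survey = {"high": 18, "medium": 7, "low": 0}
probs = posterior_probs(survey)
```

Probabilities in this form can then be entered directly as conditional probability table rows in a Bayesian network node alongside expert-elicited values.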
Abstract:
Introduction Systematic review authors are increasingly directing their attention to not only ensuring the robust processes and methods of their syntheses, but also to facilitating the use of their reviews by public health decision-makers and practitioners. This latter activity is known by several terms including knowledge translation, for which one definition is a ‘dynamic and iterative process that includes synthesis, exchange and ethically sound application of knowledge’.1 Unfortunately—and despite good intentions—the successful translation of knowledge has at times been inhibited by the failure of reviews to meet the needs of decision-makers, and the limitations of the traditional avenues by which reviews are disseminated.2 Encouraging the utilization of reviews by the public health workforce is a complex challenge. An unsupportive culture within the workforce, a lack of experience in assessing evidence, the use of traditional academic language in communication and the lack of actionable messages can all act as barriers to successful knowledge translation.3 Improving communication through developing strategies that include summaries, podcasts, webinars and translational tools which target key decision-makers such as HealthEvidence.org should be considered by authors as promising actions to support the uptake of reviews into practice.4,5 Earlier work has also suggested that to better meet the research evidence needs of public health professionals, authors should aim to produce syntheses that are actionable, relevant and timely.2 Further, review authors must interact more with those who will, or could use their reviews; particularly when determining the scope and questions to which a review will be directed.2 Unfortunately, individual engagement, ideal for examining complex issues and addressing particular concerns, is often difficult, particularly when attempting to reach large groups where for efficiency purposes, the strategy tends to be didactic, ‘lecturing’ 
and therefore less likely to change attitudes or encourage higher order thinking.6 …