971 results for ONE-COMPONENT


Relevance: 20.00%

Abstract:

A one-step, clean, and efficient conversion of arylaldehydes, ketones, and ketals into the corresponding hydrocarbons under ionic hydrogenation conditions, employing sodium cyanoborohydride in the presence of two to three equivalents of BF₃·OEt₂, is described.

Relevance: 20.00%

Abstract:

Electrochemical capacitors are electrochemical devices with fast and highly reversible charge-storage and discharge capabilities. The devices are attractive for energy storage, particularly in applications with high-power requirements. Electrochemical capacitors employ two electrodes and an aqueous or a non-aqueous electrolyte, in either liquid or solid form; the latter provides the advantages of compactness, reliability, freedom from leakage of any liquid component, and a large operating potential-window. One class of solid electrolytes used in capacitors is polymer-based; these generally consist of dry solid-polymer electrolytes, gel-polymer electrolytes, or composite-polymer electrolytes. Dry solid-polymer electrolytes suffer from poor ionic conductivity, between 10⁻⁸ and 10⁻⁷ S cm⁻¹ under ambient conditions, but are safer than gel-polymer electrolytes, which exhibit a high conductivity of ca. 10⁻³ S cm⁻¹ under ambient conditions. These polymer-based electrolytes have the advantage of a wide potential window of ca. 4 V and hence can provide high energy density. Gel-polymer electrolytes are generally prepared using organic solvents that are environmentally harmful. Hence, replacing the organic solvents with water in gel-polymer electrolytes is desirable, and doing so also reduces the device cost substantially. Water-containing gel-polymer electrolytes, called hydrogel-polymer electrolytes, are, however, limited to a low operating potential-window of only about 1.23 V. This article reviews salient features of electrochemical capacitors employing hydrogel-polymer electrolytes.
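The trade-off between the 1.23 V hydrogel window and the ca. 4 V organic-electrolyte window can be made concrete with the ideal-capacitor energy formula E = ½CV², since stored energy grows with the square of the potential window. A minimal sketch; the 100 F capacitance is an arbitrary illustrative value, not from the article:

```python
# Stored energy of an ideal capacitor: E = (1/2) * C * V^2.
# The two window voltages are the values quoted in the abstract;
# the capacitance is an arbitrary illustrative number.

def capacitor_energy(capacitance_f, window_v):
    """Energy (joules) stored in an ideal capacitor charged to window_v volts."""
    return 0.5 * capacitance_f * window_v ** 2

c = 100.0                                   # F, illustrative
e_hydrogel = capacitor_energy(c, 1.23)      # hydrogel-polymer electrolyte window
e_organic = capacitor_energy(c, 4.0)        # organic/dry polymer electrolyte window

# A ~4 V window stores roughly (4 / 1.23)^2 ≈ 10.6x the energy of a 1.23 V one.
print(e_organic / e_hydrogel)
```

This is why the narrow water window is the central limitation the review discusses, despite the cost and safety advantages of hydrogel electrolytes.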

Relevance: 20.00%

Abstract:

Percutaneous coronary interventions (PCIs) have increased by 50% in Australia, yet vascular and cardiac complications remain ongoing outcome issues for patients. Managing complications is confounded by reduced length of patient stay, yet it is an integral component of a cardiac nurse's scope of practice. The aim of this study was to document in-hospital and out-of-hospital vascular and cardiac complications for twelve months after patient discharge following PCI. Prospective data were collected from the hospital angioplasty database for 1089 consecutive patients who had PCI procedures from 1 January 2005 to 31 December 2006. In-hospital vascular complications were reported by 391 (35%) of the 1089 patients following PCI. Of these, 22.4% had haemorrhage only and 7.1% haematoma only. In-hospital cardiac complications comprised one death (0.09%) following PCI, three deaths (0.27%) during the same admission, and no incidence of myocardial infarction or bypass surgery. Patients who had PCI in 2005 (n=525) were followed up by telephone at one and twelve months after discharge. Surprisingly, ongoing vascular outcomes were noted, with a 2.5% incidence at one month and 4% at 12 months. Cardiac complications were also identified: 51 (9.7%) patients required readmission for a repeat angiogram, 19 (3.6%) a repeat PCI, and 7 (1.3%) underwent bypass surgery. This review highlights that vascular and cardiac problems are ongoing issues for PCI patients both in and out of hospital. The results suggest that cardiac nurses should focus more on improving the monitoring and discharge care of patients and families for recovery after PCI.

Relevance: 20.00%

Abstract:

The light distribution in the disks of many galaxies is 'lopsided', with a spatial extent much larger along one half of the galaxy than the other, as seen in M101. Recent observations show that the stellar disk in a typical spiral galaxy is significantly lopsided, indicating asymmetry in the disk mass distribution. The mean amplitude of lopsidedness is 0.1, measured as the Fourier amplitude of the m=1 component normalized to the average value. Thus lopsidedness is common, and hence it is important to understand its origin and dynamics. This is a new and exciting area in galactic structure and dynamics, in contrast to the extensively studied topic of bars and two-armed spirals (m=2). Lopsidedness is ubiquitous and occurs in a variety of settings and tracers: it is seen in both stars and gas, in the outer disk and the central region, and in both field and group galaxies. The lopsided amplitude is higher by a factor of two for galaxies in a group. Lopsidedness has a strong impact on the dynamics of the galaxy, its evolution, the star formation in it, the growth of the central black hole, and nuclear fuelling. We present here an overview of the observations that measure the lopsided distribution, as well as the theoretical progress made so far in understanding its origin and properties. The physical mechanisms studied for its origin include tidal encounters, gas accretion, and a global gravitational instability. The related open, challenging problems in this emerging area are discussed.
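The amplitude quoted above, A₁, is the m=1 Fourier coefficient of the azimuthal surface-brightness distribution normalized by the m=0 (average) term. A minimal numerical sketch; the synthetic brightness profile and its 20% modulation are invented for illustration, not data from the review:

```python
import numpy as np

# Azimuthal surface-brightness profile at some fixed radius; the 20%
# m=1 modulation is a synthetic example of a lopsided disk.
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
brightness = 1.0 + 0.2 * np.cos(phi - 0.5)

def fourier_amplitude(intensity, phi, m):
    """|A_m|: magnitude of the m-th azimuthal Fourier component.

    A_m = (1/N) * sum I(phi_k) exp(-i m phi_k); harmonics m >= 1
    pick up a factor of 2 relative to the mean (m = 0) term."""
    c = np.mean(intensity * np.exp(-1j * m * phi))
    return np.abs(c) if m == 0 else 2.0 * np.abs(c)

a0 = fourier_amplitude(brightness, phi, 0)   # average value
a1 = fourier_amplitude(brightness, phi, 1)   # lopsided (m=1) amplitude
print(a1 / a0)                               # recovers the input modulation, ~0.2
```

A measured A₁/A₀ of 0.1, as in the abstract, corresponds to a 10% azimuthal modulation of the disk light at that radius.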

Relevance: 20.00%

Abstract:

The development of techniques for scaling up classifiers so that they can be applied to problems with large datasets of training examples is one of the objectives of data mining. Recently, AdaBoost has become popular in the machine learning community thanks to its promising results across a variety of applications. However, training AdaBoost on large datasets is a major problem, especially when the dimensionality of the data is very high. This paper discusses the effect of high dimensionality on the training process of AdaBoost. Two preprocessing options for reducing dimensionality, namely principal component analysis and random projection, are briefly examined. Random projection subject to a probabilistic length-preserving transformation is explored further as a computationally light preprocessing step. The experimental results obtained demonstrate the effectiveness of the proposed training process for handling high-dimensional large datasets.
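The probabilistic length-preserving property mentioned above is the Johnson–Lindenstrauss guarantee: a Gaussian matrix scaled by 1/√k approximately preserves norms and pairwise distances. A sketch in plain NumPy; the dimensions and the synthetic data are illustrative choices, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 5000, 1000, 50            # original dim, reduced dim, sample count (illustrative)
X = rng.standard_normal((n, d))     # synthetic high-dimensional data

# Gaussian random projection; the 1/sqrt(k) scaling makes the map
# length-preserving in expectation (Johnson-Lindenstrauss).
R = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ R                           # cheap preprocessing step before boosting

# Squared norms survive the projection to within a small relative error.
orig = np.sum(X ** 2, axis=1)
proj = np.sum(Y ** 2, axis=1)
rel_err = np.abs(proj - orig) / orig
print(rel_err.max())                # concentrates around sqrt(2/k) ~ 0.045 here
```

Unlike PCA, no eigendecomposition of the data is needed, which is what makes this a computationally light option for very high-dimensional inputs.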

Relevance: 20.00%

Abstract:

The aim of the present study was to draw inferences regarding the properties of the single cells responsible for co-operative behaviour in the slug of the soil amoeba Dictyostelium discoideum. The slug is an integrated multicellular mass formed by the aggregation of starved cells. The amoebae comprising the slug differentiate according to their spatial locations relative to one another, implying that, as in other regulative embryos, they must be in mutual communication. We have previously shown that one manifestation of this communication is the time taken for the anteriormost fragment of the slug, the tip, to regenerate in slugs that have been rendered tipless by amputation. We present results of tip-regeneration experiments performed on genetically mosaic slugs. By comparing the mosaics with their component pure genotypes, we were able to discriminate between a set of otherwise equally plausible modes of intercellular signalling. Neither a 'pacemaker' model, in which the overall rate of tip regeneration is determined by the cell with the highest frequency of autonomous oscillation, nor an 'independent-particle' model, in which the rate of regeneration is the arithmetic average of independent cell-dependent rates, is in quantitative accord with our findings. Our results are best explained by a form of signalling that operates by means of cell-to-cell relay. Therefore intercellular communication seems to be essential for tip regeneration.
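The two rejected models make distinct quantitative predictions for a mosaic slug built from two genotypes whose pure-genotype regeneration rates are known. A sketch of those predictions; the rate values and mixing fraction are hypothetical, and the relay model, which requires an explicit spatial simulation, is not shown:

```python
# Predicted tip-regeneration rate of a genetically mosaic slug from the
# pure-genotype rates. Rates (1/hours) and mixing fraction are hypothetical.
rate_a, rate_b = 1.0 / 4.0, 1.0 / 8.0    # pure-genotype regeneration rates
frac_a = 0.5                             # fraction of genotype-A cells in the mosaic

# 'Pacemaker' model: the fastest autonomous oscillator sets the pace,
# so the mosaic regenerates at the faster of the two rates.
rate_pacemaker = max(rate_a, rate_b)

# 'Independent-particle' model: the mosaic rate is the arithmetic
# average of the independent cell-dependent rates, weighted by composition.
rate_independent = frac_a * rate_a + (1 - frac_a) * rate_b

print(1 / rate_pacemaker, 1 / rate_independent)  # predicted times: 4.0 h vs ~5.33 h
```

Comparing measured mosaic regeneration times against such predictions is how the study discriminates between the candidate signalling modes.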

Relevance: 20.00%

Abstract:

The structure and conformation of a second crystalline modification of 19-nortestosterone have been determined by X-ray methods. Mr = 274, monoclinic, P2₁, a = 9.755(2), b = 11.467(3), c = 14.196(3) Å, β = 101.07(2)°, V = 1558.4(8) Å³, Z = 4, Dx = 1.168 g cm⁻³, Mo Kα, λ = 0.7107 Å, μ = 0.80 cm⁻¹, F(000) = 600, T = 300 K. R = 0.060 for 2158 observed reflections. The two molecules in the asymmetric unit show significant differences in the A-ring conformation from that of the previously reported form of the title compound [Precigoux, Busetta, Courseille & Hospital (1975). Acta Cryst. B31, 1527-1532]. The 1α,2β-half-chair conformation of the A ring increases its conformational freedom compared with testosterone.
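The monoclinic cell volume and the calculated density can be cross-checked from the constants above, using V = abc·sin β and Dx = Z·Mr/(N_A·V):

```python
import math

# Cell constants from the abstract (monoclinic: alpha = gamma = 90 deg).
a, b, c = 9.755, 11.467, 14.196           # Angstrom
beta = math.radians(101.07)
Z, Mr = 4, 274.0                          # formula units per cell, molar mass (g/mol)
N_A = 6.02214e23                          # Avogadro's number (1/mol)

V = a * b * c * math.sin(beta)            # A^3; should reproduce 1558.4(8)
Dx = Z * Mr / (N_A * V * 1e-24)           # g/cm^3 (1 A^3 = 1e-24 cm^3); reproduces 1.168

print(round(V, 1), round(Dx, 3))
```

Both derived values agree with the reported crystal data, confirming the internal consistency of the reconstructed constants.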

Relevance: 20.00%

Abstract:

Conventional thinking holds that increased energy consumption is a prerequisite for economic and social development. This belief, together with the prospect of dwindling global petroleum supplies and the high costs of expanding energy supply generally, leads many to believe that it is not feasible to improve living standards substantially in the developing countries. But by shifting to high-quality energy carriers and by exploiting cost-effective opportunities for more efficient energy use, it would be possible to satisfy basic human needs and to provide considerable further improvements in living standards without significantly increasing per-capita energy use above the present level.

Relevance: 20.00%

Abstract:

Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicates a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable to studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
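The additive property described above can be illustrated with the simplest wavelet multiresolution analysis, a Haar decomposition in plain NumPy. This is only a sketch of the idea: the paper's actual wavelet filters and component definitions may differ, and the series below is synthetic:

```python
import numpy as np

def haar_components(x):
    """Split a length-2^L series into additive components [D1, ..., DL, S_L].

    D_j captures fluctuations at scale 2^j samples and S_L is the final
    smooth; by construction the components sum exactly to the input."""
    x = np.asarray(x, dtype=float)
    n = x.size
    assert n > 1 and n & (n - 1) == 0, "length must be a power of two"
    comps, approx, span = [], x, 1
    while approx.size > 1:
        pairs = approx.reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / 2.0
        # expand the detail to full length: +d over the first half of
        # each block, -d over the second half
        comps.append(np.repeat(np.stack([detail, -detail], axis=1).ravel(), span))
        approx = pairs.mean(axis=1)
        span *= 2
    comps.append(np.repeat(approx, span))   # the smooth (trend) component
    return comps

# A synthetic series with a slow cycle plus noise, standing in for a
# temperature record at one depth.
t = np.arange(1024)
series = 10 + 3 * np.sin(2 * np.pi * t / 512) \
    + np.random.default_rng(2).normal(0, 0.5, 1024)
parts = haar_components(series)
print(np.allclose(sum(parts), series))      # additivity holds exactly
```

Grouping the D_j by scale (hours for daily, weeks-to-months for subannual, the remainder for annual) gives components in the spirit of the DSA decomposition.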

Relevance: 20.00%

Abstract:

A novel method is proposed to treat the problem of the random resistance of a strictly one-dimensional conductor with static disorder. For the probability distribution of the transfer matrix of the conductor, we propose the distribution of maximum information-entropy, constrained by the following physical requirements: (1) flux conservation, (2) time-reversal invariance, and (3) scaling, with the length of the conductor, of the two lowest cumulants of ζ, where the dimensionless resistance is sinh²ζ. The preliminary results discussed in the text are in qualitative agreement with those obtained by sophisticated microscopic theories.

Relevance: 20.00%

Abstract:

Poor pharmacokinetics is one of the reasons for the withdrawal of drug candidates from clinical trials. There is an urgent need for investigating in vitro ADME (absorption, distribution, metabolism and excretion) properties and recognising unsuitable drug candidates as early as possible in the drug development process. Current throughput of in vitro ADME profiling is insufficient because effective new synthesis techniques, such as drug design in silico and combinatorial synthesis, have vastly increased the number of drug candidates. Assay technologies for larger sets of compounds than are currently feasible are critically needed. The first part of this work focused on the evaluation of the cocktail strategy in studies of drug permeability and metabolic stability. N-in-one liquid chromatography-tandem mass spectrometry (LC/MS/MS) methods were developed and validated for the multiple-component analysis of samples in cocktail experiments. Together, cocktail dosing and LC/MS/MS were found to form an effective tool for increasing throughput. First, cocktail dosing, i.e. the use of a mixture of many test compounds, was applied in permeability experiments with Caco-2 cell culture, which is a widely used in vitro model for small intestinal absorption. A cocktail of 7-10 reference compounds was successfully evaluated for standardization and routine testing of the performance of Caco-2 cell cultures. Secondly, the cocktail strategy was used in metabolic stability studies of drugs with UGT isoenzymes, which are among the most important phase II drug-metabolizing enzymes. The study confirmed that the determination of intrinsic clearance (Clint) as a cocktail of seven substrates is possible. The LC/MS/MS methods that were developed were fast and reliable for the quantitative analysis of a heterogeneous set of drugs from Caco-2 permeability experiments and the set of glucuronides from in vitro stability experiments.
The performance of a new ionization technique, atmospheric pressure photoionization (APPI), was evaluated through comparison with electrospray ionization (ESI), with both techniques used for the analysis of Caco-2 samples. Like ESI, APPI proved to be a reliable technique for the analysis of Caco-2 samples, and it was even more flexible than ESI because of its wider linear dynamic range. The second part of the experimental study focused on metabolite profiling. Different mass spectrometric instruments and commercially available software tools were investigated for profiling metabolites in urine and hepatocyte samples. All the instruments tested (triple quadrupole, quadrupole time-of-flight, ion trap) exhibited both strengths and weaknesses in searching for and identifying expected and non-expected metabolites. Although current profiling software is helpful, it is still insufficient. Thus a time-consuming, largely manual approach is still required for metabolite profiling from complex biological matrices.
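Two standard quantities behind the experiments above are the apparent permeability from a Caco-2 assay, Papp = (dQ/dt)/(A·C0), and the intrinsic clearance from substrate-depletion half-life, Clint = (ln 2 / t½)·(V/protein). A sketch with invented example numbers; the study's actual assay conditions are not given here:

```python
import math

def papp(dq_dt, area_cm2, c0):
    """Apparent permeability (cm/s): transport rate / (filter area * donor conc.).

    dq_dt in amount/s, c0 in the same amount units per mL."""
    return dq_dt / (area_cm2 * c0)

def clint(half_life_min, incubation_volume_ul, protein_mg):
    """Intrinsic clearance (uL/min/mg protein) from a depletion half-life."""
    return (math.log(2) / half_life_min) * incubation_volume_ul / protein_mg

# Hypothetical example values, chosen only to show the arithmetic.
p = papp(dq_dt=1.0e-6, area_cm2=1.12, c0=1.0e-2)   # ~8.9e-5 cm/s: high permeability
cl = clint(half_life_min=15.0, incubation_volume_ul=500.0, protein_mg=0.5)
print(p, cl)
```

In a cocktail experiment these formulas are simply applied compound by compound, with the LC/MS/MS quantitation supplying each substrate's dQ/dt or depletion half-life.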

Relevance: 20.00%

Abstract:

In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real-time analysis of the process data enables better control of the processes to obtain a high-quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy, showing hydrate formation in the case of theophylline and nitrofurantoin. For erythromycin dihydrate, formation of the isomorphic dehydrate was critical; thus, the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process. Multivariate data analysis (principal component analysis) enabled detection of dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating plate device was tested to provide an insight into film coating. The process was monitored using NIR spectroscopy. A calibration model, using partial least squares regression, was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings, TPI was used to create a 3-D image of a coated tablet. With this technique it was possible to determine coating-layer thickness, distribution, reproducibility, and uniformity. In addition, it was possible to localise defects in either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of the physico-chemical properties of drugs and drug products during and after processing. They additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
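Principal component analysis of the kind applied above to in-line NIR spectra can be sketched in plain NumPy. The spectra below are synthetic two-state data standing in for hydrate vs. dehydrate forms; real NIR preprocessing (e.g. SNV or derivative spectra) is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": two process states differing by a band shift,
# standing in for hydrate vs. dehydrate forms. 40 spectra, 200 channels.
wavelength = np.linspace(0.0, 1.0, 200)
band_a = np.exp(-((wavelength - 0.40) / 0.05) ** 2)   # state A band
band_b = np.exp(-((wavelength - 0.55) / 0.05) ** 2)   # state B band
labels = np.array([0] * 20 + [1] * 20)
spectra = np.where(labels[:, None] == 0, band_a, band_b) \
    + 0.01 * rng.standard_normal((40, 200))

# PCA by singular value decomposition of the mean-centred data matrix.
centred = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
scores = U * s                    # sample scores; column 0 is PC1

# PC1 separates the two states: the score sign tracks the process state.
print(scores[labels == 0, 0].mean(), scores[labels == 1, 0].mean())
```

In a drying run the PC1 score plotted against time (or temperature) is what reveals the onset of the transformation, as with the dehydrate formation above 45°C reported here.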

Relevance: 20.00%

Abstract:

This study reports a diachronic corpus investigation of common-number pronouns used to convey unknown or otherwise unspecified reference. The study charts agreement patterns in these pronouns in various diachronic and synchronic corpora. The objective is to provide baseline data on variant frequencies and distributions in the history of English, as there are no previous systematic corpus-based observations on this topic. This study seeks to answer the questions of how pronoun use is linked with the overall typological development of English and how the pronouns' diachronic evolution is embedded in the linguistic and social structures in which they are used. The theoretical framework draws on corpus linguistics and historical sociolinguistics, grammaticalisation, diachronic typology, and multivariate analysis for modelling sociolinguistic variation. The method employs quantitative corpus analyses of two main electronic corpora, one from Modern English and the other from Present-day English. The Modern English material is the Corpus of Early English Correspondence, and the time frame covered is 1500-1800. The written component of the British National Corpus is used in the Present-day English investigations. In addition, the study draws supplementary data from other electronic corpora. The material is used to compare the frequencies and distributions of common-number pronouns between these two time periods. The study limits the common-number uses to two subsystems, one anaphoric to grammatically singular antecedents and one cataphoric, in which the pronoun is followed by a relative clause. Various statistical tools are used to process the data, ranging from cross-tabulations to multivariate VARBRUL analyses in which the effects of sociolinguistic and systemic parameters are assessed to model their impact on the dependent variable.
This study shows how one pronoun type has extended its uses in both subsystems, an increase linked with grammaticalisation and with changes in other English pronouns through the centuries. The variationist sociolinguistic analysis charts how grammaticalisation in the subsystems is embedded in the linguistic and social structures in which the pronouns are used. The study suggests a scale of two statistical generalisations of the various sociolinguistic factors which contribute to grammaticalisation and its embedding at various stages of the process.

Relevance: 20.00%

Abstract:

Information structure and Kabyle constructions: three sentence types in the Construction Grammar framework. The study examines three Kabyle sentence types and their variants. These sentence types have been chosen because they code the same state of affairs but have different syntactic structures. The sentence types are the Dislocated sentence, the Cleft sentence, and the Canonical sentence. I argue, first, that a proper description of these sentence types should include information structure and, second, that a description which takes information structure into account is possible in the Construction Grammar framework. The study thus constitutes a testing ground for the applicability of Construction Grammar to a lesser-known language, notably because the differentiation between the three types of sentences cannot be made without information-structure categories; consequently, these categories must also be integrated into the grammatical description. The information-structure analysis is based on the model outlined by Knud Lambrecht. In that model, information structure is considered a component of sentence grammar that ensures pragmatically correct sentence forms. The work starts with an examination of the three sentence types and the analyses that have been done in André Martinet's functional-grammar framework. This introduces the sentence types chosen as the object of study and discusses the difficulties related to their analysis. After a presentation of the state of the art, including earlier and more recent models, the principles and notions of Construction Grammar and of Lambrecht's model are introduced and explicated. The information-structure analysis is presented in three chapters, each treating one of the three sentence types. The analyses are based on spoken-language data and elicitation. Prosody is included in the study when a syntactic structure seems to code two different focus structures; in such cases, it is pertinent to investigate whether these are coded by prosody. The final chapter presents the constructions that have been established and the problems encountered in analysing them. It also discusses the impact of the study on the theories used and on the theory of syntax in general.

Relevance: 20.00%

Abstract:

For a multiarmed bandit problem with exponential discounting, the optimal allocation rule is defined by a dynamic allocation index defined for each arm on its state space. The index for an arm is equal to the expected immediate reward from the arm, with an upward adjustment reflecting any uncertainty about the prospects of obtaining rewards from the arm and the possibility of resolving those uncertainties by selecting that arm. Thus the learning component of the index is defined to be the difference between the index and the expected immediate reward. For two arms with the same expected immediate reward, the learning component should be larger for the arm whose reward rate is more uncertain. This is shown to be true for arms based on independent samples from a fixed distribution with an unknown parameter in the cases of Bernoulli and normal distributions, and similar results are obtained in other cases.
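The dynamic allocation (Gittins) index for a Bernoulli arm can be computed numerically by calibrating the arm against a standard arm paying a known rate lam forever, using finite-horizon value iteration over Beta(a, b) belief states. A sketch; the discount factor, horizon, and tolerance are illustrative choices, not values from the paper:

```python
def value_of_playing(a, b, lam, gamma=0.9, horizon=100):
    """Value of the Bernoulli arm in belief state Beta(a, b) when the
    alternative is retiring to a standard arm that pays lam forever."""
    safe = lam / (1.0 - gamma)        # value of retiring immediately
    V = [safe] * (horizon + 1)        # terminal approximation at depth `horizon`
    for d in range(horizon - 1, -1, -1):
        # states at depth d: i successes and d - i failures observed so far
        V = [
            max(
                safe,
                (a + i) / (a + b + d) * (1.0 + gamma * V[i + 1])
                + (b + d - i) / (a + b + d) * gamma * V[i],
            )
            for i in range(d + 1)
        ]
    return V[0]

def gittins_bernoulli(a, b, gamma=0.9, horizon=100, tol=1e-3):
    """Gittins index: the lam at which playing and retiring break even."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if value_of_playing(a, b, lam, gamma, horizon) > lam / (1.0 - gamma) + 1e-12:
            lo = lam                  # arm still worth playing: index exceeds lam
        else:
            hi = lam
    return 0.5 * (lo + hi)

g_uncertain = gittins_bernoulli(1, 1)     # Beta(1, 1): mean 0.5, very uncertain
g_certain = gittins_bernoulli(10, 10)     # Beta(10, 10): mean 0.5, less uncertain
# Both indices exceed the common mean 0.5, and the learning component
# (index minus mean) is larger for the more uncertain arm.
print(g_uncertain, g_certain)
```

The printed pair illustrates the abstract's claim directly: equal expected immediate rewards, but the arm with the more uncertain reward rate carries the larger upward adjustment.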