920 results for Linguistic variables
Abstract:
The overall aim of the project was to study the influence of process variables on the distribution of a model active pharmaceutical ingredient (API) during fluidised melt granulation of pharmaceutical granules, with a view to optimising product characteristics. Granules were produced using common pharmaceutical excipients: lactose monohydrate, with polyethylene glycol (PEG 1500) as a meltable binder. Methylene blue was used as a model API. Empirical models relating the process variables to granule properties such as granule mean size, product homogeneity and granule strength were developed using a design of experiments approach. Fluidising air velocity and fluidising air temperature were shown to strongly influence the product properties. Optimisation studies showed that strong granules with a homogeneous distribution of the active ingredient can be produced at high fluidising air velocities and temperatures.
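As a rough illustration of the kind of empirical model used in such design-of-experiments work, the sketch below fits a quadratic response surface to coded factor levels. The factor coding, design points and response values are hypothetical stand-ins, not data from the study.

```python
# Minimal sketch of a design-of-experiments response-surface fit.
# Factor levels and responses below are hypothetical, not from the study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Coded levels (-1, 0, +1) for fluidising air velocity and temperature
X = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial points
    [0, 0], [0, 0], [0, 0],               # centre points
])
# Hypothetical response: granule mean size (micrometres)
y = np.array([310., 420., 355., 520., 400., 405., 398.])

# Quadratic response surface: main effects, interaction and squared terms
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Predict granule size at a new operating point (coded units)
new_point = poly.transform(np.array([[0.5, 0.8]]))
print("predicted mean size:", model.predict(new_point)[0])
```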
On the complexity of solving polytree-shaped limited memory influence diagrams with binary variables
Abstract:
Influence diagrams are intuitive and concise representations of structured decision problems. When the problem is non-Markovian, an optimal strategy can be exponentially large in the size of the diagram. We can avoid this inherent intractability by constraining the size of admissible strategies, giving rise to limited memory influence diagrams. A natural question is then how small strategies need to be to enable efficient optimal planning. Arguably, the smallest strategies one can conceive simply prescribe an action for each time step, without considering past decisions or observations. Previous work has shown that finding such optimal strategies even for polytree-shaped diagrams with ternary variables and a single value node is NP-hard, but the case of binary variables was left open. In this paper we address that case, first noting that optimal strategies can be obtained in polynomial time for polytree-shaped diagrams with binary variables and a single value node. We then show that the same problem is NP-hard if the diagram has multiple value nodes. These two results close the fixed-parameter complexity analysis of optimal strategy selection in influence diagrams parametrized by the shape of the diagram, the number of value nodes and the maximum variable cardinality.
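To make the notion of these smallest strategies concrete, the toy sketch below enumerates all memoryless strategies of a tiny two-decision, two-value-node diagram and evaluates their expected utility by brute force. The variables, probabilities and utilities are invented for illustration; the paper's polynomial-time algorithm and hardness construction are not reproduced here.

```python
# Toy illustration of memoryless strategies in a limited memory influence
# diagram: each strategy fixes one action per decision node and ignores
# observations.  All numbers are invented for illustration.
from itertools import product

p_s1 = {0: 0.4, 1: 0.6}                 # prior over chance variable S1
p_s2_given_s1 = {0: {0: 0.7, 1: 0.3},   # chance variable S2 given S1
                 1: {0: 0.2, 1: 0.8}}

def u1(s1, d1):                          # first value node
    return 10.0 if s1 == d1 else 0.0

def u2(s2, d2):                          # second value node
    return 6.0 if s2 == d2 else 1.0

best = None
for d1, d2 in product([0, 1], repeat=2):        # all four memoryless strategies
    eu = 0.0
    for s1 in (0, 1):
        for s2 in (0, 1):
            pr = p_s1[s1] * p_s2_given_s1[s1][s2]
            eu += pr * (u1(s1, d1) + u2(s2, d2))
    if best is None or eu > best[0]:
        best = (eu, (d1, d2))

print("best memoryless strategy:", best[1], "expected utility:", best[0])
```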
Abstract:
The current study sought to assess the influence of three common variables on the outcome of TiO₂ photocatalysis experiments with bacteria. Factors considered were (a) the ability of the test species to withstand osmotic pressure, (b) the incubation period of agar plates used for colony counts following photocatalysis and (c) the chemical nature of the suspension medium used for bacteria and TiO₂. Staphylococcus aureus, Escherichia coli, Salmonella ser. Typhimurium and Pseudomonas aeruginosa were found to vary greatly in their ability to withstand osmotic pressure, raising the possibility that osmotic lysis may contribute to loss of viability in some photocatalytic disinfection studies. Agar plate incubation time was also found to influence results, as bacteria treated with UV light only grew more slowly than those treated with a combination of UV and TiO₂. The chemical nature of the suspension medium had a particularly pronounced effect upon results. The greatest antibacterial activity was detected when aqueous sodium chloride solution was used, with ∼1 × 10⁶ CFU mL⁻¹ of S. aureus completely killed after 60 min. Moderate activity was observed when distilled water was employed, with bacteria killed after 2 h 30 min, and no antibacterial activity at all was detected when aqueous tryptone solution was used. Interestingly, the antibacterial activity of UV light on its own appeared to be much reduced in experiments where aqueous sodium chloride was employed instead of distilled water.
Abstract:
This paper presents the results of an investigation into the utility of remote sensing (RS) using meteorological satellite sensors and spatial interpolation (SI) of data from meteorological stations for the prediction of spatial variation in monthly climate across continental Africa in 1990. Information from the Advanced Very High Resolution Radiometer (AVHRR) of the National Oceanic and Atmospheric Administration's (NOAA) polar-orbiting meteorological satellites was used to estimate land surface temperature (LST) and atmospheric moisture. Cold cloud duration (CCD) data derived from the High Resolution Radiometer (HRR) onboard the European Meteorological Satellite programme's (EUMETSAT) Meteosat satellite series were also used as an RS proxy measurement of rainfall. Temperature, atmospheric moisture and rainfall surfaces were independently derived from SI of measurements from the World Meteorological Organization (WMO) member stations of Africa. These meteorological station data were then used to test the accuracy of each methodology, so that the appropriateness of the two techniques for epidemiological research could be compared. SI was a more accurate predictor of temperature, whereas RS provided a better surrogate for rainfall; both were equally accurate at predicting atmospheric moisture. The implications of these results for mapping short- and long-term climate change, and hence their potential for the study and control of disease vectors, are considered. Taking into account logistical and analytical problems, there were no clear conclusions regarding the optimality of either technique, but there was considerable potential for synergy.
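The sketch below illustrates the general form of such an accuracy test: leave-one-out validation of a spatially interpolated surface against station measurements. The station coordinates and temperatures are synthetic, and simple inverse-distance weighting stands in for whichever interpolator the study actually used.

```python
# Sketch of leave-one-out validation of spatial interpolation against
# station data.  Stations and values are synthetic; inverse-distance
# weighting is an illustrative stand-in for the interpolator used.
import numpy as np

rng = np.random.default_rng(0)
lon = rng.uniform(-20, 50, 60)                            # synthetic station longitudes
lat = rng.uniform(-35, 35, 60)                            # synthetic station latitudes
temp = 30 - 0.3 * np.abs(lat) + rng.normal(0, 1.0, 60)    # synthetic monthly temperatures

def idw(x, y, xs, ys, vals, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from stations (xs, ys)."""
    d = np.hypot(xs - x, ys - y)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return np.sum(w * vals) / np.sum(w)

# Leave-one-out: predict each station from all the others
errors = []
for i in range(len(temp)):
    mask = np.arange(len(temp)) != i
    pred = idw(lon[i], lat[i], lon[mask], lat[mask], temp[mask])
    errors.append(pred - temp[i])

rmse = np.sqrt(np.mean(np.square(errors)))
print(f"leave-one-out RMSE of interpolated temperature: {rmse:.2f} degC")
```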
Abstract:
This chapter examines the ramifications of continental travel and associated epistolary communication for English poets of the period. It argues that recourse to neo-Latin, the universal language of diplomacy, served not only to establish a sense of shared space—linguistic, cultural, generic—between England and the continent, but also to signal self-conscious differences (climatic, geographical, historical, political) between England and her continental peers. Through an investigation of a range of ‘performances’ on stages that were ‘academic’, poetic, autobiographical, and epistolographic, it assesses the central role of neo-Latin as a language that underwent a series of textual itineraries. These ‘itineraries’ manifest themselves in a number of ways. Neo-Latin as a shared linguistic medium can facilitate, and quite uniquely so, intertextual engagement with the classics, but now ancient Rome, its language, its mythology, its hierarchy of genres, are viewed through a seventeenth-century lens and appropriated by poets in both England and Italy to describe contemporary events, whether personal, or political. Close examination of the neo-Latin poetry of Milton and Marvell reveals, it is argued, a self-fashioning coloured by such textual itineraries and interchanges. The absorption and replication of continental literary and linguistic methodologies (the academic debate; the etymological play of Marinism; the hybridity of neo-Latin and Italian voices) reveal in short a linguistic and textual reciprocity that gave birth to something very new.
Abstract:
Many high-state non-magnetic cataclysmic variables (CVs) exhibit blueshifted absorption or P-Cygni profiles associated with ultraviolet (UV) resonance lines. These features imply the existence of powerful accretion disc winds in CVs. Here, we use our Monte Carlo ionization and radiative transfer code to investigate whether disc wind models that produce realistic UV line profiles are also likely to generate observationally significant recombination line and continuum emission in the optical waveband. We also test whether outflows may be responsible for the single-peaked emission line profiles often seen in high-state CVs and for the weakness of the Balmer absorption edge (relative to simple models of optically thick accretion discs). We find that a standard disc wind model that is successful in reproducing the UV spectra of CVs also leaves a noticeable imprint on the optical spectrum, particularly for systems viewed at high inclination. The strongest optical wind-formed recombination lines are Hα and He II λ4686. We demonstrate that a higher density outflow model produces all the expected H and He lines and produces a recombination continuum that can fill in the Balmer jump at high inclinations. This model displays reasonable verisimilitude with the optical spectrum of RW Trianguli. No single-peaked emission is seen, although we observe a narrowing of the double-peaked emission lines from the base of the wind. Finally, we show that even denser models can produce a single-peaked Hα line. On the basis of our results, we suggest that winds can modify, and perhaps even dominate, the line and continuum emission from CVs.
Abstract:
We present a novel method for characterizing the light curves of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources as stochastic variables (SVs) or burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in four Pan-STARRS1 photometric bands, gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model, and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the squared distances from the cluster centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to classify 4361 extragalactic image-difference detected sources from the first 2.5 yr of the PS1 MDS into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
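The sketch below illustrates one step of this kind of pipeline: fitting a Gaussian burst model to a difference-flux light curve, comparing it with a trivial constant baseline via corrected AIC, and clustering per-source statistics with K-means. The light curve is synthetic and the constant baseline is only a stand-in for the Ornstein-Uhlenbeck model and the full model set used in the paper.

```python
# Sketch of the model-comparison step: Gaussian burst fit vs. a constant
# baseline, scored with corrected AIC (AICc), then K-means clustering on
# such statistics.  All data here are synthetic stand-ins.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(0, 100, 80)                       # observation epochs (days)
flux = 5.0 * np.exp(-0.5 * ((t - 40) / 8.0) ** 2) + rng.normal(0, 0.5, t.size)

def gaussian_burst(t, amp, t0, width):
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

def aicc(resid, k, n):
    rss = np.sum(resid ** 2)
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Burst-like model fit
popt, _ = curve_fit(gaussian_burst, t, flux,
                    p0=[flux.max(), t[np.argmax(flux)], 10.0])
aicc_bl = aicc(flux - gaussian_burst(t, *popt), k=3, n=t.size)

# Stand-in "stochastic" baseline: constant mean flux
aicc_sv = aicc(flux - flux.mean(), k=1, n=t.size)

# In a full pipeline, per-source statistics like this difference would feed
# the clustering; here we cluster a handful of made-up values.
stats = np.array([[aicc_bl - aicc_sv], [-40.0], [-35.0], [5.0], [8.0], [2.0]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(stats)
print("AICc(BL) - AICc(SV):", aicc_bl - aicc_sv, "cluster labels:", labels)
```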
Abstract:
New, automated forms of data analysis are required in order to understand the high-dimensional trajectories that are obtained from molecular dynamics simulations of proteins. Dimensionality reduction algorithms are particularly appealing in this regard, as they allow one to construct unbiased, low-dimensional representations of the trajectory using only the information encoded in the trajectory. The downside of this approach is that a different set of coordinates is required for each chemical system under study, precisely because the coordinates are constructed using information from the trajectory. In this paper we show how one can resolve this problem by using the sketch-map algorithm that we recently proposed to construct a low-dimensional representation of the structures contained in the Protein Data Bank (PDB). We show that the resulting coordinates are as useful for analysing trajectory data as coordinates constructed using landmark configurations taken from the trajectory, and that these coordinates can thus be used for understanding protein folding across a range of systems.
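For a flavour of the general idea, the sketch below builds a two-dimensional map from pairwise distances between a set of reference "structures". Classical multidimensional scaling is used here purely as a stand-in for sketch-map, and the structures are random vectors rather than PDB entries or RMSD-aligned conformations.

```python
# Sketch of distance-based dimensionality reduction of reference structures.
# MDS stands in for sketch-map; the "structures" are random vectors.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
reference = rng.normal(size=(50, 30))    # 50 reference "structures", 30 features

# Pairwise distances between reference structures
# (in practice an RMSD-like metric would be used)
diff = reference[:, None, :] - reference[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

# Two-dimensional embedding of the reference set
embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(dist)
print("reference map shape:", coords.shape)   # (50, 2)
```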
Abstract:
We present a homological characterisation of those chain complexes of modules over a Laurent polynomial ring in several indeterminates which are finitely dominated over the ground ring (that is, are a retract up to homotopy of a bounded complex of finitely generated free modules). The main tools, which we develop in the paper, are a non-standard totalisation construction for multi-complexes based on truncated products, and a high-dimensional mapping torus construction employing a theory of cubical diagrams that commute up to specified coherent homotopies.
Abstract:
We present a method for learning Bayesian networks from data sets containing thousands of variables without the need for structure constraints. Our approach consists of two parts. The first is a novel algorithm that effectively explores the space of possible parent sets of a node. It guides the exploration towards the most promising parent sets on the basis of an approximated score function that is computed in constant time. The second part is an improvement of an existing ordering-based algorithm for structure optimization. The new algorithm provably achieves a higher score than its original formulation. Our novel approach consistently outperforms the state of the art on very large data sets.
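As background for the parent-set exploration described above, the sketch below scores candidate parent sets of a single node with an exact BIC score and keeps the best one. The data and the exhaustive enumeration are illustrative only; the paper's contribution is an approximate score that guides this search in constant time and an improved ordering-based optimiser, neither of which is reproduced here.

```python
# Sketch of exact BIC scoring of candidate parent sets for one node of a
# discrete Bayesian network.  Data and brute-force enumeration are
# illustrative stand-ins for the paper's approximate, guided search.
from itertools import combinations
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 500
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)
flip = (rng.random(n) < 0.1).astype(int)
data = pd.DataFrame({"A": a, "B": b, "C": a ^ b ^ flip})  # C is a noisy XOR of A, B

def bic_score(child, parents, df):
    """BIC of `child` given `parents` under a multinomial local model."""
    n_rows, child_card = len(df), df[child].nunique()
    groups = df.groupby(list(parents)) if parents else [((), df)]
    loglik, n_configs = 0.0, 0
    for _, group in groups:
        n_configs += 1
        counts = group[child].value_counts().to_numpy()
        loglik += float(np.sum(counts * np.log(counts / counts.sum())))
    n_params = n_configs * (child_card - 1)
    return loglik - 0.5 * np.log(n_rows) * n_params

candidates = ["A", "B"]
all_parent_sets = [list(ps) for r in range(len(candidates) + 1)
                   for ps in combinations(candidates, r)]
best = max(all_parent_sets, key=lambda ps: bic_score("C", ps, data))
print("best parent set for C:", best)   # expected: ['A', 'B']
```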