977 results for binary data
Abstract:
13th Mediterranean Congress of Chemical Engineering (Sociedad Española de Química Industrial e Ingeniería Química, Fira Barcelona, Expoquimia), Barcelona, September 30-October 3, 2014
Abstract:
The aim of this report is to discuss a method for determining lattice-fluid binary interaction parameters by comparing well-characterized immiscible blends and block copolymers of poly(methyl methacrylate) (PMMA) and poly(ε-caprolactone) (PCL). Experimental pressure-volume-temperature (PVT) data in the liquid state were correlated with the Sanchez-Lacombe (SL) equation of state, with the scaling parameters for mixtures and copolymers obtained through combining rules for the characteristic parameters of the pure homopolymers. The lattice-fluid binary parameters for energy and volume of the blends were higher than those of the block copolymers, implying that the copolymers were more compatible because of the chemical links between the blocks. Therefore, a common parameter cannot account for both homopolymer blend and block copolymer phase behavior within the current theory. Because all blend data could be fitted with a single set of lattice-fluid binary parameters, and all block copolymer data with another single set, we conclude that neither parameter set depends on composition for this system. This characteristic, together with the fact that the additivity law of specific volumes applies well to this system, allowed us to model the behavior of the immiscible blend with the SL equation of state. In addition, we discuss the relationship between the lattice-fluid binary parameters and the Flory–Huggins interaction parameter obtained from Leibler's theory.
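For context, the Sanchez-Lacombe equation of state ties reduced density, pressure, and temperature together through three characteristic parameters (T*, P*, ρ*) per substance. Below is a minimal sketch of solving it for the density of a polymer melt; the PMMA-like characteristic parameters are illustrative placeholder values, not the parameters fitted in this work.

```python
# Minimal sketch: solve the Sanchez-Lacombe EOS for the density of a
# polymer melt. The characteristic parameters used below are illustrative
# PMMA-like placeholder values, not those fitted in the paper.
import numpy as np
from scipy.optimize import brentq

def sl_density(T, P, T_star, P_star, rho_star, r=np.inf):
    """Reduced-density root of: rho~^2 + P~ + T~[ln(1-rho~) + (1-1/r)rho~] = 0."""
    T_red, P_red = T / T_star, P / P_star
    inv_r = 0.0 if np.isinf(r) else 1.0 / r  # 1/r -> 0 for long chains

    def eos(rho_red):
        return (rho_red**2 + P_red
                + T_red * (np.log(1.0 - rho_red) + (1.0 - inv_r) * rho_red))

    rho_red = brentq(eos, 1e-10, 1.0 - 1e-10)  # liquid-like root in (0, 1)
    return rho_red * rho_star                  # back to absolute density

# T in K, P in MPa, density in g/cm^3 (placeholder parameters):
print(sl_density(T=450.0, P=10.0, T_star=696.0, P_star=503.0, rho_star=1.269))
```

For mixtures, the same form is used with T*, P*, and ρ* built from the pure-component parameters via combining rules that carry the binary interaction parameters the abstract refers to.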
Abstract:
Context. It appears that most (if not all) massive stars are born in multiple systems. At the same time, the most massive binaries are hard to find owing to their low numbers throughout the Galaxy and the implied large distances and extinctions. Aims. We want to study LS III +46 11, identified in this paper as a very massive binary; another nearby massive system, LS III +46 12; and the surrounding stellar cluster, Berkeley 90. Methods. Most of the data used in this paper are multi-epoch, high-S/N optical spectra, although we also use Lucky Imaging and archival photometry. The spectra are reduced with dedicated pipelines and processed with our own software, such as a spectroscopic-orbit code, CHORIZOS, and MGB. Results. LS III +46 11 is identified as a new very-early-O-type spectroscopic binary [O3.5 If* + O3.5 If*] and LS III +46 12 as another early-O-type system [O4.5 V((f))]. We measure a 97.2-day period for LS III +46 11 and derive minimum masses of 38.80 ± 0.83 M⊙ and 35.60 ± 0.77 M⊙ for its two stars. We measure the extinction to both stars, estimate the distance, search for optical companions, and study the surrounding cluster. In doing so, we find variable extinction as well as discrepant distance estimates. We discuss possible explanations and suggest that LS III +46 12 may be a hidden binary system whose companion is currently undetected.
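The minimum masses quoted for a double-lined spectroscopic binary follow directly from the period, eccentricity, and the two radial-velocity semi-amplitudes. The sketch below shows the standard relation; the K1, K2, and e values are hypothetical, chosen only to yield masses of the quoted order, and are not the orbital elements fitted in the paper.

```python
# Minimal sketch: minimum masses (M sin^3 i) of a double-lined
# spectroscopic binary. K1, K2, e below are hypothetical values of
# roughly the right order, not the paper's fitted orbital elements.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
DAY = 86400.0      # seconds per day

def min_masses(P_days, e, K1_kms, K2_kms):
    """Return (M1 sin^3 i, M2 sin^3 i) in solar masses."""
    P = P_days * DAY
    K1, K2 = K1_kms * 1e3, K2_kms * 1e3
    common = (1.0 - e**2) ** 1.5 * (K1 + K2) ** 2 * P / (2.0 * math.pi * G)
    return common * K2 / M_SUN, common * K1 / M_SUN

# A 97.2-day O+O system with hypothetical semi-amplitudes:
m1, m2 = min_masses(P_days=97.2, e=0.56, K1_kms=112.0, K2_kms=123.0)
print(f"M1 sin^3 i = {m1:.1f} Msun, M2 sin^3 i = {m2:.1f} Msun")
```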
Abstract:
Context. Since its launch, the INTEGRAL X-ray and γ-ray observatory has revealed a new class of high-mass X-ray binaries (HMXB) displaying fast flares and hosting supergiant companion stars. Optical and infrared (OIR) observations in a multi-wavelength context are essential to understand the nature and evolution of these newly discovered celestial objects. Aims. The goal of this multiwavelength study (from ultraviolet to infrared) is to characterise the properties of IGR J16465−4507, to confirm its HMXB nature, and to confirm that it hosts a supergiant star. Methods. We analysed all OIR photometric and spectroscopic observations of this source carried out at ESO facilities. Results. Using spectroscopic data, we constrained the spectral type of the companion star to between B0.5 Ib and B1 Ib, settling the debate on the true nature of this source. We measured a high rotation velocity of v = 320 ± 8 km s⁻¹ by fitting absorption and emission lines with a stellar spectral model. We then built a spectral energy distribution from photometric observations to evaluate the origin of the different components radiating in each energy range. Conclusions. Having accurately determined the spectral type of the early-B supergiant in IGR J16465−4507, we firmly support its classification as an intermediate supergiant fast X-ray transient (SFXT).
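Building a broad-band SED from catalogue photometry starts by converting magnitudes into fluxes with per-band zero points. A minimal sketch follows; the zero points are approximate Vega/2MASS values quoted for illustration, and the input magnitudes are hypothetical rather than measurements from the paper.

```python
# Minimal sketch: turn broad-band magnitudes into SED points (flux vs.
# wavelength). Zero points are approximate Vega/2MASS values; the input
# magnitudes are hypothetical, not measurements from the paper.

# band: (effective wavelength [um], zero-point flux [Jy]), approximate
BANDS = {"B": (0.44, 4063.0), "V": (0.55, 3636.0),
         "J": (1.235, 1594.0), "K": (2.159, 666.7)}

def sed_points(mags):
    """Return (wavelength_um, flux_Jy) pairs for the given magnitudes."""
    return [(BANDS[b][0], BANDS[b][1] * 10.0 ** (-0.4 * m))
            for b, m in mags.items()]

# Hypothetical magnitudes for a reddened early-B supergiant:
for wl, flux in sed_points({"B": 11.5, "V": 10.3, "J": 8.1, "K": 7.4}):
    print(f"{wl:6.3f} um  {flux:8.3f} Jy")
```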
Abstract:
Eleven sediment samples taken downcore and representing the past 26 kyr of deposition at MANOP site C (0°57.2′N, 138°57.3′W) were analyzed for lipid biomarker composition. Biomarkers of both terrestrial and marine sources of organic carbon were identified. In general, concentration profiles for these biomarkers and for total organic carbon (TOC) displayed three common stratigraphic features in the time series: (1) a maximum within the surface sediment mixed layer (≤4 ka); (2) a broad minimum extending throughout the interglacial deposit; and (3) a deep, pronounced maximum within the glacial deposit. Using the biomarker records, a simple binary mixing model is described that assesses the proportion of terrestrial to marine TOC in these sediments. Best estimates from this model suggest that ~20% of the TOC is land-derived, introduced by long-range eolian transport, and that the remainder derives from marine productivity. The direct correlation between the terrestrial and marine TOC records with depth in this core is consistent with the interpretation that primary productivity at site C has been controlled by wind-driven upwelling at least over the last glacial/interglacial cycle. The biomarker records place the greatest wind strength and highest primary productivity within the time frame of 18 to 22 kyr B.P. Diagenetic effects limit our ability to ascertain directly from the biomarker records the absolute magnitude of changes in the different types of primary productivity at this ocean location over the past 26 kyr.
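A binary (two-endmember) mixing model of the kind described here is a linear interpolation between a pure-marine and a pure-terrestrial signature. The sketch below illustrates the idea with hypothetical δ13C-style endmember values; it is not the paper's specific biomarker formulation.

```python
# Minimal sketch of a two-endmember (binary) mixing model. The endmember
# and sample values are hypothetical, chosen so the example lands near
# the ~20% terrestrial fraction the abstract reports.
def terrestrial_fraction(sample, marine_end, terrestrial_end):
    """Linear two-endmember mixing: fraction of TOC that is land-derived."""
    f = (sample - marine_end) / (terrestrial_end - marine_end)
    return min(max(f, 0.0), 1.0)  # clamp to the physical range [0, 1]

# Hypothetical delta13C endmembers (permil):
f_terr = terrestrial_fraction(sample=-21.4,
                              marine_end=-20.0,
                              terrestrial_end=-27.0)
print(f"terrestrial fraction of TOC: {f_terr:.0%}")  # -> 20%
```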
Abstract:
The use of a fully parametric Bayesian method for analysing single-patient trials based on the notion of treatment 'preference' is described. This Bayesian hierarchical modelling approach allows for full parameter uncertainty, the use of prior information, and the modelling of individual and patient sub-group structures. It provides updated probabilistic results for individual patients, and for groups of patients with the same medical condition, as they are sequentially enrolled into individualized trials using the same medication alternatives. Two clinically interpretable criteria for determining a patient's response are detailed and illustrated using data from a previously published paper under two different prior-information scenarios. Copyright (C) 2005 John Wiley & Sons, Ltd.
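To make the 'preference' idea concrete, the sketch below shows a conjugate beta-binomial update for a single patient: the probability that treatment A beats treatment B in a within-patient crossover pair is given a prior and updated as pairs accumulate. This is a deliberately simplified single-patient sketch, not the paper's full hierarchical model, and the prior and counts are assumptions.

```python
# Minimal single-patient sketch (not the paper's hierarchical model):
# conjugate beta-binomial updating of a treatment 'preference' probability.
from scipy.stats import beta

a0, b0 = 1.0, 1.0   # flat Beta(1, 1) prior on theta = P(A beats B in a pair)
k, n = 7, 10        # hypothetical data: A preferred in 7 of 10 crossover pairs

posterior = beta(a0 + k, b0 + (n - k))  # Beta(8, 4)

# Updated probability that this patient genuinely does better on A:
print(f"P(theta > 0.5 | data) = {posterior.sf(0.5):.3f}")
```

In the hierarchical version, the patient-level parameters would themselves be drawn from group-level distributions, so each newly enrolled patient's trial both borrows from and sharpens the group estimate.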
Abstract:
Computer modelling promises to be an important tool for analysing and predicting interactions between trees within mixed-species forest plantations. This study explored the use of an individual-based mechanistic model as a predictive tool for designing mixed-species plantations of Australian tropical trees. The 'spatially explicit individual-based forest simulator' (SeXI-FS) modelling system was used to describe the spatial interaction of individual tree crowns within a binary mixed-species experiment. The three-dimensional model was developed and verified with field data from three forest tree species grown in tropical Australia. The model predicted the interactions within monocultures and binary mixtures of Flindersia brayleyana, Eucalyptus pellita and Elaeocarpus grandis, accounting for an average of 42% of the growth variation exhibited by species in different treatments. The model requires only structural dimensions and shade tolerance as species parameters. By modelling interactions in existing tree mixtures, the model predicted both increases and reductions in the growth of mixtures (up to ±50% of stem volume at age 7 years) compared to monocultures. This modelling approach may be useful for designing mixed tree plantations. (c) 2006 Published by Elsevier B.V.
Abstract:
Hannenhalli and Pevzner developed the first polynomial-time algorithm for the combinatorial problem of sorting signed genomic data. Their algorithm computes the minimum number of reversals required to rearrange one genome into another when no gene is duplicated. In this paper, we show how to extend the Hannenhalli-Pevzner approach to genomes with multigene families. We propose a new heuristic algorithm that computes the reversal distance between two genomes with multigene families via binary integer programming, without removing gene duplicates. Experimental results on simulated and real biological data demonstrate that the proposed algorithm finds the reversal distance accurately. ©2005 IEEE
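For readers unfamiliar with the quantity being optimized: the reversal distance is the minimum number of segment reversals (which also flip gene orientation) turning one signed gene order into another. The brute-force sketch below makes that definition executable on tiny duplicate-free genomes; it is an illustration only, not the paper's binary integer programming method or the Hannenhalli-Pevzner algorithm.

```python
# Minimal sketch: reversal distance between tiny signed, duplicate-free
# genomes by brute-force BFS. Illustrates the distance being computed;
# NOT the paper's binary integer programming approach.
from collections import deque

def signed_reversal_distance(source, target):
    """Minimum number of signed reversals turning source into target."""
    source, target = tuple(source), tuple(target)
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        perm, dist = frontier.popleft()
        if perm == target:
            return dist
        for i in range(len(perm)):
            for j in range(i, len(perm)):
                # Reverse the segment perm[i..j] and flip its signs.
                seg = tuple(-g for g in reversed(perm[i:j + 1]))
                nxt = perm[:i] + seg + perm[j + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, dist + 1))

print(signed_reversal_distance((+3, -1, +2), (+1, +2, +3)))
```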
Abstract:
In this paper we present an efficient k-means clustering algorithm for two-dimensional data. The proposed algorithm re-organizes the dataset into a nested binary tree. Data items are compared at each node with only the two nearest means with respect to each dimension and assigned to the closer one. The main intuition of our research is as follows: we build the nested binary tree; we then scan the data in raster order by in-order traversal of the tree; lastly, we compare the data item at each node with only the two nearest means to assign it to the intended cluster. In this way we save significant computational cost by reducing the number of comparisons with means and by minimal use of the Euclidean distance formula. Our results showed that our method performs the clustering operation much faster than the classical ones. © Springer-Verlag Berlin Heidelberg 2005
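The sketch below illustrates the core cost-saving idea as stated in the abstract: each point is tested against only two candidate means rather than all k, so the full Euclidean distance is evaluated just twice per point. The candidate selection here (the two means nearest along one dimension) is a simplification of the paper's nested-binary-tree traversal, and the data are invented.

```python
# Minimal sketch of two-candidate assignment for 2-D k-means. Candidate
# selection by nearest x-coordinate is a simplification of the paper's
# nested-binary-tree traversal; the data below are invented.
import numpy as np

def assign_two_candidates(points, means):
    """Assign each 2-D point to the closer of its two candidate means."""
    labels = np.empty(len(points), dtype=int)
    for idx, p in enumerate(points):
        # Two means closest to p along the first dimension only.
        cand = np.argsort(np.abs(means[:, 0] - p[0]))[:2]
        # Full Euclidean distance is computed only for those two.
        d = np.linalg.norm(means[cand] - p, axis=1)
        labels[idx] = cand[np.argmin(d)]
    return labels

points = np.array([[0.1, 0.2], [5.0, 5.1], [9.8, 0.3]])
means = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
print(assign_two_candidates(points, means))  # -> [0 1 2]
```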
Abstract:
We discuss aggregation of data from neuropsychological patients and the process of evaluating models using data from a series of patients. We argue that aggregation can be misleading, but that not aggregating can also result in information loss. The basis for combining data needs to be theoretically defined, and the particular method of aggregation depends on the theoretical question and the characteristics of the data. We present examples, often drawn from our own research, to illustrate these points. We also argue that statistical models and formal methods of model selection are a useful way to test theoretical accounts using data from several patients in multiple-case studies or case series. Statistical models can often measure fit in a way that explicitly captures what a theory allows; the parameter values that result from model fitting often measure theoretically important dimensions and can lead to more constrained theories or new predictions; and model selection allows the strength of evidence for models to be quantified without forcing it into the artificial binary choice that characterizes hypothesis-testing methods. Methods that aggregate and then formally model patient data, however, are not automatically preferred to other methods. Which method is preferred depends on the question to be addressed, the characteristics of the data, and practical issues such as the availability of suitable patients, but case series, multiple-case studies, single-case studies, statistical models, and process models should be complementary methods when guided by theory development.
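As one concrete example of the graded model selection advocated here, Akaike weights convert the fit of several candidate models into relative evidence rather than a binary accept/reject decision. The sketch below is generic; the log-likelihoods and parameter counts are invented.

```python
# Minimal sketch: Akaike weights as graded evidence for candidate models,
# instead of a binary hypothesis-test outcome. Numbers are invented.
import numpy as np

def akaike_weights(log_likelihoods, n_params):
    """Relative evidence for each model from its fit and complexity."""
    aic = -2.0 * np.asarray(log_likelihoods) + 2.0 * np.asarray(n_params)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical fits of three cognitive models to one case series:
print(akaike_weights([-120.3, -118.9, -125.0], [3, 5, 2]))
```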
Abstract:
Binary distributed representations of vector data (numerical, textual, visual) are investigated in classification tasks. A comparative analysis of results for various methods and tasks using artificial and real-world data is given.
Abstract:
Usually, data mining projects that use decision trees to classify test cases rely on the probabilities provided by these decision trees to rank the classified cases. A better method is needed for ranking test cases that have already been classified by a binary decision tree, because these probabilities are not always accurate and reliable enough. One reason is that the probability estimates computed by existing decision tree algorithms are the same for all the different cases in a particular leaf of the tree. This is only one reason why the probability estimates given by decision tree algorithms cannot be used as an accurate means of deciding whether a test case has been correctly classified. Isabelle Alvarez has proposed a new method for ranking test cases classified by a binary decision tree [Alvarez, 2004]. In this paper we give the results of a comparison of different ranking methods based on the probability estimate, on the sensitivity of a particular case, or on both.
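The sketch below makes the leaf-probability problem concrete and shows the widely used Laplace correction, which discounts estimates from small leaves. It illustrates the issue the abstract raises; it is not Alvarez's sensitivity-based method, and the counts are invented.

```python
# Minimal sketch: raw leaf frequencies vs. the Laplace correction for
# decision-tree probability estimates. Counts are invented; this is not
# Alvarez's sensitivity-based ranking method.
def leaf_probability(positives, total, smoothing="laplace", classes=2):
    """Probability estimate for the positive class at one leaf."""
    if smoothing == "laplace":
        return (positives + 1) / (total + classes)
    return positives / total  # raw leaf frequency

# A 1-case leaf looks 'certain' with raw frequencies but is discounted
# once smoothed, while a well-populated leaf barely changes:
print(leaf_probability(1, 1, smoothing="none"))     # 1.0
print(leaf_probability(1, 1))                       # ≈ 0.667
print(leaf_probability(90, 100, smoothing="none"))  # 0.9
print(leaf_probability(90, 100))                    # ≈ 0.892
```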
Abstract:
AMS Subj. Classification: 62P10, 62H30, 68T01
Abstract:
An implementation of Sem-ODB, a database management system based on the Semantic Binary Model, is presented. A metaschema of a Sem-ODB database as well as the top-level architecture of the database engine is defined. A new benchmarking technique is proposed which allows databases built on different database models to compete fairly. This technique is applied to show that Sem-ODB is highly efficient compared with a relational database on a certain class of database applications. A new semantic benchmark is designed which allows evaluation of the performance of the features characteristic of semantic database applications. The application used in the benchmark represents a class of problems requiring databases with sparse data, complex inheritances, and many-to-many relations. Such databases can be naturally accommodated by the semantic model. A fixed predefined implementation is not enforced, allowing the database designer to choose the most efficient structures available in the DBMS tested. The results of the benchmark are analyzed.

A new high-level querying model for semantic databases is defined. It is proven adequate to serve as an efficient native semantic database interface, and it has several advantages over existing interfaces. It is optimizable and parallelizable, and it supports the definition of semantic user views and the interoperability of semantic databases with other data sources such as the World Wide Web and relational and object-oriented databases. The query is structured as a semantic database schema graph with interlinking conditionals. The query result is a mini-database, accessible in the same way as the original database. The paradigm supports and utilizes the rich semantics and inherent ergonomics of semantic databases.

The analysis and high-level design of a system that exploits the superiority of the Semantic Database Model over other data models in expressive power and ease of use is presented; the system allows uniform access to heterogeneous data sources such as semantic databases, relational databases, web sites, ASCII files, and others via a common query interface. The Sem-ODB engine is used to control all the data sources combined under a unified semantic schema. A particular application of the system, providing an ODBC interface to the WWW as a data source, is discussed.
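To give a flavor of the Semantic Binary Model's view of data: instead of fixed-width tables, the database is a collection of facts built from categories and binary relations between objects and values, which is why sparse data and many-to-many relations are natural. The sketch below is a generic toy illustration; the names and facts are invented and it is not Sem-ODB's actual interface.

```python
# Toy sketch of the binary-relation view of data: the whole database is a
# set of (object, relation, object-or-value) facts. Names are invented;
# this is not Sem-ODB's actual API.
facts = {
    ("person:1", "CATEGORY", "Student"),
    ("person:1", "name", "Ada"),
    ("person:1", "enrolled-in", "course:7"),
    ("person:2", "enrolled-in", "course:7"),   # many-to-many is natural
    ("course:7", "title", "Databases"),
}

def related(db, subject, relation):
    """All objects/values linked to `subject` via `relation`."""
    return {obj for subj, rel, obj in db if subj == subject and rel == relation}

print(related(facts, "person:1", "enrolled-in"))  # {'course:7'}
```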