959 results for Binary
Abstract:
Black hole X-ray binaries, binary systems in which matter from a companion star is accreted by a stellar-mass black hole, thereby releasing enormous amounts of gravitational energy converted into radiation, are seen as strong X-ray sources in the sky. As a black hole can only be detected via its interaction with its surroundings, these binary systems provide important evidence for the existence of black holes. There are now at least twenty cases where the measured mass of the X-ray emitting compact object in a binary exceeds the upper limit for a neutron star, implying the presence of a black hole. These binary systems serve as excellent laboratories not only to study the physics of accretion but also to test predictions of general relativity in strongly curved spacetime. An understanding of the accretion flow onto these, the most compact objects in our Universe, is therefore of great importance to physics. We are only now slowly beginning to understand the spectra and variability observed in these X-ray sources. During the last decade, a framework has developed that interprets the spectral evolution in terms of changes in the physics and geometry of the accretion flow driven by a variable accretion rate. This doctoral thesis presents studies of two black hole binary systems, Cygnus X-1 and GRS 1915+105, plus the possible black hole candidate Cygnus X-3, and the results of an attempt to interpret their observed properties within this emerging framework. The main result presented in this thesis is an interpretation of the spectral variability of the enigmatic source Cygnus X-3, including the nature and accretion geometry of its so-called hard spectral state. The results suggest that the compact object in this source, which has not been uniquely identified as a black hole on the basis of standard mass measurements, is most probably a massive, ~30 Msun, black hole, and thus the most massive black hole observed in a binary in our Galaxy so far. In addition, results concerning a possible observation of limit-cycle variability in the microquasar GRS 1915+105 are presented, as well as evidence of 'mini-hysteresis' in the extreme hard state of Cygnus X-1.
Abstract:
This paper presents an approach to automatic road extraction in an urban region using the structural, spectral, and geometric characteristics of roads. Roads are extracted in two stages: pre-processing and road extraction. First, the image is pre-processed to improve tolerance by reducing clutter (which mostly represents buildings, parking lots, vegetation, and other open spaces). Road segments are then extracted using Texture Progressive Analysis (TPA) and the Normalized cut algorithm. The TPA technique uses binary segmentation based on three levels of texture statistical evaluation to extract road segments, whereas the Normalized cut method is a graph-based method that generates an optimal partition of road segments. The performance (quality measures) of road extraction using TPA and the Normalized cut method is compared. The experimental results show that the Normalized cut method is effective at extracting road segments in urban regions from high-resolution satellite imagery.
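As a rough illustration of the graph-based half of such a pipeline, the sketch below runs a generic normalized-cut segmentation with scikit-image; it is not the paper's TPA pipeline, and the file name, superpixel count, and intensity band used for the crude road mask are placeholders.

```python
import numpy as np
from skimage import io, segmentation
from skimage import graph  # on older scikit-image: from skimage.future import graph

# Load a satellite image tile (hypothetical path).
image = io.imread("urban_tile.tif")

# Over-segment into superpixels so the region graph stays small.
labels = segmentation.slic(image, n_segments=400, compactness=20, start_label=1)

# Build a region adjacency graph weighted by colour similarity and
# partition it with the normalized cut criterion.
rag = graph.rag_mean_color(image, labels, mode='similarity')
labels_ncut = graph.cut_normalized(labels, rag)

# A crude road mask: keep regions whose mean intensity falls in a band
# typical of asphalt (the thresholds are placeholders, not from the paper).
road_mask = np.zeros(labels_ncut.shape, dtype=bool)
for region in np.unique(labels_ncut):
    mean_val = image[labels_ncut == region].mean()
    if 60 < mean_val < 140:
        road_mask[labels_ncut == region] = True
```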
Abstract:
For efficient fusion energy production, the plasma-facing wall materials of a fusion reactor must withstand long operation times. In the next-step fusion device, ITER, the first-wall region facing the highest heat and particle loads, the divertor area, will mainly consist of tungsten-based tiles. During reactor operation, the tungsten material is slowly but inevitably saturated with tritium, the relatively short-lived hydrogen isotope used in the fusion reaction. The amount of tritium retained in the wall materials should be minimized, and its recycling back to the plasma must be unrestrained; otherwise it cannot be used for fuelling the plasma. Replacing the first walls frequently is very expensive and thus economically not viable. A better solution is to heat the walls to temperatures at which tritium is released. Unfortunately, the exact mechanisms of hydrogen release from tungsten are not known. In this thesis both experimental and computational methods have been used to study the release and retention of hydrogen in tungsten. The experimental work consists of hydrogen implantations into pure polycrystalline tungsten, determination of the hydrogen concentrations using ion beam analysis (IBA), and monitoring of the out-diffused hydrogen gas with thermodesorption spectrometry (TDS) as the tungsten samples are heated to elevated temperatures. By combining IBA methods with TDS, both the retained amount of hydrogen and the temperatures needed for hydrogen release are obtained. With computational methods, hydrogen-defect interactions and implantation-induced irradiation damage can be examined at the atomic level. Multiscale modelling combines results obtained from computational methodologies applicable at different length and time scales. Electron density functional theory calculations were used to determine the energetics of the elementary processes of hydrogen in tungsten, such as diffusivity and trapping at vacancies and surfaces. Results for the energetics of pure tungsten defects were used in the development of a classical bond-order potential describing tungsten defects for use in molecular dynamics simulations. The developed potential was used to determine defect clustering and annihilation properties. These results were further employed in binary collision and rate theory calculations to determine the evolution of the large defect clusters that trap hydrogen during implantation. The computed defect and trapped-hydrogen concentrations compared favourably with the experimental results. With this multiscale analysis, the experimental results within this thesis and those found in the literature were explained both quantitatively and qualitatively.
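For readers unfamiliar with how a TDS peak relates to detrapping energetics, the following sketch models desorption from a single trap type with first-order (Polanyi-Wigner) kinetics under a linear temperature ramp. The detrapping energy, attempt frequency, and ramp rate are illustrative values, not results from the thesis.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def tds_spectrum(e_trap=1.4, nu=1e13, ramp=1.0, n0=1.0,
                 t_start=300.0, t_end=1300.0, dt=0.01):
    """First-order (Polanyi-Wigner) desorption from a single trap type.

    e_trap : detrapping energy in eV (illustrative, not a thesis value)
    nu     : attempt frequency in 1/s
    ramp   : linear heating rate in K/s
    n0     : initial trapped-hydrogen inventory (arbitrary units)
    Returns arrays of temperature and desorption rate.
    """
    temps, rates = [], []
    n, t = n0, t_start
    while t < t_end and n > 1e-9 * n0:
        rate = nu * n * np.exp(-e_trap / (K_B * t))  # instantaneous dN/dt
        n -= rate * dt                               # deplete the trap
        t += ramp * dt                               # linear temperature ramp
        temps.append(t)
        rates.append(rate)
    return np.array(temps), np.array(rates)

T, R = tds_spectrum()
print("peak desorption near %.0f K" % T[np.argmax(R)])
```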
Abstract:
The increased availability of image-capturing devices has enabled collections of digital images to expand rapidly in both size and diversity. This has created a constantly growing need for efficient and effective image browsing, searching, and retrieval tools. Pseudo-relevance feedback (PRF) has proven to be an effective mechanism for improving retrieval accuracy. An original, simple yet effective rank-based PRF mechanism (RB-PRF), which takes into account the initial rank order of each image to improve retrieval accuracy, is proposed. The RB-PRF mechanism makes use of binary image signatures to improve retrieval precision, promoting images similar to highly ranked images and demoting images similar to lower-ranked images. Empirical evaluations based on standard benchmarks, namely the Wang, Oliva & Torralba, and Corel datasets, demonstrate the effectiveness of the proposed RB-PRF mechanism in image retrieval.
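A minimal sketch of the rank-based feedback idea follows: after an initial retrieval pass, images whose binary signatures lie close (in Hamming distance) to the top-ranked results are promoted and those close to the bottom-ranked results are demoted. The scoring scheme and weights are this sketch's own simplifications, not necessarily the paper's exact formulation.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary signature vectors (0/1 arrays)."""
    return np.count_nonzero(a != b)

def rank_based_prf(signatures, initial_ranking, n_pos=5, n_neg=5):
    """Re-rank an initial retrieval list with pseudo-relevance feedback.

    signatures      : dict image_id -> binary signature (numpy 0/1 array)
    initial_ranking : list of image_ids, best match first
    Images close to the top-ranked results are promoted, images close to
    the bottom-ranked results are demoted.  Weights are illustrative only.
    """
    positives = initial_ranking[:n_pos]    # pseudo-relevant set
    negatives = initial_ranking[-n_neg:]   # pseudo-irrelevant set
    n_bits = len(next(iter(signatures.values())))

    def score(img_id):
        sig = signatures[img_id]
        pos = np.mean([hamming(sig, signatures[p]) for p in positives])
        neg = np.mean([hamming(sig, signatures[q]) for q in negatives])
        # Lower distance to positives and higher distance to negatives is better.
        return (neg - pos) / n_bits

    return sorted(initial_ranking, key=score, reverse=True)
```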
Abstract:
Nucleation is the first step of a first-order phase transition; in nucleation a new phase always emerges. The two main categories of nucleation are homogeneous nucleation, where the new phase forms in a uniform substance, and heterogeneous nucleation, where nucleation occurs on a pre-existing surface. This thesis focuses mainly on heterogeneous nucleation and treats the phenomenon from two theoretical perspectives: the classical nucleation theory and the statistical mechanical approach. The formulation of the classical nucleation theory relies on equilibrium thermodynamics and the use of macroscopically determined quantities to describe the properties of small nuclei, sometimes consisting of just a few molecules. The statistical mechanical approach is based on interactions between single molecules and does not rest on the same assumptions as the classical theory. This work gathers the present theoretical knowledge of heterogeneous nucleation and applies it in computational model studies. A new, exact molecular approach to heterogeneous nucleation was introduced and tested by Monte Carlo simulations. The results obtained from the molecular simulations were interpreted in terms of the concepts of the classical nucleation theory. Numerical calculations were carried out for a variety of substances nucleating on different substances. The classical theory of heterogeneous nucleation was employed in calculations of one-component nucleation of water on newsprint paper, Teflon and cellulose film, and of binary nucleation of water-n-propanol and water-sulphuric acid mixtures on silver nanoparticles. The results were compared with experimental results. The molecular simulation studies covered homogeneous nucleation of argon and heterogeneous nucleation of argon on a planar platinum surface. It was found that using a microscopic contact angle as a fitting parameter in calculations based on the classical theory of heterogeneous nucleation leads to fair agreement between the theoretical predictions and experimental results. In the presented cases the microscopic angle was always smaller than the contact angle obtained from macroscopic measurements. Furthermore, the molecular Monte Carlo simulations revealed that the concept of the geometric contact parameter in heterogeneous nucleation calculations can work surprisingly well even for very small clusters.
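For reference, the textbook classical-theory relations for heterogeneous nucleation on a flat substrate are summarised below; the microscopic contact angle mentioned above enters through the geometric factor f(theta). These are the standard expressions, quoted for context, not the thesis' fitted values.

```latex
% Classical heterogeneous nucleation on a flat substrate: the free-energy
% barrier is the homogeneous barrier scaled by a geometric factor that
% depends only on the contact angle \theta.
\begin{align}
  \Delta G^{*}_{\mathrm{het}} &= f(\theta)\,\Delta G^{*}_{\mathrm{hom}},\\
  f(\theta) &= \frac{(2+\cos\theta)(1-\cos\theta)^{2}}{4},
  \qquad 0 \le f(\theta) \le 1 .
\end{align}
```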
Abstract:
The problem of constructing space-time (ST) block codes over a fixed, desired signal constellation is considered. In this situation, there is a tradeoff between the transmission rate as measured in constellation symbols per channel use and the transmit diversity gain achieved by the code. The transmit diversity is a measure of the rate of polynomial decay of the pairwise error probability of the code with increasing signal-to-noise ratio (SNR). In the setting of a quasi-static channel model, let n_t denote the number of transmit antennas and T the block interval. For any n_t <= T, a unified construction of (n_t x T) ST codes is provided here, for a class of signal constellations that includes the familiar pulse-amplitude (PAM), quadrature-amplitude (QAM), and 2^K-ary phase-shift-keying (PSK) modulations as special cases. The construction is optimal as measured by the rate-diversity tradeoff and can achieve any given integer point on the rate-diversity tradeoff curve. An estimate of the coding gain realized is given. Other results presented here include i) an extension of the optimal unified construction to the multiple fading block case, ii) a version of the optimal unified construction in which the underlying binary block codes are replaced by trellis codes, iii) the provision of a linear dispersion form for the underlying binary block codes, iv) a Gray-mapped version of the unified construction, and v) a generalization of the construction to the S-ary case corresponding to constellations of size S^K. Items ii) and iii) are aimed at simplifying the decoding of this class of ST codes.
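The rate-diversity tradeoff referred to above is usually stated as a Singleton-type bound; a hedged summary, using n_t for the number of transmit antennas, R for the rate in constellation symbols per channel use, and d for the transmit diversity gain, is given below. Codes meeting the bound with equality are rate-diversity optimal.

```latex
% Singleton-type rate-diversity tradeoff for n_t transmit antennas.
\begin{equation}
  R \;\le\; n_t - d + 1, \qquad 1 \le d \le n_t .
\end{equation}
```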
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. When estimating class frequencies, however, the study variable is binary or polytomous, so logistic-type assisting models (e.g. logistic or probit models) should be preferred over the linear one. Yet GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors rather than of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG especially if the domain sample size is small or if the assisting model is strong. Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample-fit model and the census-fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
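For context, the generic model-assisted GREG form shared by GREG-lin and the L-GREG estimators can be written as below; the notation is the standard one assumed here rather than quoted from the study.

```latex
% Generic model-assisted GREG estimator of a (domain) total of y.
% U is the population, s the sample, \pi_k the inclusion probability and
% \hat{y}_k the assisting-model prediction: a linear fit for GREG-lin,
% a fitted logistic/probit probability for the L-GREG estimators.
\begin{equation}
  \hat{t}_{y,\mathrm{GREG}}
  = \sum_{k \in U} \hat{y}_k
  + \sum_{k \in s} \frac{y_k - \hat{y}_k}{\pi_k},
  \qquad
  y_k \in \{0,1\} \text{ for a class frequency.}
\end{equation}
```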
Abstract:
- Background Sonography is an important diagnostic tool in children with suspected appendicitis. Reported accuracy and appendiceal visualisation rates vary significantly, as does the management of equivocal ultrasound findings. The aim of this study was to audit appendiceal sonography at a tertiary children's hospital and provide baseline data for a future prospective study. - Summary of work Records of children who underwent ultrasound studies for possible appendicitis between January 2008 and December 2010 were reviewed. Variables included patient demographics, sonographic appendix characteristics, and secondary signs. Descriptive statistics and analyses using ANOVA, the Mann-Whitney U test, and ROC curves were performed. Mater Human Research Ethics Committee approval was granted. - Summary of results There were 457 eligible children. Using a dichotomous diagnostic model (including equivocal results), sensitivity was 89.6%, specificity was 91.6%, and the diagnostic yield was 40.7%. ROC curve analysis of a 6 mm diameter cut-off gave an AUC of 0.88 (95% CI 0.80 to 0.95). - Discussion and conclusions Sonography is an accurate test for acute appendicitis in children, with a high sensitivity and negative predictive value. A diameter of 6 mm as an absolute cut-off in a binary model can lead to false findings. Results were compared with the available literature. Recent publications propose categorising diameter [1] and integrating secondary signs [2] to improve accuracy and provide more meaningful results to clinicians. This study will be a benchmark for future studies with multiple diagnostic categorisation.
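A small illustration of how a single diameter cut-off is evaluated against a dichotomous outcome is sketched below with synthetic data; the numbers are placeholders and do not reproduce the study's records.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic placeholder data: appendix diameter in mm and surgical outcome
# (1 = appendicitis).  These are NOT the study's records.
rng = np.random.default_rng(0)
diameter = np.concatenate([rng.normal(5.0, 1.0, 300),    # non-appendicitis
                           rng.normal(9.0, 2.0, 150)])   # appendicitis
truth = np.concatenate([np.zeros(300), np.ones(150)]).astype(int)

# Dichotomous test at the 6 mm cut-off.
positive = diameter >= 6.0
tp = np.sum(positive & (truth == 1))
tn = np.sum(~positive & (truth == 0))
fp = np.sum(positive & (truth == 0))
fn = np.sum(~positive & (truth == 1))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(truth, diameter)  # cut-off-free summary of accuracy
print(f"sens={sensitivity:.2f}  spec={specificity:.2f}  AUC={auc:.2f}")
```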
Abstract:
The viscosity of five binary gas mixtures - namely, oxygen-hydrogen, oxygen-nitrogen, oxygen-carbon dioxide, carbon dioxide-nitrogen, carbon dioxide-hydrogen - and two ternary mixtures - oxygen-nitrogen-carbon dioxide and oxygen-hydrogen-carbon dioxide - were determined at ambient temperature and pressure using an oscillating disk viscometer. The theoretical expressions of several investigators were in good agreement with the experimental results obtained with this viscometer. In the case of the ternary gas mixture oxygen-carbon dioxide-nitrogen, as long as the volumetric ratio of oxygen to carbon dioxide in the mixture was maintained at 11 to 8, the viscosity of the ternary mixture at ambient temperature and pressure remained constant irrespective of the percentage of nitrogen present in the mixture.
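One widely used theoretical expression for the viscosity of a gas mixture, representative of the kind compared against such measurements, is Wilke's approximation; it is quoted here for context and is not necessarily the exact expression used in the paper.

```latex
% Wilke's semi-empirical rule for the viscosity of a multicomponent gas
% mixture.  x_i are mole fractions, \mu_i pure-component viscosities and
% M_i molar masses.
\begin{align}
  \mu_{\mathrm{mix}} &= \sum_i \frac{x_i\,\mu_i}{\sum_j x_j\,\phi_{ij}},\\
  \phi_{ij} &= \frac{\left[1+(\mu_i/\mu_j)^{1/2}(M_j/M_i)^{1/4}\right]^{2}}
                   {\sqrt{8}\,\left(1+M_i/M_j\right)^{1/2}} .
\end{align}
```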
Abstract:
The subject matter of this study is the cultural knowledge concerning romantic male-female relationships in autobiographies written by so-called ordinary Finnish men and women born between 1901 and 1965. The research data (98 autobiographies) were selected from two collections made by the Finnish Literature Society's folklore archives in the early 1990s. Autobiographies are cultural representations in which the negotiation of shared cultural models and of the personal meanings given to the hetero-relationship is evident in an interesting manner. In this research I analyze autobiographies as a written folklore genre. Information concerning male-female relationships is analyzed using theoretically informed close readings: thematic analysis, intertextual reading and reflexive reading. The theoretical framework stems from cognitive anthropology (the idea of cultural models) and an adaptation of discourse theory inspired by Michel Foucault. The structure of the analysis follows the structure of the shared knowledge concerning the romantic male-female relationship: the first phase of the analysis presents the script of a hetero-relationship and then moves on to the actual structure, the cultural model of a relationship. The components of the model of a relationship are, as mentioned in the title of the research, woman, man, love and sex. The research shows that all the writers share this basic knowledge concerning a heterosexual relationship regardless of their age, background or gender. The conflicts described and experienced in the writers' relationships were also similar throughout the timespan from the early 1900s to the 1990s: lack of love, inability to reconcile sexual desires, housework, sharing the responsibility of childcare, and financial problems. The research claims that conflicts in relationships are a major cause of the binary view of gender. When relationships are harmonious, there seems to be no need to see men and women as opposites. The research names five important discourses present in the meaning-giving processes of the autobiographers. In doing so, the stable cultural model of the male-female relationship widens to show the complexity and variation in the data, and it becomes possible to detect some age- and gender-specific shifts and emphases. The discourses give meaning to the components of the cultural model and determine the contents of womanhood, manhood, sexuality and love. The discourses differ in how they are spread and in where their authority lies. The romantic discourse evident in the autobiographies appeals to the authority of love: supreme love is the purpose of the male-female relationship and it justifies sexuality. In this discourse sex can be the place for a confluence of genders. The ideas of romantic love are widely spread in popular culture. Popular scientific discourse defines a relationship as a site in which to become a man and a woman, either from a psychological or a biological point of view. Genders are seen as opposites. These ideas are often presented in the media, and their authority rests on science, which is seen as infallible. The Christian discourse defines men and women: both should work for the benefit of the nuclear family under the undisputed authority of God. Marital love is based on Christian virtues, and within marriage sexuality is acceptable. The discourse I have named folk tradition defines women and men as guardians of home and offspring. The authority of folk tradition comes from universal truths based in experience and truths known to the mediators of this discourse: grandparents, parents and other elders or peers. Societal discourse defines the hetero-relationship as the mainstay of society. The authority of societal discourse stems from the laws and regulations that control relationship practices.
Abstract:
The influence of 0.03 and 0.08 at. % Ag additions on the clustering of Zn atoms in an Al-4.4 at. % Zn alloy has been studied by resistometry. The effect of quenching and ageing temperatures shows that the ageing-ratio method of calculating the vacancy-solute atom binding energy is not applicable to these alloys. Zone-formation in Al-Zn is unaffected by Ag additions, but the zone-reversion process seems to be influenced. Apparent vacancy-formation energies in the binary and ternary alloys have been used to evaluate the v-Ag atom binding energy as 0.21 eV. It is proposed that, Ag and Zn being similar in size, the relative vacancy binding results from valency effects, and that in Al-Zn-Ag alloys clusters of Zn and Ag may form simultaneously, unaffected by the presence of each other. © 1970 Chapman and Hall Ltd.
Abstract:
Atmospheric particles affect the radiation balance of the Earth and thus the climate. New particle formation by nucleation has been observed in diverse atmospheric conditions, but the actual formation path is still unknown. The prevailing conditions can be exploited to evaluate proposed formation mechanisms. This study aims to improve our understanding of new particle formation from the viewpoint of atmospheric conditions. The role of atmospheric conditions in particle formation was studied through atmospheric measurements, theoretical model simulations and simulations based on observations. Two separate column models were further developed for aerosol and chemical simulations. The model simulations allowed us to expand the study from local conditions to varying conditions in the atmospheric boundary layer, while the long-term measurements characterized the mean conditions associated with new particle formation. The observations show a statistically significant difference in meteorological and background aerosol conditions between observed event and non-event days. New particle formation above the boreal forest is associated with strong convective activity, low humidity and a low condensation sink. The probability of a particle formation event is predicted by an equation formulated for upper boundary layer conditions. The model simulations call into question whether kinetic sulphuric acid induced nucleation is the primary particle formation mechanism in the presence of organic vapours. At the same time, the simulations show that ignoring spatial and temporal variation in new particle formation studies may lead to faulty conclusions. On the other hand, the theoretical simulations indicate that short-scale variations in temperature and humidity are unlikely to have a significant effect on the mean binary water-sulphuric acid nucleation rate. The study emphasizes the significance of mixing and fluxes in particle formation studies, especially in the atmospheric boundary layer. The further developed models allow extensive aerosol physical and chemical studies in the future.
Abstract:
A knowledge of the concentration distribution around a burning droplet is essential if accurate estimates are to be made of the transport coefficients in that region, which influence the burning rate. There are two aspects to this paper: (1) determination of the concentration profiles, using the simple assumption of constant binary diffusion coefficients for all species, and comparison with experiments; and (2) postulation of a new relation for the thermal conductivity that takes into account the variations of both temperature and the concentrations of the various species. First, the theoretical concentration profiles are evaluated and compared with experimental results reported elsewhere [5]; the agreement between theory and experiment is found to be fairly satisfactory. Then, using these profiles and the relations proposed in the literature for the thermal conductivity of a mixture of nonpolar gases, a new relation for the thermal conductivity, K = (A1 + B1 T) + (A2 + B2 T) x_r, is suggested for analytical solutions of droplet combustion problems. Equations are presented to evaluate A1, A2, B1 and B2, and values of these terms for a few hydrocarbons are tabulated.
Abstract:
The aim of this report is to discuss the role of relationship type and communication in two Finnish food chains, namely the pig meat-to-sausage (pig meat chain) and the cereal-to-rye bread (rye chain) chains. A further objective is to examine the factors influencing the choice of a relationship type and the sustainability of a business relationship. Altogether 1808 questionnaires were sent to producers, processors and retailers operating in these two chains, of which 224 usable questionnaires were returned (a response rate of 12.4%). The great majority of the respondents (98.7%) were small businesses employing fewer than 50 people. Almost 70 per cent of the respondents were farmers. In both chains, formal contracts were stated to be the most important relationship type used with business partners. Although written contracts are a common business practice for many businesses, the essential role of the contracts was the security they provide regarding demand/supply and quality issues. Relative to the choice of relationship types, the main difference between the two chains emerged especially in the prevalence of spot markets and financial participation arrangements. The usage of spot markets was significantly more common in the rye chain than in the pig meat chain, while financial participation arrangements were much more common among the businesses in the pig meat chain than in the rye chain. Furthermore, the analysis showed that most of the businesses in the pig meat chain claimed not to be free to choose the relationship type they use. Membership in a co-operative and the practices of a business partner, in particular, were mentioned as reasons limiting this freedom of choice. The main business relations in both chains were described as having a long-term orientation and being based on formal written contracts. It was also typical of the main business relationships that they are not based on the existence of key persons only; the relationship would remain even if the key people left the business. The quality of these relationships was satisfactory in both chains and across all the stakeholder groups, though the downstream processors and the retailers had a slightly more positive view of their main business partners than the farmers and the upstream processors. The businesses operating in the pig meat chain also seemed to be more dependent on their main business relations than the businesses in the rye chain. Although the communication means were rather similar in both chains (the phone being the most important), there was some variation between the chains in the communication frequency necessary to maintain the relationship with the main business partner. In short, the businesses in the pig meat chain seemed to appreciate more frequent communication with their main business partners than the businesses in the rye chain. Personal meetings with the main business partners were quite rare in both chains. All the respondent groups were, however, fairly satisfied with the communication frequency and information quality between them and their main business partner. The business cultures could be argued to be rather hegemonic among the businesses in both the pig meat and rye chains. Avoidance of uncertainty, appreciation of long-term orientation and independence were considered important factors in the business cultures. Furthermore, trust, commitment and satisfaction in business partners were thought to be essential elements of business operations in all the respondent groups. In order to investigate which factors affect the choice of a relationship type, several hypotheses were tested using binary and multinomial logit analyses. According to these analyses, avoidance of uncertainty and risk has a certain effect on the relationship type chosen: the willingness to avoid uncertainty increases the probability of choosing stable relationships, such as repeated market transactions and formal written contracts, but not necessarily those that require high financial commitment (such as financial participation arrangements). The probability of engaging in financial participation arrangements seemed to increase with long-term orientation. The hypotheses concerning the sustainability of the economic relations were tested using a structural equation model (SEM). In the model, five variables were found to have a positive and statistically significant impact on the sustainable economic relationship construct. In order of importance, these factors are: (i) communication quality, (ii) personal bonds, (iii) equal power distribution, (iv) local embeddedness and (v) competition.
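Purely as an illustration of the binary logit step, the sketch below fits a logit of relationship-type choice on an uncertainty-avoidance score with statsmodels; the variable names and simulated data are hypothetical and do not come from the survey.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey extract: 1 = firm uses formal written contracts,
# 0 = spot-market transactions; 'uncertainty' is an uncertainty-avoidance
# score and 'longterm' a long-term-orientation score (both illustrative).
rng = np.random.default_rng(1)
n = 224
df = pd.DataFrame({
    "uncertainty": rng.normal(0, 1, n),
    "longterm": rng.normal(0, 1, n),
})
logit_p = 0.8 * df["uncertainty"] + 0.3 * df["longterm"]
df["contract"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Binary logit of relationship-type choice on the two scores.
X = sm.add_constant(df[["uncertainty", "longterm"]])
result = sm.Logit(df["contract"], X).fit(disp=False)
# A positive 'uncertainty' coefficient would mirror the reported link
# between uncertainty avoidance and stable relationship types.
print(result.summary())
```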
Abstract:
Driven nonequilibrium structural phase transformations have been probed using time-varying resistance fluctuations, or noise. We demonstrate that the non-Gaussian component (NGC) of the noise, obtained by evaluating the higher-order statistics of the fluctuations, serves as a simple kinetic detector of these phase transitions. Using the Martensite transformation in free-standing wires of nickel-titanium binary alloys as a prototype, we observe clear deviations from the Gaussian background in the transformation zone, indicative of long-range correlations in the system as the phase transforms. The viability of non-Gaussian statistics as a robust probe of structural phase transitions was also confirmed by comparison with differential scanning calorimetry measurements. We further studied the response of the NGC to modifications in the microstructure on repeated thermal cycling, as well as to variations in the temperature-drive rate, and explained the results using established simple models based on the different competing time scales. Our experiments (i) suggest an alternative method to estimate the transformation temperature scales with high accuracy and (ii) establish a connection between the material-specific evolution of the microstructure and the statistics of its linear response. Since the method depends on an in-built long-range correlation during transformation, it could be portable to other structural transitions, as well as to materials of different physical origin and size.
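A minimal sketch of how a non-Gaussian component can be tracked from a resistance time series is given below, using sliding-window skewness and excess kurtosis as generic higher-order statistics; the paper's own estimator may differ.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def non_gaussian_trace(resistance, window=1024, step=256):
    """Track deviations from Gaussian statistics in a resistance time series.

    Returns per-window skewness and excess kurtosis of the detrended
    fluctuations; for purely Gaussian noise both stay close to zero, so a
    clear excursion flags the transformation zone.  (Generic estimator;
    the paper's higher-order-statistics analysis may differ.)
    """
    s_vals, k_vals = [], []
    idx = np.arange(window)
    for start in range(0, len(resistance) - window + 1, step):
        seg = resistance[start:start + window]
        fluct = seg - np.polyval(np.polyfit(idx, seg, 1), idx)  # remove slow drift
        s_vals.append(skew(fluct))
        k_vals.append(kurtosis(fluct))  # excess kurtosis (Fisher definition)
    return np.array(s_vals), np.array(k_vals)
```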