862 results for Low Autocorrelation Binary Sequence Problem
Abstract:
Knowledge of the adsorption behavior of coal-bed gases, particularly under supercritical high-pressure conditions, is important for the optimum design of production processes to recover coal-bed methane and to sequester CO2 in coal-beds. Here, we compare the two most rigorous adsorption methods based on the statistical mechanics approach, Density Functional Theory (DFT) and Grand Canonical Monte Carlo (GCMC) simulation, for single components and binary mixtures of methane and carbon dioxide in slit-shaped pores ranging from around 0.75 to 7.5 nm in width, for pressures up to 300 bar and temperatures of 308-348 K, as a preliminary study for the CO2 sequestration problem. For single-component adsorption, the isotherms generated by DFT, especially for CO2, do not match GCMC calculations well, and simulation is therefore pursued here to investigate binary mixture adsorption. For binary adsorption, as pressure increases, the selectivity of carbon dioxide relative to methane initially increases to a maximum value and subsequently drops before attaining a constant value at pressures higher than 300 bar. While the selectivity increases with temperature in the initial pressure-sensitive region, the constant high-pressure value is temperature independent. Optimum selectivity at any temperature is attained at a pressure of 90-100 bar at low bulk mole fractions of CO2, decreasing to approximately 35 bar at high bulk mole fractions. (c) 2005 American Institute of Chemical Engineers.
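For reference, the selectivity quoted in this abstract is conventionally defined from the adsorbed-phase mole fractions x_i and the bulk-gas mole fractions y_i (a standard definition added here for clarity; the paper itself may state it differently):

S_{CO2/CH4} = (x_{CO2} / x_{CH4}) / (y_{CO2} / y_{CH4}),

so S > 1 indicates preferential adsorption of CO2 over methane.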
Abstract:
We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity check (LDPC) codes within the replica symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one-step replica symmetry breaking (RSB) theory, extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel we find, at zero temperature and rate R = 1/4, an RS critical transition point at p_c ≈ 0.67, while the critical RSB transition point is located at p_c = 0.7450 ± 0.0050, to be compared with the corresponding Shannon bound 1 - R. For the binary symmetric channel we show that the low-temperature reentrant behavior of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications for improving the performance of state-of-the-art error-correcting codes are discussed. © 2006 The American Physical Society.
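As a quick sanity check on the numbers quoted above (my own arithmetic, not part of the abstract): for the binary erasure channel the Shannon limit at code rate R is an erasure probability of

1 - R = 1 - 1/4 = 0.75,

so the quoted RSB critical point p_c = 0.7450 ± 0.0050 sits just below the information-theoretic bound, whereas the RS estimate p_c ≈ 0.67 is markedly more pessimistic.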
Abstract:
2000 Mathematics Subject Classification: 34L40, 65L10, 65Z05, 81Q20.
Abstract:
Context: Model atmosphere analyses have been previously undertaken for both Galactic and extragalactic B-type supergiants. By contrast, little attention has been given to a comparison of the properties of single supergiants and those that are members of multiple systems.
Aims: Atmospheric parameters and nitrogen abundances have been estimated for all the B-type supergiants identified in the VLT-FLAMES Tarantula survey. These include both single targets and binary candidates. The results have been analysed to investigate the role of binarity in the evolutionary history of supergiants.
Methods: TLUSTY non-local thermodynamic equilibrium (non-LTE) model atmosphere calculations have been used to determine atmospheric parameters and nitrogen abundances for 34 single and 18 binary supergiants. Effective temperatures were deduced using the silicon balance technique, complemented by the helium ionisation in the hotter spectra. Surface gravities were estimated using Balmer line profiles, and microturbulent velocities were deduced using the silicon spectrum. Nitrogen abundances or upper limits were estimated from the N II spectrum. The effects of a flux contribution from an unseen secondary were considered for the binary sample.
Results: We present the first systematic study of the incidence of binarity for a sample of B-type supergiants across the theoretical terminal-age main sequence (TAMS). To account for the distribution of effective temperatures of the B-type supergiants, it may be necessary to extend the TAMS to lower temperatures. This is also consistent with the derived distribution of mass discrepancies, projected rotational velocities and nitrogen abundances, provided that stars cooler than this temperature are post-red-supergiant objects. For all the supergiants in the Tarantula and in a previous FLAMES survey, the majority have small projected rotational velocities. The distribution peaks at about 50 km s⁻¹, with 65% in the range 30 km s⁻¹ ≤ ve sin i ≤ 60 km s⁻¹. About ten per cent have larger ve sin i (≥100 km s⁻¹), but surprisingly these show little or no nitrogen enhancement. All the cooler supergiants have low projected rotational velocities of ≤70 km s⁻¹ and high nitrogen abundance estimates, implying that either bi-stability braking or evolution on a blue loop may be important. Additionally, there is a lack of cooler binaries, possibly reflecting the small sample sizes. Single-star evolutionary models, which include rotation, can account for all of the nitrogen enhancement in both the single and binary samples. The detailed distribution of nitrogen abundances in the single and binary samples may be different, possibly reflecting differences in their evolutionary history.
Conclusions: The first comparative study of single and binary B-type supergiants has revealed that the main sequence may be significantly wider than previously assumed, extending to Teff = 20 000 K. Some marginal differences in single and binary atmospheric parameters and abundances have been identified, possibly implying non-standard evolution for some of the sample. This sample as a whole has implications for several aspects of our understanding of the evolutionary status of blue supergiants.
Abstract:
Rotation is a key parameter for massive stars, affecting their evolution, chemical yields, ionizing photon budget, and final fate. We determined the projected rotational velocity, ve sin i, of ~330 O-type objects, i.e. ~210 spectroscopic single stars and ~110 primaries in binary systems, in the Tarantula nebula or 30 Doradus (30 Dor) region. The observations were taken using VLT/FLAMES and constitute the largest homogeneous dataset of multi-epoch spectroscopy of O-type stars currently available. The most distinctive feature of the ve sin i distributions of the presumed-single stars and primaries in 30 Dor is a low-velocity peak at around 100 km s⁻¹. Stellar winds are not expected to have spun down the bulk of the stars significantly since their arrival on the main sequence, and therefore the peak in the single-star sample is likely to represent the outcome of the formation process. Whereas the spin distribution of presumed-single stars shows a well-developed tail of stars rotating more rapidly than 300 km s⁻¹, the sample of primaries does not feature such a high-velocity tail. The tail of the presumed-single star distribution is attributed for the most part, and could potentially be completely due, to spun-up binary products that appear as single stars or that have merged. This would be consistent with the lack of such post-interaction products in the binary sample, which is expected to be dominated by pre-interaction systems. The peak in the distribution of the primaries is broader and is shifted toward somewhat higher spin rates compared to the distribution of presumed-single stars. Systems displaying large radial velocity variations, typical of short-period systems, appear mostly responsible for these differences.
Abstract:
Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high-dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though they were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we focus on developing novel versions of nested sampling for low-rank Toeplitz covariance estimation and phase retrieval, where the latter problem finds many applications in high-resolution optical imaging, X-ray crystallography and molecular imaging. The problem of low-rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler, called the Generalized Nested Sampler (GNS), that can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high-dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear-norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value. The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that, with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization-based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
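As a rough illustration of the difference-set idea that the abstract relies on, here is a minimal sketch using a textbook two-level nested sampling pattern; it is not the GNS or PNFS constructions from the thesis itself, and the specific sizes N1 and N2 are arbitrary choices for the example.

import numpy as np

# Two-level nested sampling pattern: a dense block of N1 samples at unit
# spacing followed by N2 samples at spacing (N1 + 1).
N1, N2 = 4, 5
level1 = np.arange(1, N1 + 1)                  # 1, 2, ..., N1
level2 = (N1 + 1) * np.arange(1, N2 + 1)       # (N1+1), 2(N1+1), ..., N2(N1+1)
positions = np.concatenate([level1, level2])   # only N1 + N2 = 9 samples

# The difference set of the sampling positions is what a quadratic
# (correlation) measurement model effectively observes.
diffs = {int(a - b) for a in positions for b in positions}

# For this pattern the differences cover every integer lag up to
# N2*(N1+1) - 1, so the autocorrelation of a WSS signal can be recovered
# on a Nyquist-dense grid from far fewer samples.
max_lag = N2 * (N1 + 1) - 1
print(sorted(d for d in diffs if d >= 0))
print(all(lag in diffs for lag in range(max_lag + 1)))  # True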
Abstract:
Sequence problems are among the most challenging interdisciplinary topics of current research. They are ubiquitous in science and daily life and occur, for example, in the form of DNA sequences encoding all the information of an organism, as text (natural or formal), or in the form of a computer program. Sequence problems therefore occur in many variations in computational biology (drug development), coding theory, data compression, and quantitative and computational linguistics (e.g. machine translation). In recent years, several proposals have appeared to formulate sequence problems such as the closest string problem (CSP) and the farthest string problem (FSP) as integer linear programming problems (ILPP). In the present talk we present a general, novel approach to reduce the size of the ILPP by grouping isomorphous columns of the string matrix together. The approach is of practical use, since the solution of sequence problems is very time-consuming, in particular when the sequences are long.
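As an illustration of the column-grouping idea (a minimal sketch under the assumption that "isomorphous" columns are those with identical symbol patterns, so one block of ILP variables weighted by the column multiplicity can stand in for the whole group; the talk's exact reduction may differ):

from collections import Counter

def group_isomorphous_columns(strings):
    """Group identical columns of the string matrix and count multiplicities.

    For the closest string problem, two columns with the same symbol pattern
    impose the same constraints, so one set of ILP variables (weighted by the
    column count) can represent the whole group, shrinking the ILP.
    """
    n = len(strings[0])
    columns = [tuple(s[j] for s in strings) for j in range(n)]
    return Counter(columns)

# Toy instance: 4 strings of length 8; several columns repeat.
strings = ["ACGTACGA",
           "ACGAACGA",
           "TCGTACGT",
           "ACGTTCGA"]
groups = group_isomorphous_columns(strings)
for column, multiplicity in groups.items():
    print("".join(column), "x", multiplicity)
# The ILP then needs one block of variables per distinct column
# (len(groups) = 5 blocks here) instead of one per position (8 blocks).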
Abstract:
This report focuses on risk-assessment practices in the private rental market, with particular consideration of their impact on low-income renters. It is based on the fieldwork undertaken in the second stage of the research process that followed completion of the Positioning Paper. The key research questions this study addressed were: What are the various factors included in 'risk-assessments' by real estate agents in allocating 'affordable' tenancies? How are these risks quantified and managed? What are the key outcomes of their decision-making? The study builds on previous research demonstrating that a relatively large proportion of low-cost private rental accommodation is occupied by moderate- to high-income households (Wulff and Yates 2001; Seelig 2001; Yates et al. 2004). This is occurring in an environment where the private rental sector is now the de facto main provider of rental housing for lower-income households across Australia (Seelig et al. 2005) and where a number of factors are implicated in patterns of 'income-rent mismatching'. These include ongoing shifts in public housing assistance; issues concerning eligibility for rent assistance; 'supply' factors, such as loss of low-cost rental stock through upgrading and/or transfer to owner-occupied housing; patterns of supply and demand driven largely by middle- to high-income owner-investors and renters; and patterns of housing need among low-income households for whom affordable housing is not appropriate. In formulating a way of approaching the analysis of 'risk-assessment' in rental housing management, this study has applied three sociological perspectives on risk: Beck's (1992) formulation of risk society as entailing processes of 'individualisation'; a socio-cultural perspective which emphasises the situated nature of perceptions of risk; and a perspective which has drawn attention to different modes of institutional governance of subjects, as 'carriers of specific indicators of risk'. The private rental market was viewed as a social institution, and the research strategy was informed by 'institutional ethnography' as a method of enquiry. The study was based on interviews with property managers, real estate industry representatives, tenant advocates and community housing providers. The primary focus of inquiry was on 'the moment of allocation'. Six local areas across metropolitan and regional Queensland, New South Wales, and South Australia were selected as case study localities. In terms of the main findings, it is evident that access to private rental housing is not just a matter of 'supply and demand'. It is also about assessment of risk among applicants. Risk, perceived or actual, is thus a critical factor in deciding who gets housed, and how. Risk and its assessment matter in the context of housing provision and in the development of policy responses. The outcomes from this study also highlight a number of salient points:
1. There are two principal forms of risk associated with property management: financial risk and risk of litigation.
2. Certain tenant characteristics and/or circumstances, namely ability to pay and ability to care for the rented property, are the main factors focused on in assessing risk among applicants for rental housing. Signals of either '(in)ability to pay' and/or '(in)ability to care for the property' are almost always interpreted as markers of high levels of risk.
3. The processing of tenancy applications entails a complex and variable mix of formal and informal strategies of risk-assessment and allocation, where sorting (out), ranking, discriminating and handing over characterise the process.
4. In the eyes of property managers, 'suitable' tenants can be conceptualised as those who are resourceful, reputable, competent, strategic and presentable.
5. Property managers clearly articulated concern about risks entailed in a number of characteristics or situations. Being on a low income was the principal and overarching factor which agents considered. Others included:
- unemployment
- 'big' families; sole parent families
- domestic violence
- marital breakdown
- shift from home ownership to private rental
- Aboriginality and specific ethnicities
- physical incapacity
- aspects of 'presentation'.
The financial vulnerability of applicants in these groups can be invoked, alongside expressed concerns about compromised capacities to manage income and/or 'care for' the property, as legitimate grounds for rejection or a lower ranking.
6. At the level of face-to-face interaction between the property manager and applicants, more intuitive assessments of risk based upon past experience or 'gut feelings' come into play. These judgements are interwoven with more systematic procedures of tenant selection.
The findings suggest that considerable 'risk' is associated with low-income status, either directly or insofar as it is associated with other forms of perceived risk, and that such risks are likely to impede access to the professionally managed private rental market. Detailed analysis suggests that opportunities for access to housing by low-income householders also arise where, for example:
- the 'local experience' of an agency and/or property manager works in favour of particular applicants
- applicants can demonstrate available social support and financial guarantors
- an applicant's preference or need for longer-term rental is seen to provide a level of financial security for the landlord
- applicants are prepared to agree to specific, more stringent conditions for inspection of properties and review of contracts
- the particular circumstances and motivations of landlords lead them to consider a wider range of applicants
- in particular circumstances, property managers are prepared to give special consideration to applicants who appear worthy, albeit 'risky'.
The strategic actions of demonstrating and documenting on the part of vulnerable (low-income) tenant applicants can improve their chances of being perceived as resourceful, capable and 'savvy'. Such actions are significant because they help to persuade property managers not only that the applicant may have sufficient resources (personal and material) but that they accept that the onus is on themselves to show they are reputable, and that they have valued 'competencies' and understand 'how the system works'. The parameters of the market do shape the processes of risk-assessment and, ultimately, the strategic relation of power between property manager and the tenant applicant. Low vacancy rates and limited supply of lower-cost rental stock, in all areas, mean that there are many more tenant applicants than available properties, creating a highly competitive environment for applicants. The fundamental problem of supply is an aspect of the market that severely limits the chances of access to appropriate and affordable housing for low-income rental housing applicants.
There is recognition of the impact of this problem of supply. The study indicates three main directions for future focus in policy and program development: providing appropriate supports to tenants to access and sustain private rental housing; addressing issues of discrimination and privacy arising in the processes of selecting suitable tenants; and addressing problems of supply.
Abstract:
The provision of accessible and cost-effective treatment to a large number of problem drinkers is a significant challenge to health services. Previous data suggest that a correspondence intervention may assist in these efforts. We recruited 277 people with alcohol abuse problems and randomly allocated them to immediate cognitive behavioral treatment by correspondence (ICBT), 2 months on a waiting list (WL2-CBT), self-monitoring (SM2-CBT), or extended self-monitoring (SM6-CBT). Everyone received correspondence CBT after the control period. Over the first 2 months, no drop in alcohol intake occurred in the waiting-list condition, and CBT had a greater impact than SM. No further gains from SM were seen after 2 months. Effects of CBT were well maintained and were equivalent whether it was received immediately or after 2 to 6 months of self-monitoring. Weekly alcohol intake fell 48% from pretreatment, to 18.6 alcohol units at 12 months. Our results confirmed that correspondence CBT for alcohol abuse was accessible and effective for people with low physical dependence.
Abstract:
The purpose of this paper is to determine the prevalence of the toxic shock syndrome toxin gene (tst) and to enumerate the circulating strains of methicillin-sensitive Staphylococcus aureus (MSSA) and methicillin-resistant S. aureus (MRSA) in Australian isolates collected over two decades. The aim was to subtype these strains using the binary genes pvl, cna, sdrE, pUB110 and pT181. Isolates were assayed using real-time polymerase chain reaction (PCR) for mecA, nuc, 16S rRNA, eight single-nucleotide polymorphisms (SNPs) and for the five binary genes. Two real-time PCR assays were developed for tst. The 90 MRSA isolates belonged to CC239 (39 in 1989, 38 in 1996 and ten in 2003), CC1 (two in 2003) and CC22 (one in 2003). The majority of the 210 MSSA isolates belonged to CC1 (26), CC5 (24) and CC78 (23). Only 18 isolates were tst-positive and only 15 were pvl-positive. Nine MSSA isolates belonged to five binary types of ST93, including two pvl-positive types. The proportion of tst-positive and pvl-positive isolates was low and no significant increase was demonstrated. Dominant MSSA clonal complexes were similar to those seen elsewhere, with the exception of CC78. CC239 MRSA (AUS-2/3) was the predominant MRSA but decreased significantly in prevalence, while CC22 (EMRSA-15) and CC1 (WA-1) emerged. Genetically diverse ST93 MSSA predated the emergence of ST93-MRSA (the Queensland clone).
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from the pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded. These may represent a zero response given some threshold (presence), or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. Difficulty with estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
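To make the underlying spatial model concrete, here is a minimal sketch of one Gibbs sweep for a three-parameter autologistic binary MRF on a lattice. The intercept-plus-row/column-coefficient parameterisation and the parameter values are illustrative assumptions of mine; the thesis's exact parameterisation and its hybrid Metropolis/Gibbs scheme may differ.

import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(z, beta0, beta_row, beta_col):
    """One in-place Gibbs sweep over a binary (0/1) lattice z.

    Full conditional for each site: logit P(z_ij = 1 | rest) =
    beta0 + beta_row * (sum of same-row neighbours) + beta_col * (sum of same-column neighbours).
    """
    nrow, ncol = z.shape
    for i in range(nrow):
        for j in range(ncol):
            row_sum = (z[i, j - 1] if j > 0 else 0) + (z[i, j + 1] if j < ncol - 1 else 0)
            col_sum = (z[i - 1, j] if i > 0 else 0) + (z[i + 1, j] if i < nrow - 1 else 0)
            eta = beta0 + beta_row * row_sum + beta_col * col_sum
            p = 1.0 / (1.0 + np.exp(-eta))
            z[i, j] = rng.random() < p
    return z

# Draw approximate samples from the spatial process by iterating sweeps.
z = rng.integers(0, 2, size=(20, 20))
for _ in range(200):
    gibbs_sweep(z, beta0=-0.5, beta_row=0.4, beta_col=0.4)
print(z.mean())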
Abstract:
The Sascha-Pelligrini low-sulphidation epithermal system is located on the western edge of the Deseado Massif, Santa Cruz Province, Argentina. Outcrop sampling has returned values of up to 160 g/t gold and 796 g/t silver, with Mirasol Resources and Coeur d'Alene Mines currently exploring the property. Detailed mapping of the volcanic stratigraphy has defined three units that comprise the middle Jurassic Chon Aike Formation and two units that comprise the upper Jurassic La Matilde Formation. The Chon Aike Formation consists of rhyodacite ignimbrites and tuffs, with the La Matilde Formation including rhyolite ash and lithic tuffs. The volcanic sequence is intruded by a large flow-banded rhyolite dome, with small, spatially restricted granodiorite dykes and sills cropping out across the study area. ASTER multispectral mineral mapping, combined with PIMA (Portable Infrared Mineral Analyser) and XRD (X-ray diffraction) analysis, defines an alteration pattern that zones from laumontite-montmorillonite, to illite-pyrite-chlorite, followed by a quartz-illite-smectite-pyrite-adularia vein selvage. Supergene kaolinite and steam-heated acid-sulphate kaolinite-alunite-opal alteration horizons crop out along the Sascha Vein trend and Pelligrini respectively. Paragenetically, epithermal veining varies from chalcedonic to saccharoidal with minor bladed textures, colloform/crustiform-banded with visible electrum and acanthite, crustiform-banded grey chalcedonic to jasperoidal with fine pyrite, and crystalline comb quartz. Geothermometry of mineralised veins constrains formation temperatures from 174.8 to 205.1 °C and correlates with the stability field for the interstratified illite-smectite vein selvage. Vein morphology, mineralogy and associated alteration are controlled by host rock rheology, permeability, and depth of the palaeo-water table. Mineralisation within ginguro banded veins resulted from fluctuating fluid pH associated with selenide-rich magmatic pulses, pressure release boiling and wall-rock silicate buffering. The study of the Sascha-Pelligrini epithermal system will form the basis for a deposit-specific model helping to clarify the current understanding of epithermal deposits, and may serve as a template for exploration of similar epithermal deposits throughout Santa Cruz.
Abstract:
For the first time in human history, large volumes of spoken audio are being broadcast, made available on the internet, archived, and monitored for surveillance every day. New technologies are urgently required to unlock these vast and powerful stores of information. Spoken Term Detection (STD) systems provide access to speech collections by detecting individual occurrences of specified search terms. The aim of this work is to develop improved STD solutions based on phonetic indexing. In particular, this work aims to develop phonetic STD systems for applications that require open-vocabulary search, fast indexing and search speeds, and accurate term detection. Within this scope, novel contributions are made within two research themes: firstly, accommodating phone recognition errors and, secondly, modelling uncertainty with probabilistic scores. A state-of-the-art Dynamic Match Lattice Spotting (DMLS) system is used to address the problem of accommodating phone recognition errors with approximate phone sequence matching. Extensive experimentation on the use of DMLS is carried out, and a number of novel enhancements are developed that provide for faster indexing, faster search, and improved accuracy. Firstly, a novel comparison of methods for deriving a phone error cost model is presented to improve STD accuracy, resulting in up to a 33% improvement in the Figure of Merit. A method is also presented for drastically increasing the speed of DMLS search, by at least an order of magnitude, with no loss in search accuracy. An investigation is then presented of the effects of increasing indexing speed for DMLS by using simpler modelling during phone decoding, with results highlighting the trade-off between indexing speed, search speed and search accuracy. The Figure of Merit is further improved by up to 25% using a novel proposal to utilise word-level language modelling during DMLS indexing. Analysis shows that this use of language modelling can, however, be unhelpful or even disadvantageous for terms with a very low language model probability. The DMLS approach to STD involves generating an index of phone sequences using phone recognition. An alternative approach to phonetic STD is also investigated that instead indexes probabilistic acoustic scores in the form of a posterior-feature matrix. A state-of-the-art system is described and its use for STD is explored through several experiments on spontaneous conversational telephone speech. A novel technique and framework are proposed for discriminatively training such a system to directly maximise the Figure of Merit. This results in a 13% improvement in the Figure of Merit on held-out data. The framework is also found to be particularly useful for index compression in conjunction with the proposed optimisation technique, providing a substantial index compression factor in addition to an overall gain in the Figure of Merit. These contributions significantly advance the state-of-the-art in phonetic STD by improving the utility of such systems in a wide range of applications.
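To give a feel for the kind of approximate phone sequence matching described above, here is a generic weighted-edit-distance sketch with an illustrative substitution cost table; it is not the DMLS lattice algorithm or the cost model actually used in this work, and the phone labels and costs are made up for the example.

def match_cost(query, sequence, sub_cost, ins_cost=1.0, del_cost=1.0):
    """Weighted edit distance between a query phone string and an indexed
    phone sequence; lower cost means a more plausible (mis)recognition."""
    m, n = len(query), len(sequence)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + del_cost
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if query[i - 1] == sequence[j - 1] else sub_cost.get(
                (query[i - 1], sequence[j - 1]), 1.0)
            d[i][j] = min(d[i - 1][j] + del_cost,
                          d[i][j - 1] + ins_cost,
                          d[i - 1][j - 1] + sub)
    return d[m][n]

# Confusable phone pairs are cheaper to substitute than dissimilar ones.
sub_cost = {("p", "b"): 0.3, ("s", "z"): 0.3, ("ih", "iy"): 0.4}
print(match_cost(["p", "ih", "t"], ["b", "iy", "t"], sub_cost))  # 0.7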
Abstract:
Introduction: There are many low-intensity (LI) cognitive behavioural therapy (CBT) solutions to the problem of limited service access. In this chapter, we aim to discuss a relatively low-technology approach to access using standard postal services: CBT by mail, or M-CBT. Bibliotherapies, including M-CBT, teach key concepts and self-management techniques, together with screening tools and forms to structure home practice. M-CBT differs from other bibliotherapies by segmenting interventions and mailing them at regular intervals. Most involve participants returning copies of monitoring forms or completed handouts. Therapist feedback is provided, often in personal letters that accompany the printed materials. Participants may also be given access to telephone or email support. M-CBT clearly fulfills criteria for an LI CBT (see Bennett-Levy et al., Chapter 1, for a definition of LI interventions). Once written, such interventions involve little therapist time and rely heavily on self-management. However, content and overall treatment duration need not be compromised. Long-term interventions with multiple components can be delivered via this method, provided their content can be communicated in letters and engagement is maintained.