933 results for Low Autocorrelation Binary Sequence Problem
Abstract:
Context. It appears that most (if not all) massive stars are born in multiple systems. At the same time, the most massive binaries are hard to find owing to their low numbers throughout the Galaxy and the implied large distances and extinctions. Aims. We want to study LS III +46 11, identified in this paper as a very massive binary; another nearby massive system, LS III +46 12; and the surrounding stellar cluster, Berkeley 90. Methods. Most of the data used in this paper are multi-epoch, high-S/N optical spectra, though we also use Lucky Imaging and archival photometry. The spectra are reduced with dedicated pipelines and processed with our own software, such as a spectroscopic-orbit code, CHORIZOS, and MGB. Results. LS III +46 11 is identified as a new very early O-type spectroscopic binary [O3.5 If* + O3.5 If*] and LS III +46 12 as another early O-type system [O4.5 V((f))]. We measure a 97.2-day period for LS III +46 11 and derive minimum masses of 38.80 ± 0.83 M⊙ and 35.60 ± 0.77 M⊙ for its two stars. We measure the extinction to both stars, estimate the distance, search for optical companions, and study the surrounding cluster. In doing so, we find variable extinction as well as discrepant results for the distance. We discuss possible explanations and suggest that LS III +46 12 may be a hidden binary system whose companion is currently undetected.
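For reference, the minimum masses quoted for a double-lined spectroscopic binary such as LS III +46 11 follow from the standard SB2 relation below, where P is the period, e the eccentricity and K₁, K₂ the radial-velocity semi-amplitudes; this is the textbook formula, not necessarily the exact form implemented in the authors' spectroscopic-orbit code.

```latex
% Minimum (sin^3 i) masses of a double-lined spectroscopic binary:
\[
  m_{1,2}\,\sin^3 i \;=\; \frac{P\,\bigl(1-e^2\bigr)^{3/2}}{2\pi G}\,
                          \bigl(K_1 + K_2\bigr)^2 K_{2,1}
\]
```

With the measured 97.2-day period, the quoted 38.80 ± 0.83 M⊙ and 35.60 ± 0.77 M⊙ are therefore lower limits, set by the unknown inclination factor sin³ i.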
Abstract:
The Iterative Closest Point algorithm (ICP) is commonly used in engineering applications to solve the rigid registration problem of partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine, for volumetric reconstruction of tomography data; robotics, to reconstruct surfaces or scenes from range-sensor information; industrial systems, for quality control of manufactured objects; and even biology, to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or the required iterations, or by lowering the complexity of the most expensive phase: the closest-neighbor search. Although they decrease the complexity, some of these variants have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean one, which is the de facto standard in implementations of the algorithm in the literature. In that analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to have a significant positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose convergence and error have been experimentally validated as comparable to those of the Euclidean distance on a heterogeneous set of objects, scenarios and initial situations.
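As a concrete illustration of swapping the correspondence metric, the sketch below implements a minimal point-to-point ICP in which the closest-neighbor search uses a Minkowski metric of order p; p = 1 (Manhattan) and p = ∞ (Chebyshev) are cheaper per distance evaluation than the Euclidean p = 2. This is a generic sketch under those assumptions, not the thesis' actual implementation or its specific reduced-cost metrics.

```python
# Minimal point-to-point ICP with a swappable Minkowski correspondence
# metric; a generic sketch, not the thesis' optimized implementation.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # repair an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, p=2, iters=50, tol=1e-8):
    """Align src to dst; p selects the metric of the closest-point search
    (p=1 Manhattan, p=2 Euclidean, p=np.inf Chebyshev)."""
    tree = cKDTree(dst)
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        d, idx = tree.query(cur, p=p)          # correspondence search
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t                    # apply the rigid update
        if abs(prev_err - d.mean()) < tol:     # stop when the error settles
            break
        prev_err = d.mean()
    return cur
```

Note that the alignment update itself stays least-squares here; only the correspondence search changes metric, which is where the bulk of the distance computations occur.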
Abstract:
Context. The eclipsing binary GU Mon is located in the star-forming cluster Dolidze 25, which has the lowest metallicity measured in a Milky Way young cluster. Aims. GU Mon has been identified as a short-period eclipsing binary with two early B-type components. We set out to derive its orbital and stellar parameters. Methods. We present a comprehensive analysis, including B and V light curves and 11 high-resolution spectra, to verify the orbital period and determine the system's parameters. We used the stellar atmosphere code FASTWIND to obtain stellar parameters and create templates for cross-correlation. We fitted the light and radial-velocity curves iteratively and simultaneously with the Wilson-Devinney code. Results. The two components of GU Mon are identical stars of spectral type B1 V with the same mass and temperature. The light curves are typical of an EW-type binary. The spectroscopic and photometric analyses agree on a period of 0.896640 ± 0.000007 d. We determine a mass of 9.0 ± 0.6 M⊙ and a temperature of 28 000 ± 2000 K for each component, both values consistent with the spectral type. The two stars are overfilling their respective Roche lobes and share a common envelope; the orbit is therefore synchronised and circularised. Conclusions. The GU Mon system has a fill-out factor above 0.8 and contains two dwarf B-type stars on the main sequence. The two stars are in a very advanced stage of interaction, with their extreme physical similarity likely due to the common envelope. The expected evolution of such a system very probably leads to a merger while still on the main sequence.
Abstract:
The issue: The European Union's pre-crisis growth performance was disappointing enough, but performance has been even more dismal since the onset of the crisis. Weak growth is undermining private and public deleveraging, and is fuelling continued banking fragility. Persistently high unemployment is eroding skills, discouraging labour market participation and undermining the EU's long-term growth potential. Low overall growth is making it much tougher for the hard-hit economies in southern Europe to recover competitiveness and regain control of their public finances. Stagnation would reduce the attractiveness of Europe for investment. Under these conditions, Europe's social models are bound to prove unsustainable. Policy Challenge: The European Union's weak long-term growth potential and unsatisfactory recovery from the crisis represent a major policy challenge. Over and above the structural reform agenda, which is vitally important, bold policy action is needed. The priority is to get bank credit going. Banking problems need to be assessed properly, and bank resolution and recapitalisation should be pursued. Second, fostering the reallocation of factors to the most productive firms and to the sectors that contribute to aggregate rebalancing is vital. Addressing intra-euro-area competitiveness divergence is essential to support growth in southern Europe. Third, the speed of fiscal adjustment needs to be appropriate, EU funds should be front-loaded to countries in deep recession, and the European Investment Bank should increase investment.
Abstract:
There is general consensus that to achieve employment growth, especially for vulnerable groups, it is not sufficient simply to kick-start economic growth: skills among both the high- and low-skilled population need to be improved. In particular, we argue that if the lack of graduates in science, technology, engineering and mathematics (STEM) is a true problem, it needs to be tackled via incentives and not simply via public campaigns: students are not enrolling in 'hard-science' subjects because the opportunity cost is very high. As far as the low-skilled population is concerned, we encourage EU and national policy-makers to invest in a more comprehensive view of this phenomenon. The 'low-skilled' label can hide a number of different scenarios: labour market detachment, migration, and obsolete skills resulting from macroeconomic structural changes. For this reason, lifelong learning is necessary to keep up with new technology and to shield workers from the risk of skills obsolescence and detachment from the labour market.
Abstract:
The geochemistry of an argillaceous rock sequence from a deep borehole in NE Switzerland was investigated. The focus was to constrain the porewater chemistry in low-permeability Jurassic rocks comprising the Liassic, the Opalinus Clay formation, the 'Brown Dogger' unit and the Effingen Member (Malm). A multi-method approach including mineralogical analysis, aqueous and Ni-ethylenediamine extraction, squeezing tests and pCO₂ measurements, as well as geochemical modelling, was applied for this purpose. A consistent dataset was obtained with regard to the main solutes in the porewaters. A fairly constant anion-accessible porosity of ∼50% of the total porosity was deduced for all analysed samples, which displayed variable clay-mineral contents. Sulphate concentrations were shown to be constrained by a sulphate-bearing phase, presumably celestite or a Sr-Ba sulphate. Application of a simple equilibrium model, including cation exchange reactions and calcite and celestite equilibria, showed good agreement with the squeezing data, indicating the suitability of the modelling approach for simulating porewater chemistry in the studied argillaceous rocks. The modelling highlighted the importance of correctly determining the exchangeable cation population. The analysis corroborates that squeezing of the studied rocks is a viable and efficient way to sample porewater.
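Schematically, and only as an illustration of why the anion-accessible fraction matters (the paper's modelling is more elaborate), back-calculating a conservative anion such as Cl⁻ from an aqueous-extract inventory to a porewater concentration divides by the accessible share of the water content; here m_Cl is the extracted Cl per unit dry rock mass, w the gravimetric water content and f_an ≈ 0.5 the anion-accessible fraction deduced above.

```latex
% Illustrative back-calculation of porewater chloride (an assumption-laden
% sketch, not the paper's full geochemical model):
\[
  c_{\mathrm{Cl,pw}} \;=\; \frac{m_{\mathrm{Cl}}}{w \cdot f_{\mathrm{an}}}
\]
```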
Abstract:
Investigation of the Middle Miocene-Pleistocene succession in cores at ODP Site 817A (Leg 133), drilled on the slope south of the Queensland Plateau, identified the various material fluxes contributing to sedimentation and thereby determined the paleogeographic events that occurred close to the studied area and influenced these fluxes. To determine the proportions of carbonate mud of platform origin and of plankton origin, two reference sediments were collected: (1) back-reef carbonate mud from the Young Reef area (Great Barrier Reef); and (2) Late Miocene chalk from the Loyalty Basin, off New Caledonia. Through their biofacies and their mineralogical and geochemical characters, these reference sediments were used to distinguish the proportions of platform and basin components in the carbonate muds of 25 core samples from Hole 817A. Two "origin indexes" (i1 and i2) relate the proportions of platform and basin materials. The relative sedimentation rate is inferred from high-frequency cycles defined by redox intervals in the cores. The bulk carbonate deposited in each core has been calculated in two ways, with closely agreeing results: (1) from calcimetric data available in the Leg 133 preliminary reports (Davies et al., 1991); and (2) from the average magnetic susceptibility of the cores, a value negatively correlated with the average carbonate content. Vertical changes in sedimentation rates, carbonate content, origin indexes and "linear fluxes" document the evolution of sediment origins (platform carbonates, planktonic carbonates and insoluble material) through time. These data are augmented with the variations in organic-matter content through the 817A succession. The observed changes and their interpretation are not modified by compaction, and are compatible with major paleogeographic events, including the drowning of the Queensland Plateau (Middle Miocene-Early Pliocene) and the renewal of shallow carbonate production (1) during the Late Pliocene and (2) from the Early Pleistocene. The birth and growth of the Great Barrier Reef are also recorded, from 0.5 Ma, by a strengthening of detrital carbonate deposition and possibly by a lack of clay minerals in the four upper cores, a response to the trapping of terrigenous material behind this barrier. In addition, a maximum of biogenic silica production is displayed in the Middle Miocene. These changes constrain the timing of events and the sequence-stratigraphy framework, components of which are the transgression surface, the maximum flooding surface and lowstand turbidites. Sedimentation rates and material fluxes show cycles lasting 1.75 Myr. Whatever their origin (climatic and/or eustatic), these cycles affected planktonic production primarily. The changes also show that the major carbonate variations in the deposits are due to a dilution effect by insoluble material (clay, biogenic silica and volcanic glasses) and that plankton productivity, which controls the major fraction of carbonate sedimentation, depends principally on terrigenous supplies but also on deep-water upwelling. The accuracy of the method is reduced by redeposition, reworking and the probable occurrence of hiatuses.
Abstract:
This study evaluated the effectiveness of the Problem Solving For Life program as a universal approach to the prevention of adolescent depression. Short-term results indicated that participants with initially elevated depression scores (high risk) who received the intervention showed a significantly greater decrease in depressive symptoms and a greater increase in problem-solving scores from pre- to post-intervention compared with a high-risk control group. Low-risk participants who received the intervention reported a small but significant decrease in depression scores over the intervention period, whereas the low-risk controls reported an increase in depression scores. The low-risk intervention group also reported a significantly greater increase in problem-solving scores over the intervention period compared with low-risk controls. These results, however, were not maintained at 12-month follow-up.
Abstract:
Aim: The aim of this study was to characterize the bacterial community adhering to the mucosa of the terminal ileum and the proximal and distal colon of the human digestive tract. Methods and Results: Pinch samples of the terminal ileum, proximal colon and distal colon were taken from a healthy 35-year-old subject and from a 68-year-old subject with mild diverticulosis. The 16S rDNA genes were amplified using a low number of PCR cycles, cloned, and sequenced. In total, 361 sequences were obtained, comprising 70 operational taxonomic units (OTU), with a calculated coverage of 82.6%. Twenty-three per cent of the OTU were common to the terminal ileum, proximal colon and distal colon, but 14% were found only in the terminal ileum and 43% only in the proximal or distal colon. The most frequently represented clones were from Clostridium group XIVa (24.7%) and the Bacteroidetes (Cytophaga-Flavobacteria-Bacteroides) cluster (27.7%). Conclusion: Comparison of 16S rDNA clone libraries of the hindgut across mammalian species confirms that the distribution of phylogenetic groups is similar irrespective of the host species. Lesser site-related differences within groups or clusters of organisms are probable. Significance and Impact: This study provides further evidence of the distribution of bacteria on the mucosal surfaces of the human hindgut. The data contribute to the benchmarking of the microbial composition of the human digestive tract.
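Coverage of clone libraries of this kind is conventionally computed with Good's estimator from the number of singleton OTU, n₁, and the library size, N; assuming that is the statistic behind the reported 82.6% (the abstract does not name it), the arithmetic is:

```latex
% Good's coverage estimator for a clone library:
\[
  C \;=\; 1 - \frac{n_1}{N}
\]
% With N = 361 sequences, C = 82.6% would correspond to roughly
% n_1 ≈ 63 OTU observed exactly once.
```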
Abstract:
Let $e_1, e_2, \ldots, e_n$ be a sequence of nonnegative integers such that the first non-zero term is not one. Let $\sum_{i=1}^{n} e_i = (q-1)/2$, where $q = p^n$ and $p$ is an odd prime. We prove that the complete graph on $q$ vertices can be decomposed into $e_1$ $C_{p^n}$-factors, $e_2$ $C_{p^{n-1}}$-factors, ..., and $e_n$ $C_p$-factors.
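A small illustrative instance of the statement (our example, not one from the paper): take $p = 3$ and $n = 2$, so $q = 9$ and $(q-1)/2 = 4$; choosing $(e_1, e_2) = (2, 2)$ satisfies the hypothesis, since the first non-zero term is 2, not 1.

```latex
% K_9 has 36 edges and each 2-factor uses 9 of them, so exactly four
% 2-factors are needed; the theorem provides the decomposition
\[
  K_9 \;\longrightarrow\;
  2 \text{ } C_9\text{-factors (Hamiltonian cycles)}
  \;+\; 2 \text{ } C_3\text{-factors (triangle factors)},
  \qquad e_1 + e_2 = \tfrac{9-1}{2} = 4 .
\]
```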
Abstract:
Urban encroachment on dense coastal koala populations has ensured that their management receives increasing government and public attention. The recently developed National Koala Conservation Strategy calls for the maintenance of viable populations in the wild. Yet the success of this and other conservation initiatives is hampered by the lack of reliable, generally accepted national and regional population estimates. In this paper we address this problem for a potentially large, but poorly studied, regional population in the State that is likely to have the largest wild populations. We draw on findings from previous reports in this series and apply the faecal standing-crop method (FSCM) to derive a regional estimate of more than 59 000 individuals. Validation trials in riverine communities showed that estimates of animal density obtained from the FSCM and from direct observation were in close agreement. Bootstrapping and Monte Carlo simulations were used to obtain variance estimates for our population estimates in different vegetation associations across the region. The most favoured habitat was riverine vegetation, which covered only 0.9% of the region but supported 45% of the koalas. We also estimated that between 1969 and 1995 ∼30% of the native vegetation associations considered potential koala habitat were cleared, leading to a decline of perhaps 10% in koala numbers. Management of this large regional population has significant implications for the national conservation of the species: its continued viability is critically dependent on the retention and management of riverine and residual vegetation communities, and future vegetation-management guidelines should be cognisant of the potential impacts of clearing even small areas of critical habitat. We also highlight eight management implications.
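The variance-estimation step mentioned above (bootstrapping the density estimates) is generic enough to sketch; the code below shows the idea on hypothetical pellet-count data, with placeholder plot sizes, decay time and defecation rate rather than the authors' calibrated FSCM parameters.

```python
# Bootstrap variance of a faecal-standing-crop density estimate.
# All rates and survey numbers are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)
pellets = rng.poisson(lam=3.0, size=60)     # pellet counts on 60 sample plots
PLOT_AREA_HA = 0.01                         # each plot is 100 m^2
DECAY_DAYS = 100.0                          # assumed pellet persistence time
PELLETS_PER_DAY = 150.0                     # assumed defecation rate per koala

def density(counts):
    """Koalas per hectare implied by the mean pellet standing crop."""
    standing_crop = counts.mean() / PLOT_AREA_HA       # pellets per hectare
    return standing_crop / (DECAY_DAYS * PELLETS_PER_DAY)

# Resample plots with replacement and recompute the density each time.
boot = np.array([
    density(rng.choice(pellets, size=pellets.size, replace=True))
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"density ≈ {density(pellets):.4f}/ha, 95% CI [{lo:.4f}, {hi:.4f}]")
```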
Abstract:
Motivation: Targeting peptides direct nascent proteins to their specific subcellular compartment. Knowledge of targeting signals enables informed drug design and reliable annotation of gene products. However, due to the low similarity of such sequences and the dynamic nature of the sorting process, the computational prediction of the subcellular localization of proteins is challenging. Results: We contrast the use of feed-forward models, as employed by the popular TargetP/SignalP predictors, with a sequence-biased recurrent network model. The models are evaluated in terms of performance at both the residue level and the sequence level, and the results demonstrate that recurrent networks improve overall prediction performance. Compared to the original results reported for TargetP, an ensemble of the tested models increases the accuracy by 6 and 5% on non-plant and plant data, respectively.
Abstract:
Selection of machine learning techniques requires a certain sensitivity to the requirements of the problem. In particular, a problem can be made more tractable by deliberately using algorithms that are biased toward solutions of the requisite kind. In this paper, we argue that recurrent neural networks have a natural bias toward a problem domain of which biological sequence analysis tasks are a subset. We use experiments with synthetic data to illustrate this bias. We then demonstrate that the bias can be exploited using a data set of protein sequences containing several classes of subcellular localization targeting peptides. The results show that, compared with feed-forward networks, recurrent neural networks generally perform better on sequence analysis tasks. Furthermore, as the patterns within a sequence become more ambiguous, the choice of specific recurrent architecture becomes more critical.
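The architectural contrast at issue can be sketched in a few lines: a feed-forward model consumes a fixed window all at once, while a recurrent model carries state along the sequence, which is the bias argued to suit sequence analysis. The sizes, layers and random inputs below are illustrative only, not the paper's models or the TargetP/SignalP architectures.

```python
# Feed-forward vs. recurrent classifier over one-hot-encoded sequences;
# illustrative shapes only, not the paper's actual models.
import torch
import torch.nn as nn

SEQ_LEN, ALPHABET, HIDDEN, CLASSES = 100, 20, 32, 4   # e.g. 20 amino acids

feed_forward = nn.Sequential(       # sees the whole window as one flat vector
    nn.Flatten(),                   # (batch, SEQ_LEN * ALPHABET)
    nn.Linear(SEQ_LEN * ALPHABET, HIDDEN),
    nn.ReLU(),
    nn.Linear(HIDDEN, CLASSES),
)

class RecurrentNet(nn.Module):      # carries hidden state along the residues
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(ALPHABET, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, CLASSES)

    def forward(self, x):           # x: (batch, SEQ_LEN, ALPHABET)
        _, (h, _) = self.rnn(x)
        return self.out(h[-1])      # classify from the final hidden state

x = torch.randn(8, SEQ_LEN, ALPHABET)   # stand-in for one-hot residues
print(feed_forward(x).shape, RecurrentNet()(x).shape)   # both (8, CLASSES)
```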
Abstract:
With the rapid increase in both centralized video archives and distributed WWW video resources, content-based video retrieval is gaining importance. To support such applications efficiently, content-based video indexing must be addressed. Typically, each video is represented by a sequence of frames. Due to the high dimensionality of the frame representation and the large number of frames, video indexing introduces an additional degree of complexity. In this paper, we address the problem of content-based video indexing and propose an efficient solution, called the Ordered VA-File (OVA-File), based on the VA-file. The OVA-File is a hierarchical structure with two novel features: 1) the whole file is partitioned into slices such that only a small number of slices need to be accessed and checked during k-nearest-neighbor (kNN) search, and 2) insertions of new vectors into the OVA-File are handled efficiently, such that the average distance between the new vectors and the approximations near that position is minimized. To facilitate search, we present an efficient approximate kNN algorithm named Ordered VA-LOW (OVA-LOW), based on the proposed OVA-File. OVA-LOW first chooses candidate OVA-Slices by ranking the distances between their centers and the query vector, and then visits all approximations in the selected OVA-Slices to compute the approximate kNN. The number of candidate OVA-Slices is controlled by a user-defined parameter delta; by adjusting delta, OVA-LOW trades off query cost against result quality. Query by video clip, consisting of multiple frames, is also discussed. Extensive experimental studies using real video data sets showed that our methods yield a significant speed-up over an existing VA-File-based method and iDistance, with high query result quality. Furthermore, by incorporating the temporal correlation of video content, our methods achieved much more efficient performance.
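The slice-then-scan idea behind OVA-LOW is easy to sketch: rank slices by the distance from their centers to the query, then scan only the delta nearest slices. The toy below builds slices with plain k-means-style centers, a deliberate simplification of the actual OVA-File layout and its vector approximations.

```python
# Toy slice-then-scan approximate kNN in the spirit of OVA-LOW; slice
# construction and candidate scanning are simplified stand-ins for the
# real OVA-File structure and its approximations.
import numpy as np

def build_slices(vectors, n_slices, iters=10, seed=0):
    """Assign vectors to slices around iteratively refined centers."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), n_slices, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((vectors[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(n_slices):
            if (labels == j).any():
                centers[j] = vectors[labels == j].mean(axis=0)
    return centers, labels

def approx_knn(query, vectors, centers, labels, k=5, delta=2):
    """Scan only the delta slices whose centers are nearest to the query."""
    nearest_slices = np.argsort(((centers - query) ** 2).sum(-1))[:delta]
    cand = np.where(np.isin(labels, nearest_slices))[0]
    dists = ((vectors[cand] - query) ** 2).sum(-1)
    return cand[np.argsort(dists)[:k]]

frames = np.random.default_rng(1).normal(size=(2000, 64))  # frame features
centers, labels = build_slices(frames, n_slices=20)
print(approx_knn(frames[0], frames, centers, labels, k=5, delta=2))
```

Increasing delta widens the scan, reproducing the cost/quality trade-off described above.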
Abstract:
We consider a buying-selling problem in which two stops of a sequence of independent random variables are required. An optimal stopping rule and the value of the game are obtained.
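A generic backward-induction sketch of such a two-stop (buy low, sell high) problem: with i.i.d. observations and both stops compulsory, the selling value S[t] and buying value B[t] satisfy the recursions coded below via Monte Carlo expectations. The uniform distribution and horizon are illustrative assumptions, not necessarily the paper's model.

```python
# Backward induction for a two-stop buying-selling problem with i.i.d.
# observations; distribution and horizon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 20                                    # both stops must occur by time N
draws = rng.uniform(size=200_000)         # Monte Carlo sample of one X_t

# Selling stage: S[t] = value of the optimal sale over times t..N.
S = np.empty(N + 1)
S[N] = draws.mean()                       # forced to sell at the last step
for t in range(N - 1, 0, -1):
    S[t] = np.maximum(draws, S[t + 1]).mean()   # sell now vs. keep waiting

# Buying stage: B[t] = value when the purchase is still open at times
# t..N-1 and the sale must happen strictly later.
B = np.empty(N)
B[N - 1] = S[N] - draws.mean()            # forced to buy at the last chance
for t in range(N - 2, -1, -1):
    B[t] = np.maximum(S[t + 1] - draws, B[t + 1]).mean()  # buy now vs. wait

print(f"value of the game: {B[0]:.4f}")   # expected sell-minus-buy profit
```

The implied rule is threshold-type: buy at time t when S[t+1] − X_t ≥ B[t+1], and afterwards sell when the current observation exceeds the continuation value S[t+1].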