44 results for EXPLOITING MULTICOMMUTATION
Abstract:
Defocus blur is an indicator for the depth structure of a scene. However, given a single input image from a conventional camera one cannot distinguish between blurred objects lying in front or behind the focal plane, as they may be subject to exactly the same amount of blur. In this paper we address this limitation by exploiting coded apertures. Previous work in this area focuses on setups where the scene is placed either entirely in front or entirely behind the focal plane. We demonstrate that asymmetric apertures result in unique blurs for all distances from the camera. To exploit asymmetric apertures we propose an algorithm that can unambiguously estimate scene depth and texture from a single input image. One of the main advantages of our method is that, within the same depth range, we can work with less blurred data than in other methods. The technique is tested on both synthetic and real images.
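The front/behind ambiguity that motivates the asymmetric-aperture approach follows directly from the thin-lens model: the blur-circle diameter is linear in inverse depth, so two depths on opposite sides of the focal plane can produce exactly the same amount of blur. A minimal sketch of that symmetry (all parameter values are illustrative, not taken from the paper):

```python
# Thin-lens blur-circle diameter: linear in the inverse depth 1/d, and
# therefore symmetric about the focal plane when plotted against 1/d.
def blur_diameter(d, d_focus, f, aperture):
    s = 1.0 / (1.0 / f - 1.0 / d_focus)   # lens-to-sensor distance (focused plane)
    s_d = 1.0 / (1.0 / f - 1.0 / d)       # image distance for an object at depth d
    return aperture * abs(s_d - s) / s_d  # blur-circle diameter on the sensor

f, A, d_focus = 0.05, 0.01, 2.0  # focal length, aperture, focus distance (meters)
# 1/1.5 and 1/3.0 are symmetric about 1/2.0, so the blur magnitudes coincide:
front = blur_diameter(1.5, d_focus, f, A)   # object in front of the focal plane
behind = blur_diameter(3.0, d_focus, f, A)  # object behind the focal plane
print(front, behind)  # equal: a symmetric aperture cannot tell the two apart
```

With a symmetric aperture the blur kernels at these two depths are indistinguishable; an asymmetric aperture makes the kernel shape, not just its size, depend on which side of the focal plane the object lies.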
Abstract:
We present a novel approach to the reconstruction of depth from light field data. Our method uses dictionary representations and group sparsity constraints to derive a convex formulation. Although our solution results in an increase of the problem dimensionality, we keep numerical complexity at bay by restricting the space of solutions and by exploiting an efficient Primal-Dual formulation. Comparisons with state of the art techniques, on both synthetic and real data, show promising performances.
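Group sparsity constraints of the kind described above are typically enforced through the proximal operator of the ℓ2,1 norm, i.e. group-wise soft-thresholding, which is also the standard building block inside primal-dual iterations. A minimal NumPy sketch with toy groups (not the paper's dictionary or formulation):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 (group-wise soft-thresholding)."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:                      # groups below the threshold are zeroed
            out[g] = (1.0 - lam / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, 0.1])
shrunk = group_soft_threshold(x, [[0, 1], [2, 3]], lam=1.0)
print(shrunk)  # first group shrunk toward zero, second group zeroed entirely
```

Because whole groups are either shrunk or eliminated, the operator promotes solutions supported on few dictionary groups, which is what keeps the enlarged problem tractable.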
Abstract:
In this paper we propose a new fully-automatic method for localizing and segmenting 3D intervertebral discs from MR images, where the two problems are solved in a unified data-driven regression and classification framework. We estimate the output (image displacements for localization, or fg/bg labels for segmentation) of image points by exploiting both training data and geometric constraints simultaneously. The problem is formulated in a unified objective function which is then solved globally and efficiently. We validate our method on MR images of 25 patients. Taking manually labeled data as the ground truth, our method achieves a mean localization error of 1.3 mm, a mean Dice metric of 87%, and a mean surface distance of 1.3 mm. Our method can be applied to other localization and segmentation tasks.
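The Dice metric reported above is twice the overlap between predicted and ground-truth masks divided by their total size. A minimal sketch on binary masks (illustrative arrays, not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred = np.array([1, 1, 0, 0])
truth = np.array([1, 0, 1, 0])
print(dice_coefficient(pred, truth))  # 0.5
```

A mean Dice of 87% therefore means that, averaged over discs, the predicted and manual masks share 87% of their combined voxel mass.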
Abstract:
We present the first results of searches for axions and axionlike particles with the XENON100 experiment. The axion–electron coupling constant, g_Ae, has been probed by exploiting the axioelectric effect in liquid xenon. A profile likelihood analysis of 224.6 live days × 34 kg exposure has shown no evidence for a signal. By rejecting g_Ae larger than 7.7×10⁻¹² (90% C.L.) in the solar axion search, we set the best limit to date on this coupling. In the frame of the DFSZ and KSVZ models, we exclude QCD axions heavier than 0.3 and 80 eV/c², respectively. For axionlike particles, under the assumption that they constitute the whole abundance of dark matter in our galaxy, we constrain g_Ae to be lower than 1×10⁻¹² (90% C.L.) for masses between 5 and 10 keV/c².
Abstract:
Experience is lacking with mineral scaling and corrosion in enhanced geothermal systems (EGS) in which surface water is circulated through hydraulically stimulated crystalline rocks. As an aid in designing EGS projects we have conducted multicomponent reactive-transport simulations to predict the likely characteristics of scales and corrosion that may form when exploiting heat from granitoid reservoir rocks at ∼200 °C and 5 km depth. The specifications of an EGS project at Basel, Switzerland, are used to constrain the model. The main water–rock reactions in the reservoir during hydraulic stimulation and the subsequent doublet operation were identified in a separate paper (Alt-Epping et al., 2013b). Here we use the computed composition of the reservoir fluid to (1) predict mineral scaling in the injection and production wells, (2) evaluate methods of chemical geothermometry and (3) identify geochemical indicators of incipient corrosion. The envisaged heat extraction scheme ensures that even if the reservoir fluid is in equilibrium with quartz, cooling of the fluid will not induce saturation with respect to amorphous silica, thus eliminating the risk of silica scaling. However, the ascending fluid attains saturation with respect to crystalline aluminosilicates such as albite, microcline and chlorite, and possibly with respect to amorphous aluminosilicates. If no silica-bearing minerals precipitate upon ascent, reservoir temperatures can be predicted by classical formulations of silica geothermometry. In contrast, Na/K concentration ratios in the production fluid reflect steady-state conditions in the reservoir rather than albite–microcline equilibrium. Thus, even though igneous orthoclase is abundant in the reservoir and albite precipitates as a secondary phase, Na/K geothermometers fail to yield accurate temperatures. Anhydrite, which is present in fractures in the Basel reservoir, is predicted to dissolve during operation. This may lead to precipitation of pyrite and, at high exposure of anhydrite to the circulating fluid, of hematite scaling in the geothermal installation. In general, incipient corrosion of the casing can be detected at the production wellhead through an increase in H2(aq) and the enhanced precipitation of Fe-bearing aluminosilicates. The appearance of magnetite in scales indicates high corrosion rates.
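As a concrete example of the classical silica geothermometry mentioned above, the widely used Fournier (1977) quartz geothermometer (no steam loss) estimates reservoir temperature from the dissolved silica content of the produced fluid. This is a standard textbook formulation, not necessarily the exact one used in the study:

```python
import math

def quartz_geothermometer(sio2_mg_per_kg):
    """Fournier (1977) quartz geothermometer, conductive cooling / no steam loss:
    T(degC) = 1309 / (5.19 - log10(SiO2 in mg/kg)) - 273.15
    """
    return 1309.0 / (5.19 - math.log10(sio2_mg_per_kg)) - 273.15

# A fluid carrying ~300 mg/kg dissolved silica points to a reservoir near 210 degC,
# of the same order as the ~200 degC conditions modeled for the Basel EGS:
print(round(quartz_geothermometer(300.0), 1))
```

The abstract's point is that this estimate stays valid only if no silica-bearing minerals precipitate during ascent; otherwise the measured SiO2 concentration no longer reflects reservoir equilibrium.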
Abstract:
Paper I: Corporate aging and internal resource allocation Abstract Various observers argue that established firms are at a disadvantage in pursuing new growth opportunities. In this paper, we provide systematic evidence that established firms allocate fewer resources to high-growth lines of business. However, we find no evidence of inefficient resource allocation in established firms. Redirecting resources from high-growth to low-growth lines of business does not result in lower profitability. Also, resource allocation towards new growth opportunities does not increase when managers of established firms are exposed to takeover and product market threats. Rather, it seems that conservative resource allocation strategies are driven by pressures to meet investors’ expectations. Our empirical evidence, thus, favors the hypothesis that established firms wisely choose to allocate fewer resources to new growth opportunities as external pressures force them to focus on efficiency rather than novelty (Holmström 1989). Paper II: Corporate aging and asset sales Abstract This paper asks whether divestitures are motivated by strategic considerations about the scope of the firm’s activities. Limited managerial capacity implies that exploiting core competences becomes comparatively more attractive than exploring new growth opportunities as firms mature. Divestitures help established firms free management time and increase the focus on core competences. The testable implication of this attention hypothesis is that established firms are the main sellers of assets, that their divestiture activity increases when managerial capacity is scarcer, that they sell non-core activities, and that they return the divestiture proceeds to the providers of capital instead of reinvesting them in the firm. We find strong empirical support for these predictions. 
Paper III: Corporate aging and lobbying expenditures Abstract Creative destruction forces constantly challenge established firms, especially in competitive markets. This paper asks whether corporate lobbying is a competitive weapon of established firms to counteract the decline in rents over time. We find a statistically and economically significant positive relation between firm age and lobbying expenditures. Moreover, the documented age-effect is weaker when firms have unique products or operate in concentrated product markets. To address endogeneity, we use industry distress as an exogenous nonlegislative shock to future rents and show that established firms are relatively more likely to lobby when in distress. Finally, we provide empirical evidence that corporate lobbying efforts by established firms forestall the creative destruction process. In sum, our findings suggest that corporate lobbying is a competitive weapon of established firms to retain profitability in competitive environments.
Abstract:
Well-established methods exist for measuring party positions, but reliable means for estimating intra-party preferences remain underdeveloped. While most efforts focus on estimating the ideal points of individual legislators based on inductive scaling of roll call votes, this data suffers from two problems: selection bias due to unrecorded votes and strong party discipline, which tends to make voting a strategic rather than a sincere indication of preferences. By contrast, legislative speeches are relatively unconstrained, as party leaders are less likely to punish MPs for speaking freely as long as they vote with the party line. Yet, the differences between roll call estimations and text scalings remain essentially unexplored, despite the growing application of statistical analysis of textual data to measure policy preferences. Our paper addresses this lacuna by exploiting a rich feature of the Swiss legislature: on most bills, legislators both vote and speak many times. Using this data, we compare text-based scaling of ideal points to vote-based scaling from a crucial piece of energy legislation. Our findings confirm that text scalings reveal larger intra-party differences than roll calls. Using regression models, we further explain the differences between roll call and text scalings by attributing differences to constituency-level preferences for energy policy.
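A bare-bones illustration of what "text-based scaling of ideal points" involves: treat each legislator's speeches as a row of word counts and recover one-dimensional positions from the leading singular vector of the row-normalized, centered matrix. This is a crude stand-in for purpose-built models such as Wordfish, and the data below are made up:

```python
import numpy as np

def scale_positions(dtm):
    """One-dimensional text scaling: the leading left singular vector of the
    centered row-profile matrix assigns each speaker a position."""
    profiles = dtm / dtm.sum(axis=1, keepdims=True)   # word shares per speaker
    centered = profiles - profiles.mean(axis=0)
    U, S, _ = np.linalg.svd(centered, full_matrices=False)
    return U[:, 0] * S[0]   # sign and scale are arbitrary; only relative positions matter

# Two speakers drawing on "pro" vocabulary, two on "contra" vocabulary:
dtm = np.array([[5.0, 5.0, 0.0, 0.0],
                [4.0, 6.0, 0.0, 0.0],
                [0.0, 0.0, 5.0, 5.0],
                [0.0, 0.0, 6.0, 4.0]])
pos = scale_positions(dtm)
print(pos)  # speakers 0 and 1 land on one side, speakers 2 and 3 on the other
```

Because word choice is far less constrained than a whipped vote, positions recovered this way can spread out within a party even when roll calls are unanimous, which is exactly the contrast the paper measures.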
Abstract:
One of the simplest questions that can be asked about molecular diversity is how many organic molecules are possible in total? To answer this question, my research group has computationally enumerated all possible organic molecules up to a certain size to gain an unbiased insight into the entire chemical space. Our latest database, GDB-17, contains 166.4 billion molecules of up to 17 atoms of C, N, O, S, and halogens, by far the largest small molecule database reported to date. Molecules allowed by valency rules but unstable or nonsynthesizable due to strained topologies or reactive functional groups were not considered, which reduced the enumeration by at least 10 orders of magnitude and was essential to arrive at a manageable database size. Despite these restrictions, GDB-17 is highly relevant with respect to known molecules. Beyond enumeration, understanding and exploiting GDBs (generated databases) led us to develop methods for virtual screening and visualization of very large databases in the form of a “periodic system of molecules” comprising six different fingerprint spaces, with web-browsers for nearest neighbor searches, and the MQN- and SMIfp-Mapplet application for exploring color-coded principal component maps of GDB and other large databases. Proof-of-concept applications of GDB for drug discovery were realized by combining virtual screening with chemical synthesis and activity testing for neurotransmitter receptor and transporter ligands. One surprising lesson from using GDB for drug analog searches is the incredible depth of chemical space, that is, the fact that millions of very close analogs of any molecule can be readily identified by nearest-neighbor searches in the MQN-space of the various GDBs. The chemical space project has opened an unprecedented door on chemical diversity. Ongoing and yet unmet challenges concern enumerating molecules beyond 17 atoms and synthesizing GDB molecules with innovative scaffolds and pharmacophores.
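Nearest-neighbor searches in MQN space of the kind described above are commonly done with the city-block (Manhattan) distance over the 42 integer counts. A toy sketch with made-up, low-dimensional count vectors standing in for real MQN fingerprints:

```python
def city_block(a, b):
    """Manhattan (city-block) distance between two integer descriptor vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def nearest_neighbor(query, database):
    """Return (name, distance) of the entry closest to the query vector."""
    return min(((name, city_block(query, vec)) for name, vec in database.items()),
               key=lambda item: item[1])

# Toy 4-component count vectors standing in for 42-dimensional MQNs:
database = {"mol_a": (6, 1, 0, 2), "mol_b": (6, 2, 0, 2), "mol_c": (12, 4, 1, 6)}
query = (6, 2, 0, 3)
print(nearest_neighbor(query, database))  # ('mol_b', 1)
```

Because the descriptors are simple counts, the distance is cheap to evaluate at the scale of billions of molecules, which is what makes analog searches over GDB-17 practical.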
Abstract:
Current methods for detection of copy number variants (CNV) and aberrations (CNA) from targeted sequencing data are based on the depth of coverage of captured exons. Accurate CNA determination is complicated by uneven genomic distribution and non-uniform capture efficiency of targeted exons. Here we present CopywriteR, which eludes these problems by exploiting 'off-target' sequence reads. CopywriteR allows for extracting uniformly distributed copy number information, can be used without reference, and can be applied to sequencing data obtained from various techniques including chromatin immunoprecipitation and target enrichment on small gene panels. CopywriteR outperforms existing methods and constitutes a widely applicable alternative to available tools.
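The core of depth-of-coverage copy-number calling is a per-bin log2 ratio of normalized read counts between sample and reference; off-target reads simply supply those counts at uniformly spaced genomic bins. A simplified sketch with toy counts (not CopywriteR's actual pipeline, which additionally corrects for factors such as GC content and mappability):

```python
import math

def log2_copy_ratios(sample_counts, reference_counts, pseudocount=0.5):
    """Per-bin log2 ratio of library-size-normalized read counts."""
    s_total = sum(sample_counts)
    r_total = sum(reference_counts)
    return [math.log2(((s + pseudocount) / s_total) / ((r + pseudocount) / r_total))
            for s, r in zip(sample_counts, reference_counts)]

# A bin with doubled relative coverage suggests a gain (log2 ratio near +1):
sample = [400, 100, 100]      # off-target reads per genomic bin, tumor
reference = [200, 200, 200]   # matched normal
print([round(r, 2) for r in log2_copy_ratios(sample, reference)])
```

The pseudocount keeps empty bins from producing infinities; segmentation of the resulting ratio profile then yields the copy-number calls.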
Abstract:
The increasing interest in autonomous coordinated driving and in proactive safety services, exploiting the wealth of sensing and computing resources which are gradually permeating the urban and vehicular environments, is making provisioning of high levels of QoS in vehicular networks an urgent issue. At the same time, the spreading model of a smart car, with a wealth of infotainment applications, calls for architectures for vehicular communications capable of supporting traffic with a diverse set of performance requirements. So far, efforts have focused on enabling a single specific QoS level. However, the issues of how to support traffic with tight QoS requirements (no packet loss, and delays below 1 ms), and of designing a system capable of efficiently sustaining such traffic alongside traffic from infotainment applications, are still open. In this paper we present the approach taken by the CONTACT project to tackle these issues. The goal of the project is to investigate how a VANET architecture, which integrates content-centric networking, software-defined networking, and context-aware floating content schemes, can properly support the very diverse set of applications and services currently envisioned for the vehicular environment.
Abstract:
Aim The usual hypothesis about the relationship between niche breadth and range size posits that species with the capacity to use a wider range of resources or to tolerate a greater range of environmental conditions should be more widespread. In plants, broader niches are often hypothesized to be due to pronounced phenotypic plasticity, and more plastic species are therefore predicted to be more common. We examined the relationship between the magnitude of phenotypic plasticity in five functional traits, mainly related to leaves, and several measures of abundance in 105 Central European grassland species. We further tested whether mean values of traits, rather than their plasticity, better explain the commonness of species, possibly because they are pre-adapted to exploiting the most common resources. Location Central Europe. Methods In a multispecies experiment with 105 species we measured leaf thickness, leaf greenness, specific leaf area, leaf dry matter content and plant height, and the plasticity of these traits in response to fertilization, waterlogging and shading. For the same species we also obtained five measures of commonness, ranging from plot-level abundance to range size in Europe. We then examined whether these measures of commonness were associated with the magnitude of phenotypic plasticity, expressed as composite plasticity of all traits across the experimental treatments. We further estimated the relative importance of trait plasticity and trait means for abundance and geographical range size. Results More abundant species were less plastic. This negative relationship was fairly consistent across several spatial scales of commonness, but it was weak. Indeed, compared with trait means, plasticity was relatively unimportant for explaining differences in species commonness. Main conclusions Our results do not indicate that larger phenotypic plasticity of leaf morphological traits enhances species abundance. Furthermore, possession of a particular trait value, rather than of trait plasticity, is a more important determinant of species commonness.
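Composite plasticity in studies of this kind is often quantified with a simple per-trait index, e.g. (max − min)/max across treatment means, averaged over traits. A hypothetical sketch of that calculation (the specific index and the values are assumptions, not necessarily those used in the study):

```python
def plasticity_index(treatment_means):
    """Phenotypic plasticity index: (max - min) / max across treatment means."""
    hi, lo = max(treatment_means), min(treatment_means)
    return (hi - lo) / hi if hi != 0 else 0.0

def composite_plasticity(trait_to_means):
    """Average the per-trait indices into one composite value per species."""
    indices = [plasticity_index(m) for m in trait_to_means.values()]
    return sum(indices) / len(indices)

# Hypothetical species: SLA responds strongly to shading, leaf thickness barely moves.
species = {"specific_leaf_area": [20.0, 30.0, 40.0],  # control / fertilized / shaded
           "leaf_thickness":     [0.30, 0.30, 0.27]}
print(round(composite_plasticity(species), 3))
```

The study's comparison then asks whether such composite indices, or the raw trait means themselves, better predict each species' abundance and range size.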
Genomic amplification of the caprine EDNRA locus might lead to a dose-dependent loss of pigmentation
Abstract:
The South African Boer goat displays a characteristic white spotting phenotype, in which the pigment is limited to the head. Exploiting the existing phenotype variation within the breed, we mapped the locus causing this white spotting phenotype to chromosome 17 by genome-wide association. Subsequent whole genome sequencing identified a 1 Mb copy number variant (CNV) harboring 5 genes including EDNRA. The analysis of 358 Boer goats revealed 3 alleles with one, two, and three copies of this CNV. The copy number is correlated with the degree of white spotting in goats. We propose a hypothesis that ectopic overexpression of a mutant EDNRA scavenges EDN3 required for EDNRB signaling and normal melanocyte development and thus likely leads to an absence of melanocytes in the non-pigmented body areas of Boer goats. Our findings demonstrate the value of domestic animals as a reservoir of unique mutants and for identifying a precisely defined functional CNV.
Abstract:
The utility of the HMBC experiment for structure elucidation is unquestionable, but the nature of the coupling pathways leading to correlations in an HMBC experiment creates the potential for misinterpretation. This misinterpretation potential is intimately linked to the size of the long-range heteronuclear couplings involved, and may become troublesome in those cases of a particularly strong ²JCH correlation that might be mistaken for a ³JCH correlation, or a ⁴JCH correlation of appreciable strength that could be mistaken for a weaker ³JCH correlation. To address these potential avenues of confusion, work from several laboratories has been focused on the development of what might be considered “coupling pathway edited” long-range heteronuclear correlation experiments that are derived from or related to the HMBC experiment. The first example of an effort to address the problems associated with correlation path length was seen in the heteronucleus-detected XCORFE experiment described by Reynolds and co-workers that predated the development of the HMBC experiment. Proton-detected analogs of the HMBC experiment intended to differentiate ²JCH correlations from ⁿJCH correlations, where n = 3, 4, include the ²J,³J-HMBC, HMBC-RELAY, H2BC, edited-HMBC, and HAT H2BC experiments. The principles underlying the critical components of each of these experiments are discussed, and experimental verification of the results that can be obtained using model compounds is shown. This contribution concludes with a brief discussion of the 1,1-ADEQUATE experiments that provide an alternative means of identifying adjacent protonated and non-protonated carbon correlations by exploiting ¹JCC correlations at natural abundance.
Abstract:
Mobile Edge Computing enables the deployment of services, applications, content storage and processing in close proximity to mobile end users. This highly distributed computing environment can be used to provide ultra-low latency, precise positional awareness and agile applications, which could significantly improve user experience. In order to achieve this, it is necessary to consider next-generation paradigms such as Information-Centric Networking and Cloud Computing, integrated with the upcoming 5th Generation networking access. A cohesive end-to-end architecture is proposed, fully exploiting Information-Centric Networking together with the Mobile Follow-Me Cloud approach, for enhancing the migration of content-caches located at the edge of cloudified mobile networks. The chosen content-relocation algorithm attains content-availability improvements of up to 500 when a mobile user performs a request, compared against other existing solutions. The performed evaluation considers a realistic core network, with functional and non-functional measurements, including the deployment of the entire system and the computation and allocation/migration of resources. The achieved results reveal that the proposed architecture is beneficial not only from the users’ perspective but also from the providers’ point of view, as providers may be able to optimize their resources and reach significant bandwidth savings.