96 results for Artificial source
Abstract:
In this paper we present a method for blind deconvolution of linear channels based on source separation techniques, applied to real-world signals. Applied to blind deconvolution, the technique exploits not the spatial independence between signals but the temporal independence between samples of a signal. Our objective is to minimize the mutual information between samples of the output in order to retrieve the original signal. To make use of this idea, the input signal must be a non-Gaussian i.i.d. signal. Because most real-world signals do not have this i.i.d. nature, we need to preprocess the original signal before transmitting it into the channel. Likewise, we must ensure that the transmitted signal has non-Gaussian statistics for the algorithm to function correctly. The strategy used for this preprocessing is presented in this paper. If the receiver has the inverse of the preprocessing, the original signal can be reconstructed without the convolutive distortion.
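The abstract does not spell out the optimization, so the following is only a rough Python sketch of the family of methods it describes: it replaces the mutual-information criterion with the classic scale-invariant kurtosis contrast, a standard proxy for non-Gaussian i.i.d. sources, and the signal, channel, filter length and step size below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (channel, lengths and step size are our assumptions):
# a non-Gaussian i.i.d. source passed through an unknown FIR channel.
n = 8000
s = rng.choice([-1.0, 1.0], size=n)        # i.i.d., sub-Gaussian source
h = np.array([1.0, 0.5, -0.3])             # unknown linear channel
x = np.convolve(s, h)[:n]                  # observed (convolved) signal

# Each row of X is one window of the observation feeding the equalizer.
L = 11
X = np.lib.stride_tricks.sliding_window_view(x, L)[:, ::-1]

# The paper's criterion is the mutual information between output samples;
# this sketch substitutes the scale-invariant kurtosis J = E[y^4]/E[y^2]^2,
# a classic contrast for non-Gaussian i.i.d. sources. It is minimized here
# because the source is sub-Gaussian (maximize it for super-Gaussian ones).
w = np.zeros(L)
w[L // 2] = 1.0                            # centre-spike initialization
eta = 0.05
for _ in range(500):
    y = X @ w
    m2, m4 = np.mean(y ** 2), np.mean(y ** 4)
    grad = 4.0 * (X.T @ y ** 3 - (m4 / m2) * (X.T @ y)) / (len(y) * m2 ** 2)
    w -= eta * grad                        # gradient descent on J
    w /= np.linalg.norm(w)                 # remove the harmless scale ambiguity

y = X @ w
print("normalized kurtosis:", np.mean(y ** 4) / np.mean(y ** 2) ** 2)  # -> ~1
```

For the +/-1 source used here the normalized kurtosis of a perfectly equalized output is 1, so the printed value approaching 1 indicates the convolutive distortion has been removed up to scale and delay.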
Abstract:
The incorporation of space allows the establishment of a more precise relationship between a contaminating input, a contaminating byproduct, and the emissions that reach the final receptor. However, the presence of asymmetric information impedes the implementation of the first-best policy. As a solution to this problem, site-specific deposit-refund systems for the contaminating input and the contaminating byproduct are proposed. Moreover, the use of a successive optimization technique, first over space and then over time, enables the definition of the optimal intertemporal site-specific deposit-refund system.
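The abstract does not give the model, but the successive structure it names can be sketched as follows; all symbols are our own assumptions, with x(s,t) the input applied at site s, e(s) a site-specific transfer coefficient to the receptor, B private benefits, D damages from the receptor stock A_t, and rho a discount rate.

```latex
% Stage 1 (space): best spatial allocation of the input for a given
% aggregate loading Z_t delivered to the receptor
W(Z_t) = \max_{x(\cdot,t)} \int_S B\bigl(x(s,t)\bigr)\,ds
\quad \text{s.t.} \quad \int_S e(s)\,x(s,t)\,ds = Z_t

% Stage 2 (time): optimal intertemporal loading path, with the receptor
% stock A_t accumulating emissions and decaying at rate \delta
\max_{\{Z_t\}} \int_0^T e^{-\rho t}\bigl[\,W(Z_t) - D(A_t)\,\bigr]\,dt
\quad \text{s.t.} \quad \dot{A}_t = Z_t - \delta A_t
```

In a sketch of this kind the optimal site-specific deposit-refund charges each site its marginal contribution to damage at the receptor, tau(s,t) = e(s) lambda_t, with lambda_t the shadow price of the stock A_t; this is our reconstruction of the structure, not the paper's model.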
Abstract:
This article presents a system capable of automatically categorizing the image database that serves as the starting point for ideation and design in the artistic production of the sculptor M. Planas. The methodology is based on local features. To build a visual vocabulary, we follow a procedure analogous to the one used in automatic text analysis (the 'Bag-of-Words' (BOW) model); in the image domain we refer to 'Bag-of-Visual Terms' (BOV) representations. In this approach, images are analysed as a set of regions, describing only their appearance and ignoring their spatial structure. To overcome the problems of polysemy and synonymy associated with this methodology, probabilistic latent semantic analysis (PLSA) is used, which detects latent aspects in the images, that is, formal patterns. The results obtained are promising and, beyond the intrinsic usefulness of automatic image categorization, this method can offer the artist a very interesting auxiliary point of view.
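As an illustration of the BOW/BOV pipeline the abstract describes, here is a hedged Python sketch: random arrays stand in for local descriptors (a real system would use e.g. SIFT features), and NMF with a Kullback-Leibler loss stands in for PLSA, to which it is closely related; the vocabulary size, aspect count and all other parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Stand-ins for local descriptors (e.g. SIFT) of 40 images; in a real
# system these would come from an interest-point detector.
descs_per_image = [rng.normal(size=(rng.integers(80, 120), 128))
                   for _ in range(40)]

# 1) Visual vocabulary: cluster all descriptors; each centroid is a
#    visual word (visual term).
k = 50
kmeans = KMeans(n_clusters=k, n_init=3, random_state=0)
kmeans.fit(np.vstack(descs_per_image))

# 2) BOV representation: one histogram of visual-word counts per image,
#    ignoring where in the image each region came from.
bov = np.zeros((len(descs_per_image), k))
for i, d in enumerate(descs_per_image):
    words, counts = np.unique(kmeans.predict(d), return_counts=True)
    bov[i, words] = counts

# 3) Latent aspects: the paper uses PLSA; NMF with a KL loss is a close
#    surrogate and yields a per-image mixture over latent aspects.
nmf = NMF(n_components=5, beta_loss="kullback-leibler", solver="mu",
          max_iter=500, random_state=0)
aspects = nmf.fit_transform(bov)            # image-by-aspect weights
print("dominant aspect per image:", aspects.argmax(axis=1))
```

The dominant-aspect labels are what a categorization front-end would consume; with real descriptors the aspects correspond to recurring formal patterns across the sculptor's image collection.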
Abstract:
Several methods and approaches for measuring parameters to determine fecal sources of pollution in water have been developed in recent years. No single microbial or chemical parameter has proved sufficient to determine the source of fecal pollution. Combinations of parameters involving at least one discriminating indicator and one universal fecal indicator offer the most promising solutions for qualitative and quantitative analyses. The universal (nondiscriminating) fecal indicator provides quantitative information regarding the fecal load. The discriminating indicator contributes to the identification of a specific source. The relative values of the parameters derived from both kinds of indicators could provide information regarding the contribution to the total fecal load from each origin. It is also essential that both parameters characteristically persist in the environment for similar periods. Numerical analyses, such as inductive learning methods, could be used to select the most suitable and the lowest number of parameters to develop predictive models. These combinations of parameters provide information on factors affecting the models, such as dilution, specific types of animal source, persistence of microbial tracers, and complex mixtures from different sources. The combined use of the enumeration of somatic coliphages and the enumeration of Bacteroides phages using different host-specific strains (one from humans and another from pigs), both selected using the suggested approach, provides a feasible model for quantitative and qualitative analyses of fecal source identification.
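As a hedged illustration of the suggested approach, the Python sketch below combines one universal and two discriminating indicators into dilution-insensitive log-ratios and feeds them to a small inductive learner; the data are synthetic stand-ins, and the indicator behaviour, effect sizes and model choice are all assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in data: log10 counts of a universal indicator (somatic
# coliphages) and two discriminating indicators (human- and pig-specific
# Bacteroides phages) for samples of known origin. Real data would come
# from laboratory enumeration, not a simulator.
n = 200
origin = rng.integers(0, 2, n)                  # 0 = human, 1 = pig
universal = rng.normal(5.0, 0.5, n)             # total fecal load, any source
human_phage = universal - rng.normal(1.0 + 2.0 * origin, 0.3, n)
pig_phage = universal - rng.normal(3.0 - 2.0 * origin, 0.3, n)

# Log-ratios of discriminating to universal indicators factor out dilution,
# one of the confounders the abstract mentions.
X = np.column_stack([human_phage - universal, pig_phage - universal])

# A small inductive learner of the kind the abstract suggests.
clf = DecisionTreeClassifier(max_depth=2).fit(X, origin)
print("training accuracy:", clf.score(X, origin))
```

The same ratio features could be regressed against known mixtures to estimate the contribution of each origin to the total fecal load, which is the quantitative side of the combined model.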
Abstract:
High-energy charged particles in the Van Allen radiation belts and in solar energetic particle events can damage satellites in orbit, leading to malfunctions and loss of satellite service. Here we describe some recent results from the SPACECAST project on modelling and forecasting the radiation belts and on modelling solar energetic particle events. We describe the SPACECAST forecasting system, which uses physical models that include wave-particle interactions to forecast the electron radiation belts up to 3 h ahead. We show that the forecasts were able to reproduce the >2 MeV electron flux at GOES 13 during the moderate storm of 7-8 October 2012, and during the period following a fast solar wind stream on 25-26 October 2012, to within a factor of 5 or so. At lower energies, from 10 keV to a few hundred keV, we show that the electron flux at geostationary orbit depends sensitively on the high-energy tail of the source distribution near 10 Earth radii (RE) on the nightside of the Earth, and that the source is best represented by a kappa distribution. We present a new model of whistler mode chorus, determined from multiple satellite measurements, which shows that the effects of wave-particle interactions beyond geostationary orbit are likely to be very significant. We also present radial diffusion coefficients, calculated from satellite data at geostationary orbit, which vary with Kp by over four orders of magnitude. Finally, for modelling solar energetic particle events, we describe a new automated method, taking entropy into account, to determine the position at the shock that is magnetically connected to the Earth, and we use analytical theory to predict the form of the mean free path in the foreshock and the particle injection efficiency at the shock, predictions that can be tested in simulations.
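For reference, the isotropic kappa distribution the abstract invokes for the nightside source has the standard form (notation ours, not the paper's):

```latex
f_\kappa(v) \;=\; \frac{n}{\left(\pi \kappa \theta^2\right)^{3/2}}
\,\frac{\Gamma(\kappa + 1)}{\Gamma\!\left(\kappa - \tfrac{1}{2}\right)}
\left( 1 + \frac{v^2}{\kappa \theta^2} \right)^{-(\kappa + 1)}
```

Here theta is an effective thermal speed and the index kappa sets the power-law high-energy tail (f proportional to v^{-2(kappa+1)} at large v); as kappa tends to infinity the distribution tends to a Maxwellian, so a finite kappa is exactly a Maxwellian core plus the enhanced high-energy tail that the flux at geostationary orbit is found to depend on.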
Abstract:
Organizations across the globe are creating and distributing products that include open source software. To ensure compliance with the open source licenses, each company needs to evaluate exactly what open source licenses and copyrights are included - resulting in duplicated effort and redundancy. This talk will provide an overview of a new Software Package Data Exchange (SPDX) specification. This specification will provide a common format to share information about the open source licenses and copyrights that are included in any software package, with the goal of saving time and improving data accuracy. This talk will review the progress of the initiative, discuss the benefits to organizations using open source, and share information on how you can contribute.
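For a flavour of what such a common format looks like, here is a minimal, illustrative tag-value sketch; the field names follow later published versions of the SPDX specification, and every value shown is a made-up example, not content from this talk.

```
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-package-spdx
DocumentNamespace: https://example.org/spdxdocs/example-package
Creator: Organization: Example Corp
Created: 2021-01-01T00:00:00Z

PackageName: example-package
SPDXID: SPDXRef-Package
PackageVersion: 1.0.0
PackageDownloadLocation: https://example.org/example-package-1.0.0.tar.gz
PackageLicenseConcluded: Apache-2.0
PackageLicenseDeclared: Apache-2.0
PackageCopyrightText: Copyright 2021 Example Corp
```

The point of the shared format is that one supplier can produce this record once and every downstream consumer can reuse it, instead of each re-auditing the same package.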
Abstract:
The use of open source software continues to grow on a daily basis. Today, enterprise applications contain 40% to 70% open source code, and this fact has legal, development, IT security, risk management and compliance organizations focusing their attention on its use as never before. They increasingly understand that the open source content within an application must be detected. Once it is uncovered, decisions regarding compliance with intellectual property licensing obligations must be made, and known security vulnerabilities must be remediated. From a risk perspective, it is no longer acceptable to leave either of these open source issues unaddressed.
Abstract:
Free and open source software (FOSS) may seem far removed from the military field, but in some cases technologies normally used for civilian purposes may have military applications. These products and technologies are called dual-use. Can we manage to combine FOSS and dual-use products? On one hand, we have to admit that this kind of association exists - dual-use software can be FOSS, and many examples demonstrate this duality - but on the other hand, dual-use software available under free licenses leads us to ask many questions. For example, dual-use export control laws aim at stemming the proliferation of weapons of mass destruction. Dual-use export rules in the United States (ITAR) and in Europe (Regulation 428/2009) consequently prohibit or regulate software exportation, which entails closing the source code. Therefore, the issue of exported software released under free licenses arises. If software is a dual-use good and serves military purposes, it may represent a danger. Through the rights granted by free licenses to run, study, redistribute and distribute modified versions of the software, anyone can access free dual-use software. So the licenses themselves are not the origin of the risk; the risk is actually linked to the facilitated access to source code. Seen from this point of view, this goes against the dual-use regulations, which allow states to control the exportation of these technologies. In this analysis, we discuss various legal questions and draft answers drawn from either licenses or public policies in this respect.
Abstract:
Softcatalà is a non-profit association created more than 10 years ago to fight the marginalisation of the Catalan language in information and communication technologies. It has led the localisation of many applications and the creation of a website which allows its users to translate texts between Spanish and Catalan using an external closed-source translation engine. Recently, the closed-source translation back-end has been replaced by a free/open-source solution completely managed by Softcatalà: the Apertium machine translation platform and the ScaleMT web service framework. Thanks to the openness of the new solution, it is possible to take advantage of the huge number of users of the Softcatalà translation service to improve it, using a series of methods presented in this paper. In addition, a study of the translations requested by the users has been carried out, and it shows that the translation back-end change has not affected the usage patterns.
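As a hedged illustration of what an open back-end enables, the sketch below drives a local Apertium installation from Python; it assumes the apertium command-line tool and the Spanish-Catalan (es-ca) language pair are installed, and the sample text is our own.

```python
import subprocess

# Apertium's CLI reads source text on stdin and writes the translation to
# stdout; "es-ca" names the installed Spanish-to-Catalan pair (an
# assumption about the local installation, not about ScaleMT).
text = "La comunidad puede mejorar el traductor."
result = subprocess.run(["apertium", "es-ca"], input=text,
                        capture_output=True, text=True, check=True)
print(result.stdout)
```

Because the whole pipeline is free/open-source, improvements derived from the logged user requests can be fed back into the language-pair data itself, which is exactly what a closed engine prevented.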
Abstract:
Sugar beet (Beta vulgaris ssp. vulgaris) is an important crop of temperate climates which provides nearly 30% of the world's annual sugar production and is a source for bioethanol and animal feed. The species belongs to the order Caryophyllales, is diploid with 2n = 18 chromosomes, has an estimated genome size of 714-758 megabases and shares an ancient genome triplication with other eudicot plants. Leafy beets have been cultivated since Roman times, but sugar beet is one of the most recently domesticated crops. It arose in the late eighteenth century when lines accumulating sugar in the storage root were selected from crosses made with chard and fodder beet. Here we present a reference genome sequence for sugar beet as the first non-rosid, non-asterid eudicot genome, advancing comparative genomics and phylogenetic reconstructions. The genome sequence comprises 567 megabases, of which 85% could be assigned to chromosomes. The assembly covers a large proportion of the repetitive sequence content that was estimated to be 63%. We predicted 27,421 protein-coding genes supported by transcript data and annotated them on the basis of sequence homology. Phylogenetic analyses provided evidence for the separation of Caryophyllales before the split of asterids and rosids, and revealed lineage-specific gene family expansions and losses. We sequenced spinach (Spinacia oleracea), another Caryophyllales species, and validated features that separate this clade from rosids and asterids. Intraspecific genomic variation was analysed based on the genome sequences of sea beet (Beta vulgaris ssp. maritima; progenitor of all beet crops) and four additional sugar beet accessions. We identified seven million variant positions in the reference genome, and also large regions of low variability, indicating artificial selection. The sugar beet genome sequence enables the identification of genes affecting agronomically relevant traits, supports molecular breeding and maximizes the plant's potential in energy biotechnology.
Abstract:
In addition to the two languages essentially involved in translation, that of the source text (L1) and that of the target text (L2), we propose a third language (L3) to refer to any other language(s) found in the text. L3 may appear in the source text (ST) or the target text (TT), actually appearing more frequently in STs in our case studies. We present a range of combinations for the convergence and divergence of L1, L2 and L3 for the case of feature films and their translations, using examples from dubbed and subtitled versions of films, but we are hopeful that our tentative conclusions may be relevant to other modalities of translation, audiovisual and otherwise. When L3 appears in an audiovisual ST, we find a variety of solutions whereby L3 is deleted from or adapted to the TT. In the latter case, L3 might be rendered in a number of ways, depending on factors such as the audience’s familiarity with L3, and the possibility that L3 in the ST is an invented language.
Abstract:
miR-21 is the most commonly over-expressed microRNA (miRNA) in cancer and a proven oncogene. Hsa-miR-21 is located on chromosome 17q23.2, immediately downstream of the vacuole membrane protein-1 (VMP1) gene, also known as TMEM49. VMP1 transcripts initiate ∼130 kb upstream of miR-21, are spliced, and polyadenylated only a few hundred base pairs upstream of the miR-21 hairpin. On the other hand, primary miR-21 transcripts (pri-miR-21) originate within the last introns of VMP1, but bypass VMP1 polyadenylation signals to include the miR-21 hairpin. Here, we report that VMP1 transcripts can also bypass these polyadenylation signals to include miR-21, thus providing a novel and independently regulated source of miR-21, termed VMP1–miR-21. Northern blotting, gene-specific RT-PCR, RNA pull-down and DNA branching assays support that VMP1–miR-21 is expressed at significant levels in a number of cancer cell lines and that it is processed by the Microprocessor complex to produce mature miR-21. VMP1 and pri-miR-21 are induced by common stimuli, such as phorbol-12-myristate-13-acetate (PMA) and androgens, but show differential responses to some stimuli such as epigenetic modifying agents. Collectively, these results indicate that miR-21 is a unique miRNA capable of being regulated by alternative polyadenylation and two independent gene promoters.
Abstract:
In this research work we searched for open source libraries that support graph drawing and visualisation and can run in a browser. These libraries were then evaluated to find out which one is best suited for this task. The result was that d3.js is the library with the greatest functionality, flexibility and customisability. We then developed an open source software tool that includes d3.js and is written in JavaScript, so that it runs directly in the browser.