20 results for "Presence-only data"

at Universidade do Minho


Relevance: 30.00%

Abstract:

As huge amounts of data become available in organizations and in society, specific data analytics skills and techniques are needed to explore these data and extract from them useful patterns, tendencies, models or other knowledge that can support the decision-making process, define new strategies or explain what is happening in a specific field. Only with a deep understanding of a phenomenon is it possible to counter it. In this paper, a data-driven analytics approach is used to analyse the increasing incidence of fatalities by pneumonia in the Portuguese population, characterizing the disease and its incidence in terms of fatalities. This knowledge can be used to define appropriate strategies aimed at reducing this phenomenon, which has grown by more than 65% in a decade.
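The decade-long growth figure cited in the abstract reduces to a simple relative-change computation. The sketch below uses made-up fatality counts (the paper's actual yearly figures are not given here) purely to illustrate the calculation behind a ">65% in a decade" claim.

```python
# Toy illustration with hypothetical figures: quantifying a decade-long
# rise in fatality counts as a relative (percentage) change.

def percent_increase(start: float, end: float) -> float:
    """Relative change from start to end, expressed in percent."""
    return (end - start) / start * 100.0

# Hypothetical fatality counts for the first and last year of a decade.
fatalities_first_year = 3000
fatalities_last_year = 5100

increase = percent_increase(fatalities_first_year, fatalities_last_year)
print(f"Increase over the decade: {increase:.1f}%")  # 70.0%
```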

Relevance: 30.00%

Abstract:

Sustainability is frequently defined by its three pillars: economically viable, socially equitable, and environmentally bearable. Consequently, evaluating the sustainability of any decision, public or private, requires information on these three dimensions. This paper focuses on social sustainability. In the context of renewable energy sources, examining social sustainability requires analysing not only the efficiency but also the equity of its welfare impacts. The present paper proposes and applies a methodology to generate the information necessary for a proper welfare analysis of the social sustainability of renewable energy production facilities; this information is key to both an equity and an efficiency analysis. The analysis focuses on investments in renewable energy electricity production facilities, where the impacts on local residents' welfare often differ significantly from the welfare effects on the general population. We apply the contingent valuation method to selected facilities across the different renewable energy power plants located in Portugal and conclude that local residents perceive differently the damage caused by the type, location and operation of the plants. The results from these case studies attest to the need to acknowledge and quantify the negative impacts on local communities when assessing the economic viability, social equity and environmental impact of renewable energy projects.

Relevance: 30.00%

Abstract:

We are living in the era of Big Data, a time characterized by the continuous creation of vast amounts of data, originating from different sources and in different formats. First with the rise of social networks and, more recently, with the advent of the Internet of Things (IoT), in which everyone and (eventually) everything is linked to the Internet, data with enormous potential for organizations are being continuously generated. To be more competitive, organizations want to access and explore all the richness present in those data. Indeed, Big Data is only as valuable as the insights organizations gather from it to make better decisions, which is the main goal of Business Intelligence. In this paper, we describe an experiment in which data obtained from a NoSQL data source (a database technology explicitly developed to deal with the specificities of Big Data) are used to feed a Business Intelligence solution.
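A recurring step when feeding a Business Intelligence solution from a document-oriented NoSQL source is flattening nested documents into the flat rows a warehouse table expects. The sketch below uses an in-memory list of dicts as a stand-in for documents fetched from the store; the field names and schema are hypothetical, not taken from the paper.

```python
# Sketch (hypothetical schema): flattening JSON-style documents, as stored
# in a document-oriented NoSQL database, into flat rows suitable for
# loading into a Business Intelligence / data-warehouse table.

sample_documents = [  # stand-ins for documents fetched from the NoSQL store
    {"order_id": 1, "customer": {"name": "Ana", "city": "Braga"}, "total": 120.0},
    {"order_id": 2, "customer": {"name": "Rui", "city": "Porto"}, "total": 80.5},
]

def flatten(doc: dict, prefix: str = "") -> dict:
    """Recursively flatten nested documents into dotted column names."""
    row = {}
    for key, value in doc.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            row.update(flatten(value, prefix=f"{name}."))
        else:
            row[name] = value
    return row

rows = [flatten(d) for d in sample_documents]
print(rows[0])  # {'order_id': 1, 'customer.name': 'Ana', 'customer.city': 'Braga', 'total': 120.0}
```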

Relevance: 30.00%

Abstract:

During the last few years, many research efforts have been made to improve the design of ETL (Extract-Transform-Load) systems. ETL systems are considered very time-consuming, error-prone and complex, involving several participants from different knowledge domains. ETL processes are among the most important components of a data warehousing system and are strongly influenced by the complexity of business requirements and by their change and evolution. These aspects influence not only the structure of the data warehouse but also the structures of the data sources involved. To minimize the negative impact of such variables, we propose the use of ETL patterns to build specific ETL packages. In this paper, we formalize this approach using BPMN (Business Process Model and Notation) for modelling more conceptual ETL workflows, mapping them to real execution primitives through a domain-specific language that allows for the generation of specific instances that can be executed in a commercial ETL tool.
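The pattern-to-primitives idea can be sketched as template instantiation: a named pattern holds an ordered list of parameterized primitives, and a small function fills in concrete table and column names. All pattern names, primitives and parameters below are made up for illustration; the paper's actual DSL and primitive set are not reproduced here.

```python
# Sketch (all names hypothetical): instantiating an ETL pattern into
# concrete execution primitives, in the spirit of mapping conceptual
# (BPMN-level) ETL patterns to tool primitives via a small DSL.

ETL_PATTERNS = {
    # A "surrogate key generation" pattern expressed as ordered primitives.
    "surrogate_key": [
        "lookup({source}.{key} in {dim_table})",
        "generate_key_if_missing({dim_table})",
        "replace({source}.{key} with surrogate)",
    ],
}

def instantiate(pattern: str, **params: str) -> list:
    """Fill a pattern's primitive templates with concrete table/column names."""
    return [step.format(**params) for step in ETL_PATTERNS[pattern]]

package = instantiate("surrogate_key", source="sales",
                      key="cust_id", dim_table="dim_customer")
for step in package:
    print(step)
```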

Relevance: 30.00%

Abstract:

PhD thesis in Textile Engineering

Relevance: 30.00%

Abstract:

The results of a search for charged Higgs bosons decaying to a τ lepton and a neutrino, H± → τ±ν, are presented. The analysis is based on 19.5 fb⁻¹ of proton-proton collision data at √s = 8 TeV collected by the ATLAS experiment at the Large Hadron Collider. Charged Higgs bosons are searched for in events consistent with top-quark pair production or with associated production with a top quark. The final state is characterised by the presence of a hadronic τ decay, missing transverse momentum, b-tagged jets, a hadronically decaying W boson, and the absence of any isolated electrons or muons with high transverse momenta. The data are consistent with the expected background from Standard Model processes. A statistical analysis leads to 95% confidence-level upper limits on the product of branching ratios B(t → bH±) × B(H± → τ±ν) between 0.23% and 1.3% for charged Higgs boson masses in the range 80 to 160 GeV. It also leads to 95% confidence-level upper limits on the production cross section times branching ratio, σ(pp → tH± + X) × B(H± → τ±ν), between 0.76 pb and 4.5 fb for charged Higgs boson masses ranging from 180 GeV to 1000 GeV. In the context of different scenarios of the Minimal Supersymmetric Standard Model, these results exclude nearly all values of tan β above one for charged Higgs boson masses between 80 GeV and 160 GeV, and exclude a region of parameter space with high tan β for H± masses between 200 GeV and 250 GeV.

Relevance: 30.00%

Abstract:

Nitrogen dioxide is a primary pollutant, used in estimating the air quality index, whose excessive presence may cause significant environmental and health problems. In the current work, we characterize the evolution of NO2 levels using geostatistical approaches that deal with both the space and time coordinates. To develop our proposal, a first exploratory analysis was carried out on daily values of the target variable, measured in Portugal from 2004 to 2012, which led to the identification of three influential covariates (type of site, environment and month of measurement). In a second step, appropriate geostatistical tools were applied to model the trend and the space-time variability, enabling the use of kriging techniques for prediction without requiring data from a dense monitoring network. This methodology has valuable applications, as it can provide accurate assessments of nitrogen dioxide concentrations at sites where either data have been lost or there is no monitoring station nearby.
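At its core, kriging predicts a value at an unmonitored site as a weighted average of nearby measurements, with weights obtained by solving a linear system built from a covariance model. The toy sketch below uses purely synthetic station coordinates, NO2 values and an assumed exponential covariance model; it illustrates ordinary kriging in space only, not the full space-time methodology of the paper.

```python
import numpy as np

# Toy ordinary-kriging sketch (synthetic data, exponential covariance
# assumed): predicting NO2 at an unmonitored site from three stations.

def covariance(h, sill=1.0, rng=50.0):
    """Exponential covariance model as a function of separation distance h."""
    return sill * np.exp(-h / rng)

stations = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 30.0]])  # coords (km)
values = np.array([22.0, 18.0, 25.0])                        # NO2, synthetic
target = np.array([10.0, 10.0])                              # prediction site

# Build the ordinary-kriging system: station-to-station covariances, plus a
# Lagrange-multiplier row/column enforcing that the weights sum to one.
d = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=2)
A = np.ones((4, 4))
A[:3, :3] = covariance(d)
A[3, 3] = 0.0
b = np.ones(4)
b[:3] = covariance(np.linalg.norm(stations - target, axis=1))

weights = np.linalg.solve(A, b)[:3]
prediction = weights @ values
print(f"weights sum to {weights.sum():.3f}, predicted NO2 = {prediction:.1f}")
```

The unbiasedness constraint (weights summing to one) is what distinguishes ordinary kriging from simple kriging, which assumes a known mean.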

Relevance: 30.00%

Abstract:

For any vacuum initial data set, we define a local, non-negative scalar quantity which vanishes at every point of the data hypersurface if and only if the data are Kerr initial data. Our scalar quantity depends only on the quantities used to construct the vacuum initial data set, namely the Riemannian metric defined on the initial data hypersurface and a symmetric tensor which plays the role of the second fundamental form of the embedded initial data hypersurface. The dependency is algorithmic in the sense that, given the initial data, one can compute the scalar quantity by algebraic and differential manipulations, making it suitable for implementation in a numerical code. The scalar could also be useful in studies of the non-linear stability of the Kerr solution, because it serves to measure the deviation of a vacuum initial data set from Kerr initial data in a local and algorithmic way.

Relevance: 30.00%

Abstract:

Rational manipulation of mRNA folding free energy allows rheostat control of pneumolysin production by Streptococcus pneumoniae

Relevance: 30.00%

Abstract:

A search for the associated production of the Higgs boson with a top quark pair is performed in multilepton final states using 20.3 fb⁻¹ of proton-proton collision data recorded by the ATLAS experiment at √s = 8 TeV at the Large Hadron Collider. Five final states, targeting the decays H → WW*, ττ, and ZZ*, are examined for the presence of the Standard Model (SM) Higgs boson: two same-charge light leptons (e or μ) without a hadronically decaying τ lepton; three light leptons; two same-charge light leptons with a hadronically decaying τ lepton; four light leptons; and one light lepton and two hadronically decaying τ leptons. No significant excess of events is observed above the background expectation. The best fit for the ttH production cross section, assuming a Higgs boson mass of 125 GeV, is 2.1 (+1.4 / −1.2) times the SM expectation, and the observed (expected) upper limit at the 95% confidence level is 4.7 (2.4) times the SM rate. The p-value for compatibility with the background-only hypothesis is 1.8σ; the expectation in the presence of a Standard Model signal is 0.9σ.

Relevance: 30.00%

Abstract:

This paper describes the concept, technical realisation and validation of a largely data-driven method to model events with Z→ττ decays. In Z→μμ events selected from proton-proton collision data recorded at √s = 8 TeV with the ATLAS experiment at the LHC in 2012, the Z decay muons are replaced by τ leptons from simulated Z→ττ decays at the level of reconstructed tracks and calorimeter cells. The τ lepton kinematics are derived from the kinematics of the original muons. Thus, only the well-understood decays of the Z boson and τ leptons as well as the detector response to the τ decay products are obtained from simulation. All other aspects of the event, such as the Z boson and jet kinematics as well as effects from multiple interactions, are given by the actual data. This so-called τ-embedding method is particularly relevant for Higgs boson searches and analyses in ττ final states, where Z→ττ decays constitute a large irreducible background that cannot be obtained directly from data control samples.

Relevance: 30.00%

Abstract:

PhD thesis in Environmental and Molecular Biology

Relevance: 30.00%

Abstract:

Integrated master's dissertation in Civil Engineering

Relevance: 30.00%

Abstract:

Integrated master's dissertation in Biomedical Engineering (specialization in Medical Informatics)

Relevance: 30.00%

Abstract:

Genome-scale metabolic models are valuable tools in the metabolic engineering process, based on the ability of these models to integrate diverse sources of data to produce global predictions of organism behavior. At the most basic level, these models require only a genome sequence to construct, and once built, they may be used to predict essential genes, culture conditions, pathway utilization, and the modifications required to enhance a desired organism behavior. In this chapter, we address two key challenges associated with the reconstruction of metabolic models: (a) leveraging existing knowledge of microbiology, biochemistry, and available omics data to produce the best possible model; and (b) applying available tools and data to automate the reconstruction process. We consider these challenges as we progress through the model reconstruction process, beginning with genome assembly, and culminating in the integration of constraints to capture the impact of transcriptional regulation. We divide the reconstruction process into ten distinct steps: (1) genome assembly from sequenced reads; (2) automated structural and functional annotation; (3) phylogenetic tree-based curation of genome annotations; (4) assembly and standardization of biochemistry database; (5) genome-scale metabolic reconstruction; (6) generation of core metabolic model; (7) generation of biomass composition reaction; (8) completion of draft metabolic model; (9) curation of metabolic model; and (10) integration of regulatory constraints. Each of these ten steps is documented in detail.
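The kind of prediction a finished metabolic model supports, such as predicting growth under given culture conditions, is typically computed by flux balance analysis: maximize a biomass flux subject to steady-state mass balance (S v = 0) and flux bounds. The sketch below solves this linear program for a made-up three-reaction toy network; the network, bounds and metabolite names are invented for illustration and come from neither the chapter nor any real organism.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance analysis (FBA) sketch on a hypothetical network:
#   R1: -> A (uptake),  R2: A -> B,  R3: B -> (biomass)
# Columns of S correspond to reactions R1..R3; rows to metabolites A, B.
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])
bounds = [(0, 10), (0, 100), (0, 100)]  # uptake capped at 10 flux units
c = [0, 0, -1]  # maximize the biomass flux v3 (linprog minimizes, so use -v3)

# Steady state: S v = 0 (no net accumulation of internal metabolites).
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(f"predicted growth (biomass flux): {res.x[2]:.1f}")  # 10.0
```

With the uptake bound as the only bottleneck, the optimum simply routes the full uptake capacity through to biomass, which is why the toy answer equals the uptake cap.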