974 results for Information Aggregation


Relevance: 30.00%

Abstract:

An accepted fact in software engineering is that software must undergo a verification and validation process during development to ascertain and improve its quality level. However, there are more techniques than a single developer can master, and even then it is impossible to be certain that software is free of defects. It is therefore crucial that developers are able to choose, from the available evaluation techniques, the one most suitable and most likely to yield optimum quality results for a given product. Some knowledge is available on the strengths and weaknesses of the available software quality assurance techniques, but little is known yet about the relationships between different techniques or about their behavior in different contexts. Objective: This research investigates the effectiveness of two testing techniques (equivalence class partitioning and decision coverage) and one review technique (code review by abstraction) in terms of their fault detection capability. The results will be used to strengthen the practical knowledge available on these techniques.
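As background for the first of these techniques, the sketch below illustrates equivalence class partitioning on a hypothetical input-validation function; the function, the class boundaries and the representative values are assumptions chosen for illustration, not artifacts of the study.

    # Illustrative sketch (not from the study): deriving test cases by
    # equivalence class partitioning for a hypothetical discount function
    # that accepts ages 0-120 and grants a discount for ages 65 and over.

    def senior_discount(age: int) -> float:
        """Hypothetical system under test."""
        if age < 0 or age > 120:
            raise ValueError("age out of range")
        return 0.15 if age >= 65 else 0.0

    # One representative value per equivalence class.
    equivalence_classes = {
        "invalid: below range": -5,
        "valid: non-senior":    30,
        "valid: senior":        70,
        "invalid: above range": 200,
    }

    for name, age in equivalence_classes.items():
        try:
            result = senior_discount(age)
            print(f"{name}: age={age} -> discount={result}")
        except ValueError as exc:
            print(f"{name}: age={age} -> rejected ({exc})")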

Relevance: 30.00%

Abstract:

In this study, we investigated the size, submicrometer-scale structure, and aggregation state of ZnS formed by sulfate-reducing bacteria (SRB) in a SRB-dominated biofilm growing on degraded wood in cold (T ≈ 8 °C), circumneutral-pH (7.2-8.5) waters draining from an abandoned, carbonate-hosted Pb-Zn mine. High-resolution transmission electron microscope (HRTEM) data reveal that the earliest biologically induced precipitates are crystalline ZnS nanoparticles 1-5 nm in diameter. Although most nanocrystals have the sphalerite structure, nanocrystals of wurtzite are also present, consistent with a predicted size dependence for ZnS phase stability. Nearly all the nanocrystals are concentrated into 1-5 μm diameter spheroidal aggregates that display concentric banding patterns indicative of episodic precipitation and flocculation. Abundant disordered stacking sequences and faceted, porous crystal-aggregate morphologies are consistent with aggregation-driven growth of ZnS nanocrystals prior to and/or during spheroid formation. Spheroids are typically coated by organic polymers or associated with microbial cellular surfaces, and are concentrated roughly into layers within the biofilm. Size, shape, structure, degree of crystallinity, and polymer associations will all impact ZnS solubility, aggregation and coarsening behavior, transport in groundwater, and potential for deposition by sedimentation. Results presented here reveal nanometer- to micrometer-scale attributes of biologically induced ZnS formation likely to be relevant to sequestration via bacterial sulfate reduction (BSR) of other potential contaminant metal(loid)s, such as Pb2+, Cd2+, As3+ and Hg2+, into metal sulfides. The results highlight the importance of basic mineralogical information for accurate prediction and monitoring of long-term contaminant metal mobility and bioavailability in natural and constructed bioremediation systems. Our observations also provoke interesting questions regarding the role of size-dependent phase stability in biomineralization and provide new insights into the origin of submicrometer- to millimeter-scale petrographic features observed in low-temperature sedimentary sulfide ore deposits.

Relevance: 30.00%

Abstract:

Zooplankton reside in a constantly flowing environment. However, information about their response to ambient flow has remained elusive, because of the difficulties of following the individual motions of these minute, nearly transparent animals in the ocean. Using a three-dimensional acoustic imaging system, we tracked >375,000 zooplankters at two coastal sites in the Red Sea. Resolution of their motion from that of the water showed that the animals effectively maintained their depth by swimming against upwelling and downwelling currents moving at rates of up to tens of body lengths per second, causing their accumulation at frontal zones. This mechanism explains how oceanic fronts become major feeding grounds for predators and targets for fishermen.

Relevance: 30.00%

Abstract:

Large amounts of information can be overwhelming and costly to process, especially when transmitting data over a network. A typical modern Geographical Information System (GIS) brings all types of data together based on the geographic component of the data and provides simple point-and-click query capabilities as well as complex analysis tools. Querying a Geographical Information System, however, can be prohibitively expensive due to the large amounts of data which may need to be processed. Since the use of GIS technology has grown dramatically in the past few years, there is now a greater need than ever to provide users with the fastest and least expensive query capabilities, especially since an estimated 80% of data stored in corporate databases has a geographical component. However, not every application requires the same, high-quality data for its processing. In this paper we address the issues of reducing the cost and response time of GIS queries by pre-aggregating data at the expense of some data accuracy and precision. We present computational issues in the generation of multi-level resolutions of spatial data and show that the problem of finding the best approximation for a given region and a real-valued function on this region, under a predictable error, is in general NP-complete.
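A minimal sketch of the general pre-aggregation idea follows (my own illustration with assumed names and values, not the paper's algorithm): point values are summarized once into a coarse grid, so that a later region query can be answered approximately from the grid instead of scanning every point.

    # Minimal sketch (assumptions, not the paper's algorithm): pre-aggregating
    # point values into a coarse grid so that a region query can be answered
    # approximately from the aggregate instead of scanning every point.
    from collections import defaultdict

    def build_grid_aggregate(points, cell_size):
        """points: iterable of (x, y, value). Returns {cell: [sum, count]}."""
        grid = defaultdict(lambda: [0.0, 0])
        for x, y, v in points:
            cell = (int(x // cell_size), int(y // cell_size))
            grid[cell][0] += v
            grid[cell][1] += 1
        return grid

    def approximate_region_sum(grid, cell_size, x_min, y_min, x_max, y_max):
        """Sum the pre-aggregated cells whose lower-left corner falls in the region."""
        total = 0.0
        for (cx, cy), (s, _n) in grid.items():
            if x_min <= cx * cell_size < x_max and y_min <= cy * cell_size < y_max:
                total += s
        return total

    points = [(1.2, 3.4, 10.0), (1.8, 3.9, 5.0), (7.5, 8.1, 2.0)]
    grid = build_grid_aggregate(points, cell_size=2.0)
    print(approximate_region_sum(grid, 2.0, 0, 2, 4, 6))  # coarse answer, cheaper than a full scan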

Relevance: 30.00%

Abstract:

Incorporating further information into the ordered weighted averaging (OWA) operator weights is investigated in this paper. We first prove that for a constant orness the minimax disparity model [13] has a unique optimal solution, while the modified minimax disparity model [16] admits alternative optimal OWA weights. The multiple optimal solutions of the modified minimax disparity model give us the opportunity to define a parametric OWA aggregation, which gives decision makers flexibility in the process of aggregation and in selecting the best alternative. Finally, the usefulness of the proposed parametric aggregation method is illustrated with an application in a metasearch engine. © 2011 Elsevier Inc. All rights reserved.
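For readers unfamiliar with the terminology, the sketch below shows the standard OWA aggregation and the standard orness measure; the weight vector and argument values are arbitrary examples, not taken from the paper.

    # Background sketch (standard definitions, not the paper's models): an OWA
    # operator reorders its arguments before weighting, and the "orness" of a
    # weight vector measures how close it is to the max operator.

    def owa(weights, values):
        """OWA aggregation: weights applied to values sorted in decreasing order."""
        ordered = sorted(values, reverse=True)
        return sum(w * b for w, b in zip(weights, ordered))

    def orness(weights):
        """orness(w) = (1 / (n - 1)) * sum_{j=1..n} (n - j) * w_j, in [0, 1]."""
        n = len(weights)
        return sum((n - j) * w for j, w in enumerate(weights, start=1)) / (n - 1)

    w = [0.4, 0.3, 0.2, 0.1]             # example weight vector (sums to 1)
    print(owa(w, [3.0, 9.0, 5.0, 7.0]))  # 9*0.4 + 7*0.3 + 5*0.2 + 3*0.1 = 7.0
    print(orness(w))                     # (3*0.4 + 2*0.3 + 1*0.2) / 3 = 0.666...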

Relevance: 30.00%

Abstract:

Information extraction or knowledge discovery from large data sets should be linked to a data aggregation process. Data aggregation can result in a new data representation with a decreased number of objects in a given set. A deterministic approach to separable data aggregation yields a smaller number of objects without mixing objects from different categories. A statistical approach is less restrictive and allows for almost separable data aggregation with a low level of mixing of objects from different categories. Layers of formal neurons can be designed for the purpose of data aggregation in both the deterministic and the statistical case. The proposed design method is based on minimization of convex and piecewise-linear (CPL) criterion functions.
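As one concrete example of a CPL criterion (an assumption for illustration, not necessarily the criterion used in the paper), the perceptron criterion below is convex and piecewise linear in the weight vector, and driving it to zero corresponds to separating the two categories.

    # Illustrative sketch (an assumption, not the paper's method): the perceptron
    # criterion is one example of a convex and piecewise-linear (CPL) function of
    # the weight vector; a formal neuron minimizing it separates two categories.
    import numpy as np

    def cpl_perceptron_criterion(w, b, X, y):
        """Sum of penalties max(0, -y_i * (w.x_i + b)); convex and piecewise linear in (w, b)."""
        margins = y * (X @ w + b)
        return np.sum(np.maximum(0.0, -margins))

    # Tiny two-category example; labels are +1 / -1.
    X = np.array([[0.0, 1.0], [1.0, 2.0], [3.0, 0.5], [4.0, 1.5]])
    y = np.array([1, 1, -1, -1])

    # Crude minimization by subgradient descent on the misclassified objects.
    w, b = np.zeros(2), 0.0
    for _ in range(200):
        margins = y * (X @ w + b)
        misclassified = margins <= 0
        if not misclassified.any():
            break
        w += 0.1 * (y[misclassified, None] * X[misclassified]).sum(axis=0)
        b += 0.1 * y[misclassified].sum()

    print(cpl_perceptron_criterion(w, b, X, y))  # 0.0 once the categories are separated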

Relevance: 30.00%

Abstract:

In the nonparametric framework of Data Envelopment Analysis (DEA), the statistical properties of its estimators have been investigated and only asymptotic results are available. For DEA estimators, results of practical use have been proved only for the case of one input and one output. However, in real-world problems the production process is usually well described by many variables. In this paper a machine learning approach to variable aggregation based on Canonical Correlation Analysis is presented. This approach is applied to efficiency estimation for all the farms on Terceira Island in the Azorean archipelago.
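The sketch below illustrates the general idea of CCA-based variable aggregation before a one-input, one-output efficiency calculation; the data, the positivity shift and the naive output/input ratio are my own assumptions, not the paper's procedure.

    # Hedged sketch (my own illustration, not the paper's procedure): aggregating
    # several DEA inputs and outputs into one canonical variate each with CCA,
    # so that a one-input, one-output efficiency ratio can be formed.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    inputs = rng.uniform(1.0, 10.0, size=(30, 3))   # e.g. land, labour, feed (assumed)
    outputs = rng.uniform(1.0, 10.0, size=(30, 2))  # e.g. milk, meat (assumed)

    cca = CCA(n_components=1)
    cca.fit(inputs, outputs)
    agg_in, agg_out = cca.transform(inputs, outputs)

    # Shift the canonical scores to be positive, then form a naive output/input
    # ratio as a stand-in for a single-input, single-output efficiency score.
    agg_in = agg_in.ravel() - agg_in.min() + 1.0
    agg_out = agg_out.ravel() - agg_out.min() + 1.0
    efficiency = agg_out / agg_in
    print(efficiency[:5])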

Relevance: 30.00%

Abstract:

Using the wisdom of crowds (combining many individual forecasts to obtain an aggregate estimate) can be an effective technique for improving forecast accuracy. When individual forecasts are drawn from independent and identical information sources, a simple average provides the optimal crowd forecast. However, correlated forecast errors greatly limit the ability of the wisdom of crowds to recover the truth. In practice, this dependence often emerges because information is shared: forecasters may to a large extent draw on the same data when formulating their responses.

To address this problem, I propose an elicitation procedure in which each respondent is asked to provide both their own best forecast and a guess of the average forecast that will be given by all other respondents. I study optimal responses in a stylized information setting and develop an aggregation method, called pivoting, which separates individual forecasts into shared and private information and then recombines these results in the optimal manner. I develop a tailored pivoting procedure for each of three information models, and introduce a simple and robust variant that outperforms the simple average across a variety of settings.

In three experiments, I investigate the method and the accuracy of the crowd forecasts. In the first study, I vary the shared and private information in a controlled environment, while the latter two studies examine forecasts in real-world contexts. Overall, the data suggest that a simple minimal pivoting procedure provides an effective aggregation technique that can significantly outperform the crowd average.
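A small numerical sketch of the pivoting idea follows; the numbers are invented, and the one-line pivot shown is my reading of a minimal variant (shifting the crowd mean away from the mean guess of others' forecasts), not necessarily the dissertation's exact estimator.

    # Hedged sketch of the pivoting idea (my reading, not necessarily the exact
    # estimator in the dissertation): each respondent gives a forecast and a guess
    # of the average forecast of others; shifting the crowd mean away from the
    # mean guess discounts the shared information that both quantities contain.
    import statistics

    forecasts = [54.0, 60.0, 58.0, 62.0, 57.0]          # own best forecasts (made up)
    guesses_of_others = [50.0, 52.0, 51.0, 53.0, 52.0]  # guesses of the others' average (made up)

    mean_forecast = statistics.mean(forecasts)
    mean_guess = statistics.mean(guesses_of_others)

    # Pivot: move from the mean forecast in the direction away from the mean guess.
    # Here the forecasts lie above the guesses, suggesting private information
    # pushes upward, so the pivoted estimate goes further up than the simple average.
    pivoted = mean_forecast + (mean_forecast - mean_guess)

    print(mean_forecast)  # 58.2, the simple crowd average
    print(pivoted)        # 64.8, the pivoted estimate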

Relevance: 30.00%

Abstract:

Temporal replicate counts are often aggregated to improve model fit by reducing zero-inflation and count variability, and, in the case of migration counts collected hourly throughout a migration season, aggregation allows one to ignore nonindependence. However, aggregation can represent a loss of potentially useful information on the hourly or seasonal distribution of counts, which might impact our ability to estimate reliable trends. We simulated 20-year hourly raptor migration count datasets with a known rate of change to test the effect of aggregating hourly counts to daily or annual totals on our ability to recover the known trend. We simulated data for three types of species, to test whether results varied with species abundance or migration strategy: a commonly detected species, e.g., Northern Harrier, Circus cyaneus; a rarely detected species, e.g., Peregrine Falcon, Falco peregrinus; and a species typically counted in large aggregations with overdispersed counts, e.g., Broad-winged Hawk, Buteo platypterus. We compared the accuracy and precision of estimated trends across species and count types (hourly/daily/annual) using hierarchical models that assumed a Poisson, negative binomial (NB) or zero-inflated negative binomial (ZINB) count distribution. We found little benefit of modeling zero-inflation or of modeling the hourly distribution of migration counts. For the rare species, trends analyzed using daily totals and an NB or ZINB data distribution resulted in a higher probability of detecting an accurate and precise trend. In contrast, trends of the common and overdispersed species benefited from aggregation to annual totals, and for the overdispersed species in particular, trends estimated using annual totals were more precise and resulted in lower probabilities of estimating a trend (1) in the wrong direction, or (2) with credible intervals that excluded the true trend, as compared with hourly and daily counts.
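A simplified sketch of the aggregate-then-model workflow follows; the decline rate, survey effort and distributional choices are assumptions made for illustration, and a single negative binomial GLM stands in for the hierarchical models used in the study.

    # Hedged sketch (not the authors' simulation code): aggregate simulated hourly
    # counts to annual totals and estimate a log-linear trend with a negative
    # binomial GLM. All parameter values here are assumptions for illustration.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    years = np.arange(20)
    true_trend = -0.03                 # 3% annual decline (assumed)
    hours_per_year = 10 * 60           # 60 count days x 10 hours per day (assumed)

    annual_totals = []
    for t in years:
        hourly_mean = 0.5 * np.exp(true_trend * t)        # expected birds per hour
        hourly_counts = rng.poisson(hourly_mean, hours_per_year)
        annual_totals.append(hourly_counts.sum())         # aggregation step

    X = sm.add_constant(years.astype(float))
    model = sm.GLM(np.array(annual_totals), X,
                   family=sm.families.NegativeBinomial()).fit()
    print(model.params[1])   # estimated log-linear trend, close to -0.03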

Relevance: 20.00%

Abstract:

One of the great challenges for the scientific community working on theories of genetic information, genetic communication and genetic coding is to determine a mathematical structure related to DNA sequences. In this paper we propose a model of an intra-cellular transmission system of genetic information, similar to a model of a power- and bandwidth-efficient digital communication system, in order to identify a mathematical structure in DNA sequences where such sequences are biologically relevant. The model of a transmission system of genetic information is concerned with the identification, reproduction and mathematical classification of the nucleotide sequence of single-stranded DNA by the genetic encoder. Hence, a genetic encoder is devised in which labelings and cyclic codes are established. Establishing the algebraic structure of the corresponding code alphabets, mappings, labelings, primitive polynomials p(x) and code generator polynomials g(x) is quite important in characterizing error-correcting code subclasses of G-linear codes. These latter codes are useful for the identification, reproduction and mathematical classification of DNA sequences. The characterization of this model may contribute to the development of a methodology that can be applied in mutational analysis and polymorphisms, production of new drugs and genetic improvement, among other things, resulting in the reduction of time and laboratory costs.
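As background on why cyclic codes pair naturally with nucleotide labelings, the sketch below uses one commonly assumed labeling of {A, C, G, T} onto the ring Z4 (an illustrative choice, not necessarily the paper's labeling) and shows that multiplying the resulting polynomial by x modulo (x^n - 1) is exactly a cyclic shift of the sequence.

    # Illustrative sketch (an assumption for exposition, not the paper's encoder):
    # map nucleotides to the ring Z4 so a DNA sequence becomes a polynomial over
    # Z4; multiplying by x modulo (x^n - 1) then corresponds to a cyclic shift,
    # the property that cyclic codes are built on.
    LABELING = {"A": 0, "C": 1, "G": 2, "T": 3}   # assumed labeling; others are possible
    INVERSE = {v: k for k, v in LABELING.items()}

    def to_poly(seq):
        """DNA string -> list of Z4 coefficients, coefficient i belongs to x**i."""
        return [LABELING[nt] for nt in seq]

    def shift_by_x(coeffs):
        """Multiply the polynomial by x modulo (x^n - 1): a cyclic shift."""
        return [coeffs[-1]] + coeffs[:-1]

    seq = "ACGTTG"
    poly = to_poly(seq)                           # [0, 1, 2, 3, 3, 2]
    shifted = shift_by_x(poly)                    # [2, 0, 1, 2, 3, 3]
    print("".join(INVERSE[c] for c in shifted))   # "GACGTT", the cyclic shift of seq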

Relevance: 20.00%

Abstract:

Histological and histochemical observations support the hypothesis that collagen fibers can link to elastic fibers. However, the resulting organization of elastin and collagen type I complexes, and the differences between these materials in terms of macromolecular orientation and the frequencies of their chemical vibrational groups, have not yet been resolved. This study aimed to investigate the macromolecular organization of pure elastin, collagen type I and elastin-collagen complexes using polarized light DIC-microscopy. Additionally, differences and similarities between pure elastin and collagen bundles (CB) were investigated by Fourier transform-infrared (FT-IR) microspectroscopy. Although elastin exhibited a faint birefringence, the elastin-collagen complex aggregates formed in solution exhibited a deep birefringence and the formation of an ordered supramolecular complex typical of the collagen chiral structure. The FT-IR study revealed elastin and CB peptide NH groups involved in different types of H-bonding. More energy is absorbed in the vibrational transitions corresponding to the CH, CH2 and CH3 groups (probably associated with the hydrophobicity demonstrated by 8-anilino-1-naphthalene sulfonic acid sodium salt [ANS] fluorescence), and to the νCN, δNH and ωCH2 groups, in elastin compared to CB. It is assumed that the α-helix contribution to the pure elastin amide I profile is 46.8%, whereas that of the β-sheet is 20%, and that unordered structures contribute the remaining percentage. An FT-IR profile library reveals that the elastin signature within the 1360-1189 cm⁻¹ spectral range resembles that of Conex-Toray aramid fibers.

Relevance: 20.00%

Abstract:

To assess the completeness and reliability of the data in the Information System on Live Births (Sinasc). A cross-sectional analysis of the reliability and completeness of Sinasc data was performed using a sample of Live Birth Certificates (LBCs) from 2009 related to births in Campinas, Southeast Brazil. For data analysis, hospitals were grouped according to category of service (Unified National Health System, private or both), 600 LBCs were randomly selected, and the data were collected on LBC copies from mothers' and newborns' hospital records and by telephone interviews. The completeness of the LBCs was evaluated by calculating the percentage of blank fields, and the agreement between the original LBCs and the copies was evaluated with Kappa and intraclass correlation coefficients. The percentage of completeness of the LBCs ranged from 99.8% to 100%. For most items, the agreement was excellent. However, the agreement was acceptable for marital status, maternal education and the newborn's race/color, low for prenatal visits and the presence of birth defects, and very low for the number of deceased children. The results showed that the municipal Sinasc is reliable for most of the studied variables. Investment in the training of professionals is suggested in an attempt to improve the system's capacity to support the planning and implementation of health activities for the benefit of the maternal and child population.
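As a reminder of the agreement measure used, the sketch below computes Cohen's kappa for a made-up categorical field re-collected from hospital records; the labels and values are invented for illustration, not the study's data.

    # Hedged sketch (illustrative values, not the study's data): agreement between
    # an original field and its re-collected copy can be quantified with Cohen's
    # kappa, which corrects raw percent agreement for agreement expected by chance.
    from sklearn.metrics import cohen_kappa_score

    original = ["married", "single", "single", "married", "single", "married"]
    recopied = ["married", "single", "married", "married", "single", "married"]

    print(cohen_kappa_score(original, recopied))  # 1.0 would be perfect agreement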

Relevance: 20.00%

Abstract:

The El Niño Southern Oscillation (ENSO) is a climatic phenomenon related to the inter-annual variability of global meteorological patterns, influencing sea surface temperature and rainfall variability. It influences human health indirectly through extreme temperature and moisture conditions that may accelerate the spread of some vector-borne viral diseases, like dengue fever (DF). This work examines the spatial distribution of the association between ENSO and DF in the countries of the Americas during 1995-2004, a period which includes the 1997-1998 El Niño, one of the most important climatic events of the 20th century. Data regarding the Southern Oscillation Index (SOI), indicating El Niño-La Niña activity, were obtained from the Australian Bureau of Meteorology. The annual DF incidence (AIy) by country was computed using Pan-American Health Association data. SOI and AIy values were standardised as deviations from the mean and plotted as bar-line graphs. The regression coefficient values between SOI and AIy (rSOI,AI) were calculated and spatially interpolated by an inverse distance weighted algorithm. The results indicate that among the five years registering a high number of cases (1998, 2002, 2001, 2003 and 1997), four had El Niño activity. In the southern hemisphere, the annual spatially weighted mean centre of the epidemics moved southward, from 6° 31' S in 1995 to 21° 12' S in 1999, and the rSOI,AI values were negative in Cuba, Belize, Guyana and Costa Rica, indicating a synchrony between higher DF incidence rates and higher El Niño activity. The rSOI,AI map allows visualisation of a graded surface with higher values of ENSO-DF association for Mexico, Central America, the northern Caribbean islands and the extreme north-northwest of South America.
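The spatial interpolation step can be illustrated in a few lines; the coordinates, coefficient values and distance power below are invented for the sketch and are not the study's data or settings.

    # Hedged sketch (synthetic coordinates and values, not the study's data):
    # inverse distance weighted (IDW) interpolation of per-country regression
    # coefficients onto an arbitrary query location.
    import math

    # (longitude, latitude, r_SOI_AI) for a few hypothetical country centroids
    samples = [(-77.0, 21.0, -0.45), (-88.5, 17.2, -0.30),
               (-58.9, 4.9, -0.25), (-84.0, 9.9, -0.35)]

    def idw(query_lon, query_lat, samples, power=2.0):
        """Weight each sample by 1 / distance**power; return the weighted mean."""
        num, den = 0.0, 0.0
        for lon, lat, value in samples:
            d = math.hypot(lon - query_lon, lat - query_lat)
            if d == 0.0:
                return value            # query coincides with a sample point
            w = 1.0 / d ** power
            num += w * value
            den += w
        return num / den

    print(idw(-80.0, 15.0, samples))    # interpolated association at the query point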

Relevance: 20.00%

Abstract:

The scope of this study is to identify the prevalence of access to information about how to prevent oral problems among schoolchildren in the public school network, as well as the factors associated with such access. This is a cross-sectional, analytical study conducted among 12-year-old schoolchildren in a Brazilian municipality with a large population. The examinations were performed by 24 trained and calibrated dentists with the aid of 24 recorders. Data collection occurred in 36 public schools selected from the 89 public schools of the city. Descriptive, univariate and multiple analyses were conducted. Of the 2510 schoolchildren included in the study, 2211 reported having received information about how to prevent oral problems. Access to such information was greater among those who used private dental services, and lower among those who used the service for treatment, who rated the service as fair or bad/awful, who used only a toothbrush, or a toothbrush and tongue scrubbing, as their means of oral hygiene, and who reported not being satisfied with the appearance of their teeth. The conclusion drawn is that the majority of schoolchildren had access to information about how to prevent oral problems, though access was associated with characteristics of the health services, health behavior and outcomes.