937 results for Text analysis


Relevance:

30.00%

Publisher:

Abstract:

Thermogravimetric results are influenced by a series of experimental factors, such as furnace heating rate and atmosphere, carrier-gas velocity, and sample mass. In this work, a practical evaluation of these parameters is presented for calcium oxalate, with teaching objectives in mind, since undergraduate textbooks discuss these effects but do not show the experimental details.

Relevance:

30.00%

Publisher:

Abstract:

Purpose. The purpose of this study was to evaluate the discrepancies between abstracts presented at the IADR meetings (2004-2005) and their full-text publications. Materials and Methods. Abstracts from the Prosthodontic Section of the IADR meetings were obtained. The following information was collected: abstract title, number of authors, study design, statistical analysis, outcome, and funding source. PubMed was used to identify the full-text publication of each abstract. The discrepancies between the abstract and the full-text publication were examined, categorized as major or minor, and quantified. The data were collected and analyzed using descriptive analysis. Frequencies and percentages of major and minor discrepancies were calculated. Results. A total of 109 (95.6%) articles showed changes from their abstracts. Seventy-four (65.0%) and 105 (92.0%) publications had at least one major and at least one minor discrepancy, respectively. Minor discrepancies were thus more prevalent (92.0%) than major discrepancies (65.0%). The most common minor discrepancy was in the title (80.7%), and the most common major discrepancy was in the results (48.2%). Conclusion. Minor discrepancies were more prevalent than major discrepancies. The data presented in this study may be useful in establishing a more comprehensive structured-abstract requirement for future meetings. © 2012 Soni Prasad et al.
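As a quick check of the descriptive figures above, here is a minimal sketch (in Python) of the frequency-and-percentage calculation, assuming the denominator of 114 published articles implied by the reported 95.6%; the counts are taken from the abstract and the variable names are illustrative, not from the study.

```python
# Frequencies and percentages of discrepancy types, out of the 114
# articles matched to a full-text publication (counts from the abstract
# above; variable names are illustrative).
n_published = 114
counts = {
    "any change from abstract": 109,
    "at least one major discrepancy": 74,
    "at least one minor discrepancy": 105,
}

for label, n in counts.items():
    print(f"{label}: {n} ({100 * n / n_published:.1f}%)")
# -> 95.6%, 64.9%, and 92.1%, matching the reported 95.6%, 65.0%,
#    and 92.0% up to rounding.
```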

Relevance:

30.00%

Publisher:

Abstract:

An epidemiological survey for the monitoring of bovine tuberculosis transmission was carried out in western Liguria, a region in northern Italy. Fifteen Mycobacterium bovis strains were isolated from 63 wild boar samples (62 from mandibular lymph nodes and 1 from a liver specimen). Sixteen mediastinal lymph nodes from 16 head of cattle were collected, and 15 M. bovis strains were subsequently cultured. All M. bovis strains isolated from cattle and wild boars were genotyped by spoligotyping and by restriction fragment length polymorphism (RFLP) analysis with the IS6110 and IS1081 probes. All M. bovis strains showed the typical spoligotype, characterized by the absence of spacers 39 to 43, which are present in M. tuberculosis. A total of nine different clusters were identified by spoligotyping. The largest cluster included 9 strains isolated from wild boars and 11 strains isolated from cattle, thus confirming the possibility of transmission between the two animal species. Fingerprinting by RFLP analysis with the IS6110 probe showed an identical single-band pattern for 29 of the 30 strains analyzed; only 1 strain presented a five-band pattern. The use of IS1081 as a second probe was useful for differentiating M. bovis from M. bovis BCG but not for differentiating among M. bovis strains, which presented the same undifferentiated genomic profile. From the epidemiological investigation, we hypothesized that feeding in pastures contaminated by cattle discharges is the most probable route of transmission of M. bovis between the two species. In conclusion, our results confirmed the higher discriminatory power of spoligotyping relative to RFLP analysis for differentiating M. bovis genomic profiles. Our data showed the presence of a common M. bovis genotype in both cattle and wild boars, confirming possible interspecies transmission of M. bovis.
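To illustrate the clustering step, here is a hedged sketch of grouping strains by spoligotype pattern, treating each spoligotype as a 43-position binary pattern. The strain IDs and patterns are hypothetical, except that spacers 39 to 43 are absent, the M. bovis signature noted above.

```python
# Group strains by their spoligotype pattern: identical patterns form a
# cluster. Patterns are hypothetical except for the absent spacers 39-43.
from collections import defaultdict

base = "1" * 38 + "0" * 5                 # 43 spacers; 39-43 absent
variant = base[:14] + "0" + base[15:]     # hypothetical loss of spacer 15

strains = {"boar_01": base, "cattle_07": base, "cattle_12": variant}

clusters = defaultdict(list)
for strain_id, pattern in strains.items():
    clusters[pattern].append(strain_id)

for i, members in enumerate(clusters.values(), start=1):
    print(f"cluster {i}: {members}")
# A cluster containing both boar and cattle isolates (here cluster 1)
# is the kind of evidence cited above for interspecies transmission.
```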

Relevance:

30.00%

Publisher:

Abstract:

This report summarizes the financial and production records of 139 dairy farms from throughout Michigan in 2006. To be included, a farm must have produced at least 50 percent of its gross cash farm income from milk and dairy animal sales. The records came from Michigan State University's TelFarm project and the Farm Credit Service system in Michigan. The values were pooled into averages for reporting purposes. The farms included are larger than the average of all dairy farms in Michigan. While considerable variation exists in the data, average values are reported in the summary tables and in the discussion that follows.

Relevance:

30.00%

Publisher:

Abstract:

Surveys of commercial markets combined with molecular taxonomy (i.e. molecular monitoring) provide a means to detect products from illegal, unregulated and/or unreported (IUU) exploitation, including the sale of fisheries bycatch and wild meat (bushmeat). Capture-recapture analyses of market products using DNA profiling have the potential to estimate the total number of individuals entering the market. However, these analyses are not directly analogous to those of living individuals because a ‘market individual’ does not die suddenly but, instead, remains available for a time in decreasing quantities, rather like the exponential decay of a radioactive isotope. Here we use mitochondrial DNA (mtDNA) sequences and microsatellite genotypes to individually identify products from North Pacific minke whales (Balaenoptera acutorostrata ssp.) purchased in 12 surveys of markets in the Republic of (South) Korea from 1999 to 2003. By applying a novel capture-recapture model with a decay rate parameter to the 205 unique DNA profiles found among 289 products, we estimated that the total number of whales entering trade across the five-year survey period was 827 (SE, 164; CV, 0.20) and that the average ‘half-life’ of products from an individual whale on the market was 1.82 months (SE, 0.24; CV, 0.13). Our estimate of whales in trade (reflecting the true numbers killed) was significantly greater than the officially reported bycatch of 458 whales for this period. This unregulated exploitation has serious implications for the survival of this genetically distinct coastal population. Although our capture-recapture model was developed for specific application to the Korean whale-meat markets, the exponential decay function could be modified to improve estimates of trade in other wild-meat or fisheries markets, or of the abundance of living populations surveyed by noninvasive genotyping.
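The decay component of the model can be made concrete with a short sketch: converting the estimated half-life into a decay rate and computing the expected fraction of one whale's products still on sale after a given time. This reproduces only the decay ingredient, not the full capture-recapture likelihood.

```python
# Convert the estimated market half-life into an exponential decay rate
# and compute the expected fraction of one whale's products still on
# sale after t months (only the decay ingredient of the model).
import math

t_half = 1.82                       # estimated half-life, months
lam = math.log(2) / t_half          # decay rate, ~0.381 per month

for t in (1, 2, 6, 12):
    frac = math.exp(-lam * t)
    print(f"after {t:2d} months: {frac:.1%} of products remain")
```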

Relevance:

30.00%

Publisher:

Abstract:

Small businesses are considered important engines of job growth and economic development by policy makers worldwide. One of the most commonly cited constraints on small businesses is a lack of access to capital. To address this constraint, small business loan guarantee programs have been established in over 100 countries. Guarantee funds vary in design, the most significant differences being which borrowers are eligible for guarantees and how borrowers are approved. There is currently no clear alignment between program types and the economic conditions they operate in, though some trends are becoming apparent, and these trends may not be leading to the best economic outcomes possible. By better matching the structure of a guarantee fund to the economic conditions it operates in, a program's success in meeting economic development goals may be greatly improved. Many programs in developing countries may not be taking advantage of bank expertise and may be limiting the scope of their effectiveness. At the same time, programs in developed countries may be wasting resources by scattering their efforts too thinly and subsidizing less competitive firms to the detriment of local economic development.

Relevance:

30.00%

Publisher:

Abstract:

Sparse traffic grooming is a practical problem in heterogeneous, multi-vendor optical WDM networks in which only some of the optical cross-connects (OXCs) have grooming capabilities. Such a network is called a sparse grooming network. The sparse grooming problem under dynamic traffic in optical WDM mesh networks is relatively unexplored. In this work, we propose the maximize-lightpath-sharing multi-hop (MLS-MH) grooming algorithm to support dynamic traffic grooming in sparse grooming networks. We also present an analytical model to evaluate the blocking performance of the MLS-MH algorithm. Simulation results show that MLS-MH outperforms an existing grooming algorithm, the shortest-path single-hop (SPSH) algorithm, and the numerical results from the analytical model closely match the simulation. The effect of the number of grooming nodes in the network on blocking performance is also analyzed.
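The abstract does not give MLS-MH pseudocode, so the following is only a hedged sketch of the general idea the name suggests: route an incoming request over existing lightpaths with spare capacity (maximizing sharing, possibly over multiple hops) and fall back to a new lightpath only when no shared route exists. The topology, capacities, and unit-cost function are hypothetical.

```python
# Hedged sketch of the idea behind maximize-lightpath-sharing multi-hop
# (MLS-MH) grooming: carry a new request over existing lightpaths with
# spare capacity whenever possible; only set up a new lightpath as a
# fallback. The real algorithm's cost function and tie-breaking rules
# are not specified in the abstract.
import heapq

def cheapest_shared_route(lightpaths, src, dst, demand):
    """Dijkstra over the virtual topology of existing lightpaths.

    lightpaths: dict mapping node -> list of (neighbor, free_capacity).
    Returns a node list, or None if no shared route can carry the demand.
    """
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, free in lightpaths.get(node, []):
            if free >= demand and nbr not in seen:
                heapq.heappush(heap, (cost + 1, nbr, path + [nbr]))
    return None  # caller would fall back to setting up a new lightpath

# Toy virtual topology: A-B and B-C lightpaths with spare capacity.
virtual = {"A": [("B", 3)], "B": [("C", 2)], "C": []}
print(cheapest_shared_route(virtual, "A", "C", demand=1))  # ['A', 'B', 'C']
```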

Relevance:

30.00%

Publisher:

Abstract:

Most authors struggle to pick a title that adequately conveys all of the material covered in a book. When I first saw Applied Spatial Data Analysis with R, I expected a review of spatial statistical models and their applications in packages (libraries) from R's CRAN site. The title is not misleading, but I was very pleasantly surprised by how deep the word "applied" goes here. The first half of the book essentially covers how R handles spatial data. To some statisticians this may be boring. Do you want, or need, to know the difference between S3 and S4 classes, how spatial objects in R are organized, and how various methods work on those objects? A few years ago I would have said "no," especially to the "want" part. Just let me slap my Excel spreadsheet into R and run some spatial functions on it. Unfortunately, the world is not so simple, and ultimately we want to minimize the effort needed to get all of our spatial analyses accomplished. The first half of this book certainly convinced me that some extra effort in organizing my data into certain spatial class structures makes the analysis easier and less subject to mistakes. I also admit that I found it very interesting and learned a lot.

Relevance:

30.00%

Publisher:

Abstract:

Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
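For readers wanting the baseline notation, the three-stage hierarchy the authors refer to is usually written with a data model, a process model, and a parameter model, in the bracket notation common in this literature; here y denotes the data, z the latent ecological process, and θ_d, θ_p the data- and process-model parameters.

```latex
% Three-stage hierarchy in bracket notation: the joint posterior of the
% process and parameters is proportional to the product of the data,
% process, and parameter models.
\[
  [\, z, \theta_d, \theta_p \mid y \,] \;\propto\;
  \underbrace{[\, y \mid z, \theta_d \,]}_{\text{data model}}\,
  \underbrace{[\, z \mid \theta_p \,]}_{\text{process model}}\,
  \underbrace{[\, \theta_d, \theta_p \,]}_{\text{parameter model}}
\]
```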

Relevance:

30.00%

Publisher:

Abstract:

We propose a general framework for the analysis of animal telemetry data through the use of weighted distributions. It is shown that several interpretations of resource selection functions arise when they are constructed from the ratio of a use distribution and an availability distribution. Within the proposed framework, several popular resource selection models are shown to be special cases of the general model, obtained by making assumptions about animal movement and behavior. The weighted-distribution framework is easily extended to account for telemetry data that are highly autocorrelated, as is typical of animal relocations collected with technologies such as global positioning system (GPS) tags. An analysis of simulated data using several models constructed within the proposed framework illustrates the possible gains from this flexible modeling framework. The proposed model is applied to a brown bear data set from southeast Alaska.
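In generic notation (x a location or resource, w(x; β) the selection function, f_a the availability distribution), the weighted-distribution construction of the use distribution is:

```latex
% Use distribution as a weighted availability distribution: w(x; beta)
% is the resource selection function, f_a the availability distribution.
\[
  f_u(x) \;=\; \frac{w(x;\beta)\, f_a(x)}{\int w(x';\beta)\, f_a(x')\, dx'}
\]
```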

Relevance:

30.00%

Publisher:

Abstract:

1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in the analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes that all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat modeling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with these theoretical developments, we describe state-of-the-art software that implements the methods and makes them accessible to practicing ecologists.
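As background for the engines described in point 4, conventional distance sampling in standard notation uses a detection function g(x) with g(0) = 1 (certain detection on the line); a common choice is the half-normal key, and density follows from the line-transect estimator (n detections, total line length L, truncation distance w):

```latex
% Conventional distance sampling: half-normal detection function,
% average detection probability within truncation distance w, and the
% line-transect density estimator (n detections, total line length L).
\[
  g(x) = \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right), \qquad
  \hat{P}_a = \frac{1}{w}\int_{0}^{w} g(x)\, dx, \qquad
  \hat{D} = \frac{n}{2\, w L\, \hat{P}_a}
\]
```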

Relevance:

30.00%

Publisher:

Abstract:

Static analysis tools report software defects that may or may not be detected by other verification methods. Two challenges complicating the adoption of these tools are spurious false positive warnings and legitimate warnings that are not acted on. This paper reports automated support to help address these challenges using logistic regression models that predict the foregoing types of warnings from signals in the warnings and implicated code. Because examining many potential signaling factors in large software development settings can be expensive, we use a screening methodology to quickly discard factors with low predictive power and cost-effectively build predictive models. Our empirical evaluation indicates that these models can achieve high accuracy in predicting accurate and actionable static analysis warnings, and suggests that the models are competitive with alternative models built without screening.
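A minimal sketch of the screen-then-model idea, assuming hypothetical data and thresholds and using scikit-learn (the paper's actual factors and screening criteria are not reproduced here): candidate factors are screened by univariate predictive power, and a logistic regression is fit on the survivors.

```python
# Screen-then-model sketch: discard candidate factors with low
# univariate predictive power (AUC near 0.5), then fit a logistic
# regression on the survivors. Data and threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 6))              # 6 candidate warning/code factors
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(size=n) > 0).astype(int)

# Screening: keep factors whose univariate AUC departs from 0.5.
aucs = [roc_auc_score(y, X[:, j]) for j in range(X.shape[1])]
kept = [j for j, a in enumerate(aucs) if abs(a - 0.5) > 0.05]

model = LogisticRegression().fit(X[:, kept], y)
print("kept factors:", kept)
print("training accuracy:", model.score(X[:, kept], y))
```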

Relevance:

30.00%

Publisher:

Abstract:

This chapter explores the characteristics of 114 American teenagers' Jewish identities using data from the National Study of Youth and Religion (NSYR). The NSYR includes a telephone survey of a nationally representative sample of 3,290 adolescents aged 13 to 17; Jewish teenagers were over-sampled, resulting in a total of 3,370 teenage participants. Of the NSYR teens surveyed, 141 have at least one Jewish parent, and 114 of them identify as Jewish. The NSYR also includes in-depth face-to-face interviews with 267 U.S. teens: 23 who have at least one Jewish parent and 18 who identify as Jewish. The following analysis draws upon the quantitative data from the 114 teens who identified themselves as Jewish in the telephone survey.

Relevance:

30.00%

Publisher:

Abstract:

Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Such applications are often used to guide important decisions or to aid in important tasks, so it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults. Despite such evidence, until recently relatively little research had been done to help end-users create more dependable software. We have been working to address this problem by finding ways to provide at least some of the benefits of formal software engineering techniques to end-user programmers. In this talk, focusing on the spreadsheet application paradigm, I present several of our approaches, emphasizing methodologies that use source-code-analysis techniques to help end-users build more dependable spreadsheets. Behind the scenes, our methodologies use static analyses such as dataflow analysis and slicing, together with dynamic analyses such as execution monitoring, to support user tasks such as validation and fault localization. I show how, to accommodate the user base of spreadsheet languages, an interface to these methodologies can be provided in a manner that does not require an understanding of the theory behind the analyses, yet supports the interactive, incremental process by which spreadsheets are created. Finally, I present empirical results gathered from the use of our methodologies that highlight several cost-benefit trade-offs and many opportunities for future work.
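One of the static analyses mentioned, dataflow analysis with slicing, can be illustrated on a toy spreadsheet: build a cell dependency graph from the formulas and compute a backward slice, i.e. every cell that can influence a target cell, which is the kind of information that supports fault localization. The sheet below is hypothetical.

```python
# Treat a spreadsheet as a dataflow graph (cell -> cells it references)
# and compute a backward slice: every cell the target transitively
# depends on. The tiny sheet here is hypothetical.
deps = {                 # cell -> cells referenced by its formula
    "A1": [],            # input
    "A2": [],            # input
    "B1": ["A1", "A2"],  # =A1+A2
    "C1": ["B1"],        # =B1*2
    "D1": ["A2"],        # =A2-1
}

def backward_slice(target, deps):
    """Return the set of cells that can influence `target`."""
    stack, seen = [target], set()
    while stack:
        cell = stack.pop()
        for ref in deps.get(cell, []):
            if ref not in seen:
                seen.add(ref)
                stack.append(ref)
    return seen

print(backward_slice("C1", deps))   # {'B1', 'A1', 'A2'}; D1 is irrelevant
```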

Relevance:

30.00%

Publisher:

Abstract:

Masticatory muscle contraction causes both jaw movement and tissue deformation during function. Natural chewing data from 25 adult miniature pigs were studied by means of time series analysis. The data set included simultaneous recordings of electromyography (EMG) from bilateral masseter (MA), zygomaticomandibularis (ZM) and lateral pterygoid muscles, bone surface strains from the left squamosal bone (SQ), condylar neck (CD) and mandibular corpus (MD), and linear deformation of the capsule of the jaw joint measured bilaterally using differential variable reluctance transducers. Pairwise comparisons were examined by calculating the cross-correlation functions. Jaw-adductor muscle activity of MA and ZM was found to be highly cross-correlated with CD and SQ strains and weakly with MD strain. No muscle’s activity was strongly linked to capsular deformation of the jaw joint, nor were bone strains and capsular deformation tightly linked. Homologous muscle pairs showed the greatest synchronization of signals, but the signals themselves were not significantly more correlated than those of non-homologous muscle pairs. These results suggested that bone strains and capsular deformation are driven by different mechanical regimes. Muscle contraction and ensuing reaction forces are probably responsible for bone strains, whereas capsular deformation is more likely a product of movement.
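A hedged sketch of the pairwise computation described above: the normalized cross-correlation between two signals over a range of lags, using a simulated "EMG" and a delayed, noisy "strain" rather than the study's recordings.

```python
# Normalized cross-correlation between two signals at a range of lags
# (simulated EMG-like and strain-like traces, not the study's data).
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 10, 0.01)                     # 100 Hz, 10 s
emg = np.sin(2 * np.pi * 1.5 * t)              # ~1.5 chew cycles per second
strain = np.roll(emg, 5) + 0.3 * rng.normal(size=t.size)  # 50 ms lag + noise

def norm_xcorr(a, b):
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full")     # lags -(n-1) .. (n-1)

cc = norm_xcorr(emg, strain)
lag = np.argmax(cc) - (len(emg) - 1)           # negative => strain lags EMG
print(f"peak correlation {cc.max():.2f} at lag {lag} samples")
```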