826 results for Population set-based methods
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Nonculture-based methods for detecting infections caused by fungal pathogens are becoming increasingly important tools in the management of infected patients. Detection of fungal antigens and DNA appears to be the most promising approach in this respect, for both opportunistic and endemic mycoses. In this article we present an overview of the most recent developments in nonculture-based methods and examine their value in clinical practice.
Abstract:
The objective of this study was to evaluate the effective numbers of founders and ancestors, generation intervals, and pedigree completeness in Jaffarabadi buffaloes raised in Brazil. Pedigree records of 1,272 animals born from 1966 onward were used. The parameters were estimated using ENDOG, a population genetics software package. Pedigree completeness was 99.5, 50.9, and 20.5 for the first, second, and third generations, respectively. Generation interval estimates, expressed in years for the different pathways, were 12.28 ± 6.90 (sire-son), 11.55 ± 6.07 (sire-daughter), 8.20 ± 2.63 (dam-son), and 8.79 ± 4.33 (dam-daughter). The overall average generation interval was 10.17 ± 5.43 years. The numbers of founders, founder equivalents, and ancestors that contributed to the genetic diversity of the reference population (n = 1,059) were 136, 130, and 134, respectively. The effective numbers of founders (f_e = 8) and ancestors (f_a = 7) were small, and the expected increase in inbreeding per generation was calculated as 4.99%. Four ancestors explained 50% of the genetic variability in the population, and the major ancestor alone contributed approximately 33% of the total genetic variation. The genetic diversity of the current population is therefore low, as a consequence of the reduced number of ancestors.
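The effective number of founders used above follows the standard definition f_e = 1 / Σ q_k², where q_k is the expected proportional genetic contribution of founder k. As a rough illustration of why f_e collapses far below the raw founder count when a few animals dominate the gene pool, a minimal Python sketch (the contribution values are invented, not the Jaffarabadi data):

```python
# Minimal sketch of the effective-number-of-founders statistic computed
# by pedigree packages such as ENDOG. The contribution values below are
# hypothetical, not the Jaffarabadi data.

def effective_number(contributions):
    """f_e = 1 / sum(q_k^2), where q_k is the expected proportional
    genetic contribution of founder k (normalized to sum to 1)."""
    total = sum(contributions)
    q = [c / total for c in contributions]
    return 1.0 / sum(qk * qk for qk in q)

# One dominant founder (about a third of the gene pool) plus many minor
# ones drives f_e far below the raw founder count of 136.
contributions = [0.33, 0.10, 0.05, 0.03] + [0.49 / 132] * 132
print(round(effective_number(contributions), 1))  # ~8, matching f_e = 8
```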
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Over the past two decades, a growing portion of robotics research has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the nonlinear nature of cables, which can exert only tensile forces. The work presented in this thesis therefore focuses on the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work concerns an interval-analysis-based procedure for solving the direct geometric problem of a generic cable manipulator. Besides allowing a rapid solution of the problem, this technique guarantees the obtained results against rounding and elimination errors and can take into account uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator, whose realization is described in this dissertation together with the auxiliary work carried out during its design and simulation phases.
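To give a feel for how interval analysis certifies solutions against rounding errors, here is a hypothetical branch-and-prune sketch for a toy one-dimensional equation; an actual direct-kinematics solver applies the same discard-or-bisect logic to the multivariate loop-closure equations of the cable robot, which this sketch does not attempt to reproduce:

```python
# Minimal branch-and-prune sketch of the interval-analysis idea behind a
# guaranteed direct-kinematics solver. A toy 1-D equation stands in for
# the full nonlinear system of a cable manipulator.

def f_interval(lo, hi):
    """Natural interval extension of f(x) = x^2 - 2 on [lo, hi]."""
    candidates = [lo * lo, hi * hi]
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(candidates)
    sq_hi = max(candidates)
    return sq_lo - 2.0, sq_hi - 2.0

def solve(lo, hi, tol=1e-12):
    """Return sub-intervals of [lo, hi] guaranteed to enclose every root
    of f: boxes whose interval extension excludes 0 are discarded."""
    stack, roots = [(lo, hi)], []
    while stack:
        a, b = stack.pop()
        flo, fhi = f_interval(a, b)
        if flo > 0.0 or fhi < 0.0:   # 0 not in f([a,b]): no root here
            continue
        if b - a < tol:              # box small enough: keep as enclosure
            roots.append((a, b))
            continue
        m = 0.5 * (a + b)            # bisect and examine both halves
        stack += [(a, m), (m, b)]
    return roots

print(solve(0.0, 2.0))  # encloses sqrt(2) ~ 1.41421356...
```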
Abstract:
Communication is currently shifting from a centralized scenario, in which media such as newspapers, radio, and TV programs produce information and people are merely consumers, to a completely different, decentralized scenario in which everyone is potentially an information producer, through social networks, blogs, and forums that allow real-time worldwide information exchange. As a result of their widespread diffusion, these new instruments have come to play an important socio-economic role: they are the most heavily used communication media and consequently constitute the main source of information on which enterprises, political parties, and other organizations can rely. Analyzing the data stored in servers all over the world is feasible by means of Text Mining techniques such as Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This makes it possible to determine, for instance, the degree of user satisfaction with products, services, politicians, and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov Chains. All these approaches rely on a Markov Chain based model that is language independent and whose key features are simplicity and generality, which make it attractive compared with previous, more sophisticated techniques. Every technique discussed has been tested on both Single-Domain and Cross-Domain Sentiment Classification tasks, comparing its performance with that of two previous works. The analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature on both single-domain and cross-domain tasks for 2-class (i.e., positive vs. negative) Document Sentiment Classification. There is still room for improvement, however: this work also indicates how performance could be enhanced, namely, a good novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also address validating these results on tasks with more than 2 classes.
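As a hedged illustration of the general idea, here is a generic first-order Markov chain document classifier, not necessarily the dissertation's exact formulation: one chain per class is estimated from word-to-word transitions, and a document is assigned to the class whose chain gives it the higher smoothed log-likelihood.

```python
# Generic first-order Markov chain sentiment classifier (illustrative;
# the dissertation's model may differ in its details).
from collections import defaultdict
from math import log

def train(docs):
    """Count word-to-word transitions over all training documents."""
    counts = defaultdict(lambda: defaultdict(int))
    for doc in docs:
        words = doc.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def log_likelihood(counts, doc, vocab_size):
    """Smoothed log-probability of the document's transition sequence."""
    score = 0.0
    words = doc.lower().split()
    for a, b in zip(words, words[1:]):
        row = counts[a]
        # Laplace smoothing keeps unseen transitions finite.
        score += log((row[b] + 1) / (sum(row.values()) + vocab_size))
    return score

pos = train(["great phone really great battery", "really good value"])
neg = train(["terrible battery really bad phone", "bad value"])
vocab = 1000  # assumed vocabulary size for smoothing
doc = "really great battery"
print("positive" if log_likelihood(pos, doc, vocab) >
      log_likelihood(neg, doc, vocab) else "negative")
```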
Abstract:
This thesis aims to assess similarities and mismatches between the outputs of two independent methods for cloud cover quantification and classification, which rest on quite different physical bases. One is the SAFNWC software package, designed to process radiance data acquired by the SEVIRI sensor in the VIS/IR range. The other is the MWCC algorithm, which uses the brightness temperatures acquired by the AMSU-B and MHS sensors in their channels centered on the MW water vapour absorption band. In a first stage, their cloud detection capability was tested by comparing the cloud masks they produce. These showed good agreement between the two methods, although some critical situations stand out: the MWCC fails to reveal clouds that SAFNWC classifies as fractional, cirrus, very low, or high opaque clouds. In the second stage of the inter-comparison, the pixels classified as cloudy by both packages were compared. The overall tendency of the MWCC method is an overestimation of the lower cloud classes; conversely, the higher the cloud top, the more often the MWCC misses cloud portions that the SAFNWC tool does detect. The same picture emerges from a series of tests that used cloud top height information to evaluate the height ranges within which each MWCC category is defined. Therefore, although the two methods are meant to provide the same kind of information, they actually return quite different details on the same atmospheric column. The SAFNWC retrieval, being very sensitive to the cloud top temperature, returns the actual level reached by the cloud. The MWCC, by exploiting the penetration capability of microwaves, provides information about levels located more deeply within the atmospheric column.
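A minimal sketch of the kind of pixel-wise cloud-mask inter-comparison performed in the first stage, with toy arrays standing in for the SEVIRI- and AMSU-B/MHS-derived masks: the two masks are cross-tabulated into a 2x2 contingency table and an overall agreement fraction.

```python
# Pixel-wise agreement between two binary cloud masks (illustrative
# arrays, not actual SAFNWC / MWCC output).
import numpy as np

safnwc = np.array([[1, 1, 0], [0, 1, 0], [1, 0, 0]])  # 1 = cloudy
mwcc   = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1]])

both_cloudy = np.sum((safnwc == 1) & (mwcc == 1))
only_safnwc = np.sum((safnwc == 1) & (mwcc == 0))   # e.g. thin cirrus missed by MW
only_mwcc   = np.sum((safnwc == 0) & (mwcc == 1))
both_clear  = np.sum((safnwc == 0) & (mwcc == 0))

agreement = (both_cloudy + both_clear) / safnwc.size
print(both_cloudy, only_safnwc, only_mwcc, both_clear, agreement)
```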
Abstract:
The authors conducted an in vivo study to determine clinical cutoffs for a laser fluorescence (LF) device, an LF pen and a fluorescence camera (FC), as well as to evaluate the clinical performance of these methods and conventional methods in detecting occlusal caries in permanent teeth by using the histologic gold standard for total validation of the sample.
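For illustration only: one common way to derive a clinical cutoff from device readings validated against a histologic gold standard is to maximize Youden's J = sensitivity + specificity - 1. The sketch below uses invented readings, not the study's data or its actual statistical procedure.

```python
# Hypothetical cutoff selection via Youden's index (invented values).
readings = [4, 7, 9, 12, 14, 21, 25, 30, 41, 55]   # device readings
caries   = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]          # histology: 1 = lesion

def youden(cutoff):
    """sensitivity + specificity - 1 at a given reading cutoff."""
    tp = sum(r >= cutoff and c for r, c in zip(readings, caries))
    fn = sum(r < cutoff and c for r, c in zip(readings, caries))
    tn = sum(r < cutoff and not c for r, c in zip(readings, caries))
    fp = sum(r >= cutoff and not c for r, c in zip(readings, caries))
    return tp / (tp + fn) + tn / (tn + fp) - 1

best = max(sorted(set(readings)), key=youden)
print(best, round(youden(best), 2))
```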
Abstract:
Understanding the canopy cover of an urban environment leads to better estimates of carbon storage and more informed management decisions by urban foresters. The most commonly used method for assessing urban forest cover type extent is ground surveys, which can be both timeconsuming and expensive. The analysis of aerial photos is an alternative method that is faster, cheaper, and can cover a larger number of sites, but may be less accurate. The objectives of this paper were (1) to compare three methods of cover type assessment for Los Angeles, CA: handdelineation of aerial photos in ArcMap, supervised classification of aerial photos in ERDAS Imagine, and ground-collected data using the Urban Forest Effects (UFORE) model protocol; (2) to determine how well remote sensing methods estimate carbon storage as predicted by the UFORE model; and (3) to explore the influence of tree diameter and tree density on carbon storage estimates. Four major cover types (bare ground, fine vegetation, coarse vegetation, and impervious surfaces) were determined from 348 plots (0.039 ha each) randomly stratified according to land-use. Hand-delineation was better than supervised classification at predicting ground-based measurements of cover type and UFORE model-predicted carbon storage. Most error in supervised classification resulted from shadow, which was interpreted as unknown cover type. Neither tree diameter or tree density per plot significantly affected the relationship between carbon storage and canopy cover. The efficiency of remote sensing rather than in situ data collection allows urban forest managers the ability to quickly assess a city and plan accordingly while also preserving their often-limited budget.
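As an illustration of the method comparison in objective (1), per-plot canopy-cover estimates from the two remote-sensing workflows can be scored against the ground-collected plots with a simple RMSE; all fractions below are invented for the example, not the Los Angeles data.

```python
# Scoring two remote-sensing cover estimates against ground plots
# (hypothetical canopy fractions, not UFORE data).
import numpy as np

ground     = np.array([0.42, 0.10, 0.55, 0.31, 0.23])  # ground survey
delineated = np.array([0.40, 0.12, 0.50, 0.33, 0.25])  # hand-delineated photos
supervised = np.array([0.30, 0.05, 0.44, 0.40, 0.15])  # supervised classifier

def rmse(est, ref):
    """Root-mean-square error of an estimate against a reference."""
    return float(np.sqrt(np.mean((est - ref) ** 2)))

print("hand-delineation RMSE:", rmse(delineated, ground))
print("supervised RMSE:      ", rmse(supervised, ground))
```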
Abstract:
The Social Web offers increasingly simple ways to publish and disseminate personal or opinionated information, which can rapidly exert a disastrous influence on the online reputation of organizations. Based on social Web data, this study describes the building of an ontology based on fuzzy sets. After recurring harvesting of folksonomies by Web agents, the aggregated tags are purified, linked, and transformed into a so-called fuzzy grassroots ontology by means of a fuzzy clustering algorithm. This self-updating ontology is used for online reputation analysis, a crucial task of reputation management, with the goal of following the online conversation around an organization in order to discover and monitor its reputation. In addition, an application of the Fuzzy Online Reputation Analysis (FORA) framework, lessons learned, and potential extensions are discussed in this article.
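The clustering step can be illustrated with a bare-bones fuzzy c-means over toy tag co-occurrence vectors; the FORA pipeline itself is considerably richer, and everything below (data, cluster count, parameters) is an assumption for illustration.

```python
# Bare-bones fuzzy c-means over toy tag vectors: each tag ends up with a
# graded membership in every cluster, the basic ingredient of a fuzzy
# ontology. Not the FORA framework's actual algorithm or data.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2.0 / (m - 1.0)))      # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Each row: a tag's co-occurrence with three anchor terms (toy numbers).
tags = np.array([[9., 1., 0.], [8., 2., 1.], [0., 7., 8.], [1., 8., 9.]])
U, _ = fuzzy_cmeans(tags, c=2)
print(U.round(2))   # graded membership of each tag in each cluster
```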
Abstract:
This study aimed to evaluate the effectiveness of fluorescence-based methods (DIAGNOdent, LF; DIAGNOdent pen, LFpen; and VistaProof fluorescence camera, FC) in detecting demineralization and remineralization on smooth surfaces in situ. Ten volunteers wore acrylic palatal appliances, each containing 6 enamel blocks, which were demineralized for 14 days by exposure to a 20% sucrose solution; 3 of the blocks were then remineralized for 7 days with a fluoride dentifrice. The 60 enamel blocks were evaluated by two examiners using LF, LFpen, and FC at baseline and after demineralization, and 30 blocks were evaluated after remineralization. The blocks were then submitted to surface microhardness (SMH) and cross-sectional microhardness analyses, and the integrated loss of surface hardness (ΔKHN) was calculated. The intraclass correlation coefficient for interexaminer reproducibility ranged from 0.21 (FC) to 0.86 (LFpen). SMH, LF, and LFpen values presented significant differences among the three phases; FC fluorescence values, however, showed no significant difference between the demineralization and remineralization phases. Fluorescence values for baseline, demineralized, and remineralized enamel were, respectively, 5.4 ± 1.0, 9.2 ± 2.2, and 7.0 ± 1.5 for LF; 10.5 ± 2.0, 15.0 ± 3.2, and 12.5 ± 2.9 for LFpen; and 1.0 ± 0.0, 1.0 ± 0.1, and 1.0 ± 0.1 for FC. SMH and ΔKHN showed significant differences between the demineralization and remineralization phases. There was a significant negative correlation between SMH and both LF and LFpen in the remineralization phase. In conclusion, the LF and LFpen devices were effective in detecting demineralization and remineralization provoked on smooth surfaces in situ.
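For readers unfamiliar with the ΔKHN summary used here: it integrates the hardness deficit of the lesion relative to sound enamel over depth. A minimal sketch with invented hardness profiles (not the study's measurements):

```python
# Integrated loss of surface hardness (ΔKHN) from a cross-sectional
# microhardness profile, via trapezoidal integration over depth.
# All numbers are invented for illustration.
import numpy as np

depth_um   = np.array([10., 30., 50., 70., 90., 110.])       # depth (um)
sound_khn  = np.full(6, 350.0)                               # sound baseline
lesion_khn = np.array([120., 180., 240., 300., 340., 350.])  # lesion profile

deficit = sound_khn - lesion_khn                             # hardness loss
# Trapezoidal rule: sum of mean deficit per interval times interval width.
delta_khn = np.sum(0.5 * (deficit[:-1] + deficit[1:]) * np.diff(depth_um))
print(delta_khn)  # 9100.0 KHN*um with these toy numbers
```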
Abstract:
Although there has been a significant decrease in caries prevalence in developed countries, the slower progression of dental caries requires methods capable of detecting and quantifying lesions at an early stage. The aim of this study was to evaluate the effectiveness of fluorescence-based methods (DIAGNOdent 2095 laser fluorescence device [LF], DIAGNOdent 2190 pen [LFpen], and VistaProof fluorescence camera [FC]) in monitoring the progression of noncavitated caries-like lesions on smooth surfaces. Caries-like lesions were developed in 60 blocks of bovine enamel using a bacterial model of Streptococcus mutans and Lactobacillus acidophilus. The enamel blocks were evaluated by two independent examiners using the LF, LFpen, and FC at baseline (phase I), after the first cariogenic challenge (eight days) (phase II), and after the second cariogenic challenge (a further eight days) (phase III). Blocks were submitted to surface microhardness (SMH) and cross-sectional microhardness analyses. The intraclass correlation coefficient for intra- and interexaminer reproducibility ranged from 0.49 (FC) to 0.94 (LF/LFpen). SMH values decreased and fluorescence values increased significantly across the three phases. Higher values for sensitivity, specificity, and area under the receiver operating characteristic curve were observed for FC (phase II) and LFpen (phase III). A significant correlation was found between fluorescence values and SMH in all phases, and with the integrated loss of surface hardness (ΔKHN) in phase III. In conclusion, fluorescence-based methods were effective in monitoring noncavitated caries-like lesions on smooth surfaces, with moderate correlation with SMH, allowing differentiation between sound and demineralized enamel.
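The area under the receiver operating characteristic curve reported above can be computed directly as the Mann-Whitney probability that a demineralized block outscores a sound one. A toy sketch with invented fluorescence readings, not the study's data:

```python
# AUC as the Mann-Whitney probability that a lesion block scores higher
# than a sound block (invented readings for illustration).
sound  = [5.1, 5.6, 6.0, 6.4, 7.2]      # fluorescence, sound enamel
lesion = [8.8, 9.5, 10.1, 7.0, 11.3]    # fluorescence, caries-like lesion

pairs = [(s, l) for s in sound for l in lesion]
auc = sum((l > s) + 0.5 * (l == s) for s, l in pairs) / len(pairs)
print(auc)  # 1.0 = perfect separation, 0.5 = chance
```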