887 results for Beam Search Method
Abstract:
Introduction: In the digital environment, metadata influence both data access and information retrieval and are used as search elements to facilitate locating resources on the Web. Objective: From this perspective, the aim is to present the BEAM methodology, developed at the Biblioteca de Estudos e Aplicação de Metadados of the research group “Novas Tecnologias em Informação” at Universidade Estadual Paulista and used to define the metadata for describing information resources. Methodology: The research is exploratory and bibliographic and was developed based on the theoretical method of Chuttur (2011), the DataONE (2012) data life cycle, the PDCA cycle, and the 5W1H tool. Results: The seven steps of the methodology are presented, together with the guidelines necessary for their implementation. Conclusions: We conclude that the BEAM methodology can be adopted by libraries in the construction of catalogs aimed at meeting the needs of users.
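As a concrete illustration of the kind of descriptive-metadata definition the abstract refers to, the sketch below checks a record against a minimal Dublin Core-style element set in Python. The element names and the sample record are hypothetical examples, not the elements prescribed by the BEAM methodology.

```python
# Illustrative sketch only: a minimal Dublin Core-style record check.
# The element set and the resource values are hypothetical examples,
# not the actual elements prescribed by the BEAM methodology.

REQUIRED_ELEMENTS = {"title", "creator", "subject", "date", "identifier"}

def validate_record(record: dict) -> list:
    """Return the required descriptive elements missing from a record."""
    return sorted(REQUIRED_ELEMENTS - record.keys())

record = {
    "title": "Metadata in digital libraries",
    "creator": "Doe, J.",
    "subject": "information retrieval",
    "date": "2024",
    "identifier": "https://example.org/item/123",
}

print(validate_record(record))  # [] -> all required elements present
```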
Abstract:
Lianas can change forest dynamics, slowing down forest regeneration after a perturbation. In such cases, it may be necessary to manage these woody climbers. Our aim was to simulate two management strategies, (1) focusing on abundant liana species and (2) focusing on the largest lianas, and to contrast them with the random removal of lianas. We applied mathematical simulations of liana removal in three different vegetation types in southeastern Brazil: a Rainforest, a Seasonal Tropical Forest, and a Woodland Savanna. Using these samples, we performed simulations based on the two targeted removal procedures and compared them with random removal. We also used regression analysis with a quasi-Poisson distribution to test whether larger lianas were aggressive, i.e., whether they climbed into many trees. Cutting the largest lianas proved only as effective as cutting lianas randomly, and is therefore not a good method for liana management. Moreover, most of the lianas climbed into only one or two trees, i.e., they were not aggressive. Cutting the most abundant lianas proved more effective than cutting lianas randomly; this method could maintain liana richness and should presumably accelerate forest regeneration.
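To make the comparison concrete, a toy simulation of the three removal strategies might look like the Python sketch below; the liana records are synthetic and the "trees released" score is a simplified stand-in for the effectiveness measures used in the study.

```python
# A minimal sketch of the comparison, not the authors' actual simulation:
# synthetic liana records and a simplified "trees released" effectiveness
# metric stand in for the field data and response used in the study.
import random

random.seed(1)

# Each liana: (species id, stem diameter in cm, number of trees it climbs).
lianas = [(random.randint(0, 30), random.uniform(1, 12),
           random.choices([1, 2, 3, 5], weights=[60, 25, 10, 5])[0])
          for _ in range(500)]

def trees_released(removed):
    return sum(n_trees for _, _, n_trees in removed)

def remove(strategy, k=100):
    if strategy == "random":
        return random.sample(lianas, k)
    if strategy == "largest":                  # cut the largest stems first
        return sorted(lianas, key=lambda l: -l[1])[:k]
    if strategy == "abundant":                 # cut the most abundant species first
        counts = {}
        for sp, _, _ in lianas:
            counts[sp] = counts.get(sp, 0) + 1
        return sorted(lianas, key=lambda l: -counts[l[0]])[:k]

for s in ("random", "largest", "abundant"):
    print(s, trees_released(remove(s)))
```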
Abstract:
Software developers are often unsure of the exact name of the method they need to use to invoke the desired behavior in a given context. This results in a process of searching for the correct method name in documentation, which can be lengthy and distracting to the developer. We can decrease the method search time by enhancing the documentation of a class with its most frequently used methods. Usage frequency data for methods are gathered by analyzing other projects from the same ecosystem, i.e., projects written in the same language and sharing dependencies. We implemented a proof of concept of the approach for Pharo Smalltalk and Java. In Pharo Smalltalk, methods are commonly searched for using a code browser tool called "Nautilus", and in Java using a web browser displaying HTML-based documentation, Javadoc. We developed plugins for both browsers and gathered method usage data from open source projects, in order to increase developer productivity by reducing method search time. A small initial evaluation has been conducted, showing promising results in improving developer productivity.
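The frequency-gathering step could be sketched as follows for a corpus of Python sources (the original work targets Pharo Smalltalk and Java); the directory layout and counting rule are illustrative assumptions.

```python
# A rough sketch of the frequency-gathering step, assuming a corpus of
# Python sources; the original work targeted Pharo Smalltalk and Java.
import ast
from collections import Counter
from pathlib import Path

def method_call_counts(root: str) -> Counter:
    """Count how often each method name is invoked across a project tree."""
    counts = Counter()
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            # obj.method(...) calls: the attribute is the method name searched for.
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
                counts[node.func.attr] += 1
    return counts

# The most frequently used methods could then be surfaced first in the docs:
# print(method_call_counts("some_project/").most_common(10))
```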
Abstract:
In this work, a new design concept for SMS moving optics is developed, in which the movement is no longer lateral but follows a curved trajectory calculated in the design process. The curved tracking trajectory helps to broaden the range of incidence angles significantly. We have chosen an afocal-type structure that aims to direct parallel rays arriving at large incidence angles into parallel output rays. The RMS of the divergence angle of the output rays remains below 1 degree over an incidence angular range of ±45°. Potential applications of this beam-steering device are skylights to provide steerable natural illumination, building-integrated CPV systems, and steerable LED illumination.
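For illustration, the quoted RMS divergence figure is the kind of quantity the following Python sketch computes from traced output-ray angles; the ray angles here are made up.

```python
# Illustrative only: how an RMS divergence figure like the one quoted
# could be computed from traced output-ray angles (synthetic values here).
import math

def rms_divergence(angles_deg):
    """RMS deviation of ray angles (degrees) from their mean direction."""
    mean = sum(angles_deg) / len(angles_deg)
    return math.sqrt(sum((a - mean) ** 2 for a in angles_deg) / len(angles_deg))

output_rays = [0.3, -0.5, 0.1, 0.8, -0.2, 0.4]  # hypothetical traced angles
print(f"RMS divergence: {rms_divergence(output_rays):.2f} deg")
```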
Application of the Boundary Method to the determination of the properties of the beam cross-sections
Abstract:
Using the 3-D equations of linear elasticity and asymptotic expansion methods in terms of powers of the beam cross-section area as a small parameter, different beam theories can be obtained, according to the last term kept in the expansion. If only the first two terms of the asymptotic expansion are used, the classical beam theories can be recovered without resort to any "a priori" additional hypotheses. Moreover, some small corrections and extensions of the classical beam theories can be found, and there also exists the possibility of using the asymptotic general beam theory as a basis for a straightforward derivation of the stiffness matrix and the equivalent nodal forces of the beam. In order to obtain the above results, a set of functions and constants depending only on the cross-section of the beam has to be computed as solutions of different 2-D Laplacian boundary value problems over the beam cross-section domain. In this paper two main numerical procedures to solve these boundary value problems are discussed, namely the Boundary Element Method (BEM) and the Finite Element Method (FEM). Results for some regular and geometrically simple cross-sections are presented and compared with those computed analytically. Extensions to other arbitrary cross-sections are illustrated.
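For a flavor of the cross-section problems involved, the sketch below solves one of them, the Poisson problem for a normalized Prandtl stress function on a square section, using a plain finite-difference Jacobi iteration rather than the BEM or FEM solvers the paper actually discusses.

```python
# Not the BEM/FEM solvers discussed in the paper: a much simpler
# finite-difference sketch of one cross-section problem, the Poisson
# equation Laplacian(phi) = -2 with phi = 0 on the boundary (a
# normalized Prandtl stress-function problem), via Jacobi iteration.
import numpy as np

n, h = 41, 1.0 / 40                     # grid points and spacing on a unit square
phi = np.zeros((n, n))
for _ in range(5000):                   # Jacobi sweeps
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                              phi[1:-1, 2:] + phi[1:-1, :-2] + 2.0 * h**2)

# A torsion-constant-like quantity is proportional to the integral of phi.
print("integral of phi ~", phi.sum() * h**2)
```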
Abstract:
We introduce a computational method to optimize the in vitro evolution of proteins. Simulating evolution with a simple model that statistically describes the fitness landscape, we find that beneficial mutations tend to occur at amino acid positions that are tolerant to substitutions, in the limit of small libraries and low mutation rates. We transform this observation into a design strategy by applying mean-field theory to a structure-based computational model to calculate each residue's structural tolerance. Thermostabilizing and activity-increasing mutations accumulated during the experimental directed evolution of subtilisin E and T4 lysozyme are strongly directed to sites identified by using this computational approach. This method can be used to predict positions where mutations are likely to lead to improvement of specific protein properties.
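A toy rendering of the design strategy, with placeholder tolerance scores in place of the paper's mean-field, structure-based calculation, might look like this:

```python
# A toy sketch of the design strategy described above, not the authors'
# mean-field model: mutation sites are drawn with probability proportional
# to a per-residue "structural tolerance" score (random numbers here, where
# the paper computes tolerance from a structure-based model).
import random

random.seed(0)
n_residues = 100
tolerance = [random.random() for _ in range(n_residues)]   # placeholder scores

def propose_mutation_sites(k=5):
    """Favor tolerant positions, where beneficial mutations are more likely."""
    return random.choices(range(n_residues), weights=tolerance, k=k)

print(propose_mutation_sites())
```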
Abstract:
Questions of the "viability" evaluation of innovation projects are considered in this article. Hidden Markov Models are used as the evaluation method. The problem of determining the model parameters that reproduce the test data with the highest accuracy is solved, and statistical data on the implementation of innovative projects are used to train the model, with the Baum-Welch algorithm serving as the training algorithm.
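For reference, a bare-bones Baum-Welch re-estimation for a discrete HMM can be written in a few dozen lines of Python; the two states and three observation symbols below are hypothetical stand-ins for the project-implementation data mentioned in the abstract.

```python
# A bare-bones Baum-Welch sketch for a discrete HMM (no scaling, so only
# suitable for short sequences); states and symbols are hypothetical.
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    pi = np.full(n_states, 1.0 / n_states)
    A = rng.dirichlet(np.ones(n_states), size=n_states)      # transitions
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)     # emissions
    T = len(obs)
    for _ in range(n_iter):
        # E-step: forward and backward probabilities.
        alpha = np.zeros((T, n_states))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        beta = np.ones((T, n_states))
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        likelihood = alpha[-1].sum()
        gamma = alpha * beta / likelihood
        xi = (alpha[:-1, :, None] * A[None] *
              (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood
        # M-step: re-estimate parameters from expected counts.
        pi = gamma[0]
        A = xi.sum(0) / gamma[:-1].sum(0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[np.array(obs) == k].sum(0) / gamma.sum(0)
    return pi, A, B

obs = [0, 1, 0, 2, 2, 1, 0, 0, 2, 1]        # toy observation sequence
pi, A, B = baum_welch(obs, n_states=2, n_symbols=3)
print(np.round(A, 3))
```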
Abstract:
We propose a novel finite element formulation that significantly reduces the number of degrees of freedom necessary to obtain reasonably accurate approximations of the low-frequency component of the deformation in boundary-value problems. In contrast to the standard Ritz–Galerkin approach, the shape functions are defined on a Lie algebra—the logarithmic space—of the deformation function. We construct a deformation function based on an interpolation of transformations at the nodes of the finite element. In the case of the geometrically exact planar Bernoulli beam element presented in this work, these transformation functions at the nodes are given as rotations. However, due to an intrinsic coupling between rotational and translational components of the deformation function, the formulation provides for a good approximation of the deflection of the beam, as well as of the resultant forces and moments. As both the translational and the rotational components of the deformation function are defined on the logarithmic space, we propose to refer to the novel approach as the “Logarithmic finite element method”, or “LogFE” method.
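The core idea, interpolating nodal transformations in the logarithmic space and mapping back with the exponential, can be illustrated for the simplest planar case, SO(2) rotations, where the log map reduces to the rotation angle; the sketch below is an illustration of that idea, not the LogFE element itself.

```python
# A minimal sketch of the core idea, assuming two-node linear shape
# functions: nodal transformations are blended in the logarithmic space
# (the Lie algebra) and mapped back with the exponential map. Shown for
# planar rotations SO(2); this illustrates the idea, not the LogFE element.
import numpy as np

def rot(theta):
    """Planar rotation matrix (the exponential map on SO(2))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def log_so2(R):
    """Logarithmic map: recover the angle from a rotation matrix."""
    return np.arctan2(R[1, 0], R[0, 0])

def interpolate(R1, R2, xi):
    """Blend nodal rotations with linear shape functions N1 = 1 - xi, N2 = xi."""
    theta = (1.0 - xi) * log_so2(R1) + xi * log_so2(R2)
    return rot(theta)

R_mid = interpolate(rot(0.0), rot(np.pi / 3), 0.5)
print(log_so2(R_mid))  # pi/6: halfway between the nodal rotations
```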
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, to solve the above problem. Search engine content analysis is a new development of the traditional information retrieval field called collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collection or on estimations of the size of the collections, and collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as a set of terms, as commonly occurs in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and a decrease in synonymy. The ontology-based method was compared with ReDDE (Relevant Document Distribution Estimation method for resource selection) using the standard R-value metric, with encouraging results; ReDDE is the current state-of-the-art collection selection method and relies on collection size estimation. The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo, as well as several specialist search engines such as Pubmed and the U.S. Department of Agriculture. In conclusion, this research shows that the ontology-based method mitigates the need for collection size estimation.
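A schematic of subject-based collection selection, with a toy keyword classifier standing in for the trained ontology, might look like the following; the subject labels, sample documents, and scoring rule are all illustrative.

```python
# A schematic sketch of subject-based collection selection, assuming a
# classifier that maps text to subjects; subject labels, sample documents,
# and the scoring rule are all illustrative, not the thesis's ontology.
from collections import Counter

def classify(text):
    """Toy stand-in for ontology-based classification into subjects."""
    subjects = {"medicine": {"drug", "gene", "clinical"},
                "agriculture": {"crop", "soil", "harvest"}}
    words = set(text.lower().split())
    return [s for s, kw in subjects.items() if words & kw]

# Represent each engine by the subject distribution of sampled documents.
engines = {
    "engineA": ["clinical drug trial results", "gene therapy advances"],
    "engineB": ["soil quality and crop rotation", "harvest forecasts"],
}
profiles = {e: Counter(s for d in docs for s in classify(d))
            for e, docs in engines.items()}

def select(query):
    """Rank engines by how much of the query's subject they cover."""
    q_subjects = set(classify(query))
    return sorted(profiles, key=lambda e: -sum(profiles[e][s] for s in q_subjects))

print(select("new drug for gene disorders"))   # engineA ranked first
```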
Abstract:
In this paper, we use time series analysis to evaluate predictive scenarios using search engine transactional logs. Our goal is to develop models for the analysis of searchers’ behaviors over time and to investigate whether time series analysis is a valid method for predicting relationships between searcher actions. Time series analysis is a method often used to understand the underlying characteristics of temporal data in order to make forecasts. In this study, we used a Web search engine transactional log and time series analysis to investigate users’ actions. We conducted our analysis in two phases. In the initial phase, we employed a basic analysis and found that 10% of searchers clicked on sponsored links; however, from 22:00 to 24:00, searchers almost exclusively clicked on the organic links, with almost no clicks on sponsored links. In the second and more extensive phase, we used a one-step prediction time series analysis method along with a transfer function method. The time period rarely affects navigational queries, while rates for transactional queries vary across periods. Our results show that the average length of a searcher session is approximately 2.9 interactions and that this average is consistent across time periods. Most importantly, our findings show that searchers who submit the shortest queries (in number of terms) click on the highest-ranked results. We discuss implications, including predictive value, and future research.
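As an illustration of one-step-ahead prediction on log-derived series, the sketch below fits an AR(1) model to synthetic hourly click counts by least squares; this is a stand-in for the time series and transfer function models used in the study.

```python
# A bare-bones illustration of one-step-ahead prediction, assuming hourly
# click counts from a transactional log; the AR(1) fit below is a stand-in
# for the models used in the study, and the data are synthetic.
import numpy as np

rng = np.random.default_rng(42)
clicks = 100 + np.cumsum(rng.normal(0, 5, size=48))   # synthetic hourly series

# Fit an AR(1) model y[t] = a + b * y[t-1] by least squares.
X = np.column_stack([np.ones(len(clicks) - 1), clicks[:-1]])
a, b = np.linalg.lstsq(X, clicks[1:], rcond=None)[0]

one_step_forecast = a + b * clicks[-1]                # predict the next hour
print(f"next-hour prediction: {one_step_forecast:.1f}")
```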