944 results for Simple methods
Abstract:
In this paper we present simple methods for constructing and evaluating finite-state spell-checking tools using an existing finite-state lexical automaton, freely available finite-state tools, and Internet corpora acquired from projects such as Wikipedia. As an example, we use a freely available open-source implementation of Finnish morphology, built with traditional finite-state morphology tools, and demonstrate the rapid construction of Northern Sámi and English spell checkers from tools and resources available on the Internet.
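The abstract does not spell out the pipeline, but the core idea (look up candidate corrections in a lexicon and rank them with corpus frequencies) can be sketched without finite-state machinery. In this minimal Python illustration, a plain word set stands in for the lexical automaton and a Wikipedia-style frequency table for the corpus model; all names and data are hypothetical.

```python
# Minimal sketch of frequency-weighted spell checking (a hypothetical stand-in
# for the paper's finite-state pipeline): a word set plays the role of the
# lexical automaton, and corpus counts rank candidate corrections.
from collections import Counter

def edits1(word):
    """All strings at edit distance 1 from `word` (deletes, swaps, replaces, inserts)."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def suggest(word, lexicon, freq):
    """Return corrections for `word`, ranked by corpus frequency."""
    if word in lexicon:
        return [word]
    candidates = edits1(word) & lexicon
    return sorted(candidates, key=lambda w: -freq[w])

# Example: a toy lexicon with made-up Wikipedia-style frequency counts.
freq = Counter({"speller": 3, "spelled": 7, "spelling": 12})
lexicon = set(freq)
print(suggest("speling", lexicon, freq))  # ['spelling']
```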
Abstract:
Simple methods of preparing boron nitride nanotubes and nanowires have been investigated. The methods involve heating boric acid with activated carbon, multi-walled carbon nanotubes, catalytic iron particles or a mixture of activated carbon and iron particles, in the presence of NH3. While boron nitride nanowires constitute the primary product with activated carbon, high yields of clean boron nitride nanotubes are obtained with multi-walled carbon nanotubes. Aligned boron nitride nanotubes are produced when aligned multi-walled carbon nanotubes are employed as the starting material, suggesting a templating role for the nanotubes. Boron nitride nanotubes with different structures have been obtained by reacting boric acid with NH3 in the presence of a mixture of activated carbon and Fe particles.
Abstract:
The paper discusses simple methods of estimating fish yield from small reservoirs and establishes two indices of fish yield based on: 1) the relationship between the catch per boat in artisanal commercial fish landings and the catch per unit effort in experimental gill-net surveys; and 2) the relationship between the standing crop of fish in reservoirs and the catch per unit effort in experimental gill-net surveys. The paper then elaborates on methods of using these simple relationships to manage small reservoirs in Nigeria under the principle of exclusive fishing-right licences, with the objective of attracting investors into this viable and hitherto untapped inland fishery investment.
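As a rough illustration of what such a yield index looks like in practice, the sketch below fits a linear relationship between experimental gill-net catch per unit effort (CPUE) and commercial catch per boat, then uses it to predict yield for a new reservoir. The numbers are invented; the paper's actual indices and coefficients are not given in the abstract.

```python
# Hypothetical sketch of the kind of index the paper describes: a linear
# relationship fitted between survey CPUE and commercial catch per boat,
# then used to predict yield for a new reservoir. Values are illustrative.
import numpy as np

cpue = np.array([1.2, 2.0, 2.9, 4.1, 5.0])                # kg per gill-net set (survey)
catch_per_boat = np.array([8.0, 13.5, 19.0, 27.2, 33.1])  # kg per boat-day (landings)

slope, intercept = np.polyfit(cpue, catch_per_boat, 1)
print(f"catch_per_boat ~ {slope:.2f} * CPUE + {intercept:.2f}")

# Predict the expected commercial catch for a reservoir with survey CPUE 3.5:
print(slope * 3.5 + intercept)
```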
Abstract:
The influence of non-equilibrium condensation on the flow field and performance of a three-stage low-pressure model steam turbine is examined using modern three-dimensional CFD techniques. An equilibrium steam model and a non-equilibrium steam model, which accounts for both subcooling and condensation effects, are used; both have been verified by comparison with test data in an earlier publication [1]. The differences in the calculated flow field and turbine performance between these models show that the latent heat released during condensation influences both the thermodynamic and the aerodynamic performance of the turbine, leading to a change in inlet flow angles of about 5°. The calculated three-dimensional flow field is used to investigate the magnitude and distribution of the additional thermodynamic wetness loss arising from steam condensation under non-equilibrium flow conditions. Three simple methods are described to calculate this loss, and all show that it amounts to around 6.5% of the total losses at the design condition. At other load conditions the wetness losses change in magnitude and in axial distribution along the turbine.
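The abstract does not state the loss formulae, but a common textbook route to the thermodynamic wetness loss is the entropy generated when latent heat is transferred across the vapour subcooling; the sketch below records that relation under this assumption and is not quoted from the paper.

```latex
% Hedged sketch: entropy production from latent-heat transfer across a finite
% subcooling, the usual basis for a "thermodynamic wetness loss" estimate.
\[
  \dot{S}_{\mathrm{gen}}
    \;\approx\; \int \frac{\Delta T_{\mathrm{sub}}}{T_s^{2}}\,\mathrm{d}\dot{Q},
  \qquad
  \Delta T_{\mathrm{sub}} = T_s(p) - T_v ,
\]
% where T_s(p) is the local saturation temperature, T_v the vapour
% temperature, and d\dot{Q} the rate of latent heat released by condensation.
```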
Abstract:
Two simple methods for estimating the potential modulation bandwidth of the TO packaging technique are presented. The first method is based on comparing the measured frequency responses of the laser diodes and the TO laser modules; the second is based on an equivalent circuit for the test fixture, the TO header, the submount and the bonding wire. It is shown that the TO packaging techniques used in the experiments can potentially achieve a frequency bandwidth of over 10.5 GHz, and that the two proposed methods give similar results.
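The first method lends itself to a short numerical sketch: if the packaged module's response is approximately the product of the bare laser's response and the packaging response, dividing the two measured responses isolates the packaging contribution, whose -3 dB point gives the packaging bandwidth. The responses below are synthetic single-pole stand-ins, not measurements from the paper.

```python
# Sketch of the first estimation method: divide the packaged-module response
# by the bare laser-diode response to isolate the packaging response, then
# locate its -3 dB point. All response data here are synthetic.
import numpy as np

f = np.linspace(0.1e9, 20e9, 2000)                 # frequency grid, Hz

def one_pole(f, f3db):
    return 1.0 / np.sqrt(1.0 + (f / f3db) ** 2)    # |H(f)| of a single pole

h_laser = one_pole(f, 15e9)                        # bare laser diode (assumed)
h_module = h_laser * one_pole(f, 10.5e9)           # packaged module (assumed)

h_package = h_module / h_laser                     # packaging contribution
f3db = f[np.argmax(20 * np.log10(h_package) <= -3.0)]
print(f"estimated packaging bandwidth ~ {f3db / 1e9:.1f} GHz")
```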
Abstract:
Various metallized nanostructures (such as rings, wires with controllable lengths, and spheres) have been successfully fabricated by coating metallic nanolayers onto soft nanotemplates through simple electroless methods. In particular, bimetallic nanostructures have been obtained using these simple methods. The multi-functional polymeric nanostructures were obtained through the self-assembly of a polystyrene/poly(4-vinyl pyridine) triblock copolymer (P4VP-b-PS-b-P4VP) in selective media by changing the common-solvent properties. By combining field emission scanning electron microscopy (SEM), atomic force microscopy (AFM) and X-ray photoelectron spectroscopy (XPS) characterization, it was confirmed that polymer/metal and bimetallic (Au@Ag) core-shell nanostructures could be achieved by the chemical metal deposition method.
Abstract:
We have presented two simple methods, "unfixed-position shield" and "pulling out", for making sharp STM Pt-Ir tips with a low aspect ratio by electrochemical etching in KCN/NaOH aqueous solution, and ECSTM tips coated with paraffin. By limiting the elec
Abstract:
Analytical expressions for quasi-first and second order homogeneous catalytic reactions with different diffusion coefficients at ultramicrodisk electrodes under steady-state conditions are obtained using the reaction layer concept. The method of treatment is simple and its physical meaning is clear. The relationship between the diffusion layer, the reaction layer, the electrode dimension and the kinetic rate constant at an ultramicroelectrode is discussed, and the factors affecting the reaction order are described. The order of a catalytic reaction at an ultramicroelectrode under steady-state conditions is related not only to the bulk concentration ratio C_Z*/C_O* but also to the kinetic rate constant and the dimension of the ultramicroelectrode; thus the order of reaction can be controlled by the dimension of the ultramicroelectrode. Steady-state voltammetry at an ultramicroelectrode is one of the simplest methods available for studying the kinetics of fast catalytic reactions.
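For readers unfamiliar with the reaction layer concept, the standard relations behind the argument can be summarized as follows; these are textbook electrochemistry expressions, not formulas quoted from the paper.

```latex
% For a pseudo-first-order catalytic regeneration with rate constant k and
% diffusion coefficient D, the reaction layer thickness is
\[
  \mu = \sqrt{D/k},
\]
% while the steady-state diffusion layer of an ultramicrodisk of radius r_0
% scales with r_0 itself (limiting current i_d = 4 n F D C^{*} r_0).
% Comparing \mu with r_0 shows how shrinking the electrode shifts the
% response between regimes, which is why the apparent reaction order
% depends on the electrode dimension:
\[
  \mu \ll r_0 \;\Rightarrow\; \text{kinetic (catalytic) control}, \qquad
  \mu \gg r_0 \;\Rightarrow\; \text{diffusion control}.
\]
```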
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Dental Medicine.
Abstract:
This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to the huge real-world graphs found in applications that require very short response times. The focus is on approximate methods for distance estimation, in particular landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing offline, for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, so heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach of selecting landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
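A minimal sketch of the two selection families compared in the thesis follows: purely "central" landmarks (using degree as a cheap centrality proxy) versus central landmarks forced to be far apart. The graph library and helper names are illustrative, not from the thesis.

```python
# Illustrative landmark selection: central nodes vs. central-but-spread nodes.
import networkx as nx

def central_landmarks(G, k):
    """Pick the k highest-degree nodes as landmarks (cheap centrality proxy)."""
    return sorted(G.nodes, key=G.degree, reverse=True)[:k]

def spread_landmarks(G, k, pool_factor=4):
    """From a pool of central nodes, greedily keep nodes far from those already chosen."""
    pool = central_landmarks(G, pool_factor * k)
    chosen = [pool[0]]
    for _ in range(k - 1):
        dist = {}  # node -> distance to nearest chosen landmark
        for l in chosen:
            for v, d in nx.single_source_shortest_path_length(G, l).items():
                dist[v] = min(dist.get(v, d), d)
        chosen.append(max((v for v in pool if v not in chosen),
                          key=lambda v: dist.get(v, 0)))
    return chosen
```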
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method to two problems arising naturally in large-scale networks, namely social search and community detection.
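The offline/online split described above is compact enough to sketch directly; the estimate combines precomputed landmark distances via the triangle inequality, d(s,t) <= d(s,l) + d(l,t), minimized over landmarks. This is an illustrative rendering, not the authors' implementation.

```python
# Landmark embedding and query-time distance estimation (illustrative).
import networkx as nx

def build_index(G, landmarks):
    """Offline: one BFS per landmark gives every node's embedding."""
    return {l: nx.single_source_shortest_path_length(G, l) for l in landmarks}

def estimate(index, s, t):
    """Online: combine the two embeddings in O(#landmarks) time."""
    return min(dists[s] + dists[t] for dists in index.values())

G = nx.karate_club_graph()
index = build_index(G, landmarks=[0, 33])
print(estimate(index, 5, 28), nx.shortest_path_length(G, 5, 28))
```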
Abstract:
1. The prediction and mapping of climate in areas between climate stations is of increasing importance in ecology.
2. Four categories of model (simple interpolation, thin plate splines, multiple linear regression and mixed spline-regression) were tested for their ability to predict the spatial distribution of temperature on the British mainland. The models were tested by external cross-verification.
3. The British distribution of mean daily temperature was predicted with the greatest accuracy by using a mixed model: a thin plate spline fitted to the surface of the country, after correction of the data by a selection from 16 independent topographical variables (such as altitude, distance from the sea, slope and topographic roughness), chosen by multiple regression from a digital terrain model (DTM) of the country.
4. The next most accurate method was a pure multiple regression model using the DTM. Both regression and thin plate spline models based only on a few variables (latitude, longitude and altitude) were comparatively unsatisfactory, but some rather simple methods of surface interpolation (such as bilinear interpolation after correction to sea level; see the sketch after this list) gave moderately satisfactory results. Differences between the methods seemed to depend largely on their ability to model the effect of the sea on land temperatures.
5. Prediction of temperature by the best methods was greater than 95% accurate in all months of the year, as shown by the correlation between the predicted and actual values. The predicted temperatures were calculated at real altitudes, not subject to sea-level correction.
6. A minimum of just over 30 temperature recording stations would generate a satisfactory surface, provided the stations were well spaced.
7. Maps of mean daily temperature, produced using the best overall methods, are provided; further important variables, such as continentality and length of growing season, were also mapped. Many of these are believed to be the first detailed representations at real altitude.
8. The interpolated monthly temperature surfaces are available on disk.
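The sea-level-correction baseline from point 4 can be sketched in a few lines: reduce station temperatures to sea level with a lapse rate, interpolate bilinearly, then restore the query point's real altitude. The 6.5 °C/km lapse rate is a standard atmospheric value assumed here, not a figure taken from the paper, and all data are toy values.

```python
# Bilinear interpolation of temperature after correction to sea level.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

LAPSE = 0.0065  # degC lost per metre of altitude (standard value, assumed)

def interpolate_temperature(grid_x, grid_y, temp, alt, query_xy, query_alt):
    """Reduce station temperatures to sea level, interpolate, restore altitude."""
    t_sea = temp + LAPSE * alt                        # correct to sea level
    interp = RegularGridInterpolator((grid_x, grid_y), t_sea, method="linear")
    return interp(query_xy) - LAPSE * query_alt       # back to real altitude

# 2x2 toy grid of stations:
gx, gy = np.array([0.0, 1.0]), np.array([0.0, 1.0])
temp = np.array([[10.0, 11.0], [9.0, 12.0]])          # observed temperatures, degC
alt = np.array([[50.0, 10.0], [200.0, 5.0]])          # station altitudes, m
print(interpolate_temperature(gx, gy, temp, alt, [(0.5, 0.5)], 100.0))
```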
Abstract:
Since immunohistochemistry was first used in anatomic pathology, one of the goals has been to detect the smallest amounts of antigen and make them visible under the light microscope. Several amplification systems have been applied to this end, and a generic group of simple methods offering superior amplification has emerged: the so-called indirect polymer methods. Given the variety of methods available, the author set out to compare the quality of four amplification systems based on the indirect polymer method with horseradish peroxidase (HRP). Slides of different formalin-fixed, paraffin-embedded tissues were used, in which 15 distinct antigens were identified. Four indirect polymer systems were used for amplification (Dako EnVision+ System – K4006; LabVision UltraVision LP Detection System – TL-004-HD; Leica NovoLink – RE7140-k; Vector ImmPRESS Reagent Kit – MP-7402). Microscopic observation and classification of the resulting immunostaining were based on an algorithm that combines intensity, specific staining, nonspecific staining and contrast into a global score ranging from 0 to 25. Besides descriptive statistics, one-way ANOVA with a Tukey post-hoc test (alpha = 0.05) was used for data analysis. The best global-score result, as a mean/standard deviation pair, was obtained with NovoLink (22.4/2.37) and the worst with EnVision+ (17.43/3.86). Statistically significant differences were also found between the results of the NovoLink system and the UltraVision (p = .004), ImmPRESS (p = .000) and EnVision+ (p = .000) systems. It was concluded that the system yielding the best results in this study was Leica NovoLink.
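The statistical comparison described above (one-way ANOVA followed by a Tukey post-hoc test at alpha = 0.05) is straightforward to reproduce; the sketch below uses placeholder score arrays, since the study's raw 0-25 global scores are not given in the abstract.

```python
# One-way ANOVA across the four detection systems, then Tukey's post-hoc test.
# Score arrays are invented placeholders on the study's 0-25 scale.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = {
    "NovoLink":    np.array([22, 24, 21, 23, 22]),
    "UltraVision": np.array([20, 19, 21, 18, 20]),
    "ImmPRESS":    np.array([18, 19, 17, 18, 19]),
    "EnVision+":   np.array([17, 16, 18, 19, 17]),
}

print(f_oneway(*scores.values()))                   # overall group difference
data = np.concatenate(list(scores.values()))
groups = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(data, groups, alpha=0.05))  # pairwise comparisons
```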
Abstract:
This paper compiles relevant academic literature on entry strategies and on methodologies for deciding whether to contract Outsourcing services, for the case of companies planning to expand into foreign markets. The way a company plans its entry into a foreign market, considers and evaluates the relevant information, and designs its strategy determines its success or failure. The methodologies considered focus on the strategic level of the organizational pyramid, starting from simple methods and moving on to those based on Multi-Criteria Decision Theory, both individual and hybrid. Finally, System Dynamics is presented as a valuable tool in this process, since it can be combined with multi-criteria methods.
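The simplest decision methods the text alludes to can be illustrated with a weighted sum over normalized criteria, the building block that multi-criteria and hybrid methods elaborate on; the alternatives, criteria, weights and scores below are invented for the example.

```python
# Weighted-sum multi-criteria scoring (a minimal sketch with invented data).
import numpy as np

alternatives = ["in-house", "outsource-local", "outsource-offshore"]
criteria = ["cost", "quality", "risk"]   # each scored so that higher = better
weights = np.array([0.5, 0.3, 0.2])

# Rows: alternatives; columns: criteria, each scored on a 1-10 scale.
scores = np.array([[4.0, 8.0, 7.0],
                   [6.0, 7.0, 6.0],
                   [9.0, 5.0, 3.0]])

utility = (scores / scores.max(axis=0)) @ weights   # normalize, then weight
best = alternatives[int(np.argmax(utility))]
print(dict(zip(alternatives, utility.round(3))), "->", best)
```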
Abstract:
What constitutes a baseline level of success for protein fold recognition methods? As fold recognition benchmarks are often presented without any thought to the results that might be expected from a purely random set of predictions, an analysis of fold recognition baselines is long overdue. Given varying amounts of basic information about a protein, ranging from the length of the sequence to knowledge of its secondary structure, to what extent can the fold be determined by intelligent guesswork? Can simple methods that make use of secondary structure information assign folds more accurately than purely random methods, and could these methods be used to construct viable hierarchical classifications?
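One way to make the "purely random" baseline concrete: if folds are guessed by sampling from the database's fold-frequency distribution, the expected top-1 accuracy is the sum of squared fold frequencies, and always guessing the most populated fold gives another trivial baseline. The counts below are invented; a real analysis would use SCOP or CATH fold populations.

```python
# Random and majority baselines for fold assignment (toy fold counts).
import numpy as np

fold_counts = np.array([120, 80, 40, 25, 10, 5])   # proteins per fold (invented)
p = fold_counts / fold_counts.sum()

random_acc = (p ** 2).sum()    # guess drawn from the same distribution: sum p_i^2
majority_acc = p.max()         # always guess the most populated fold
print(f"random baseline: {random_acc:.3f}, majority baseline: {majority_acc:.3f}")
```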