911 results for automatic indexing
Abstract:
With the advent of Internet-based technologies for information organization, many groups have constructed their own indexing languages. Biologists, Library and Information Science practitioners, and now social taggers have worked together to create large and often complex indexing languages. In this environment of diversity, two questions surface: (1) what are the measurable characteristics of these indexing languages, and (2) do these indexing languages speciate along these measured characteristics? This poster presents data from this exploratory work.
Abstract:
This paper proposes a dual conception of work in knowledge organization. The first part is a conception of work as liminal, set apart from everyday work. The second is integrated, without separation. This talk is the beginning of a larger project in which we will characterize work in knowledge organization, both as it is set out in our literature (Šauperl, 2004; Hjørland, 2003; Wilson, 1968) and in a philosophical argument for its fundamental importance in the activities of society (Shera, 1972; Zandonade, 2004). In order to do this, we will co-opt the conception of liminality from the anthropology of religion (Turner, 1967) and Zen Buddhist conceptions of moral action, intention, and integration (Harvey, 2000; cf. Harada, 2008). The goal of this talk is to identify the acts repeated (form) and the purpose of those acts (intention) in knowledge organization, with specific regard to the thresholds (liminal points) of intention present in those acts. We can then ask the questions: Where is intention in knowledge organization liminal, and where is it integrated? What are the limits of knowledge organization work when considered at the foundational level of intention in labor practices? Answering such questions, in this context, allows us to reconsider the assumptions we hold about knowledge organization work and its increasingly important role in society. As a consequence, we can consider the limits of classification research once we see the foundations of knowledge organization work in its forms and intentions. I must also say that incorporating Zen Buddhist philosophy into knowledge organization research fits well with ethics and ethical responses to the practice of knowledge organization, because twentieth-century Western interpretations of Zen are often rooted in ethical considerations. This translates readily to work.
Abstract:
Regarding canal management modernization, water savings and water delivery quality, the study presents two automatic canal control approaches of the PI (Proportional and Integral) type: the distant and the local downstream control modes. The two PI controllers are defined, tuned and tested using a hydraulic unsteady flow simulation model particularly suitable for canal control studies. The PI control parameters are tuned using optimization tools. The simulations are done for a Portuguese prototype canal, and the PI controllers are analyzed and compared considering a demand-oriented canal operation. The paper presents and analyzes the responses of the two control modes for five different offtake types – gate-controlled weir, gate-controlled orifice, weir with or without adjustable height, and automatic flow-adjustable offtake. The simulation results are compared using water volume performance indicators (considering the demanded, supplied and effective water volumes) and a time indicator, defined taking into account the time during which the demanded discharges are effective discharges. Regarding water savings, the simulation results for the five offtake types show that the local downstream control gives the best results (no operational water losses) and that the distant downstream control presents the worst results in connection with the automatic flow-adjustable offtakes. Considering the water volume and time performance indicators, the best results are obtained for the automatic flow-adjustable offtakes and the worst for the gate-controlled orifices, followed by the weir with adjustable height.
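As a minimal illustration of the PI-type control law referred to above, the sketch below implements a discrete proportional-integral regulator acting on a downstream water-level error. The gains, time step and saturation limits are illustrative assumptions, not the tuned values reported in the study.

```python
# Minimal discrete PI controller sketch (illustrative gains, not the paper's tuned values).

class PIController:
    def __init__(self, kp, ki, dt, u_min, u_max):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, setpoint, measured):
        """Return a gate-opening command from the downstream water-level error."""
        error = setpoint - measured
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # Saturate the command and apply simple anti-windup.
        if u > self.u_max:
            u, self.integral = self.u_max, self.integral - error * self.dt
        elif u < self.u_min:
            u, self.integral = self.u_min, self.integral - error * self.dt
        return u

# Example: regulate a downstream level of 1.50 m (hypothetical values).
pi = PIController(kp=0.8, ki=0.05, dt=60.0, u_min=0.0, u_max=1.0)
command = pi.update(setpoint=1.50, measured=1.42)
```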
Abstract:
This paper describes a method to automatically obtain, from a set of impedance measurements at different frequencies, an equivalent circuit composed of lumped elements, based on the vector fitting algorithm. The method starts from the impedance measurement of the circuit and then, through the recursive use of vector fitting, identifies the circuit topology and the component values of the lumped elements. The method can be expanded to include other components commonly used in impedance spectroscopy. The method is first described, and two examples then highlight its robustness and showcase its applicability.
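The abstract does not give implementation details; as one illustration of how a fitted term can be turned into a lumped element, the sketch below maps each real pole-residue pair of the form r/(s + p) onto a parallel RC branch (R = r/p, C = 1/r). The helper function is hypothetical and is not the paper's actual synthesis procedure.

```python
# Sketch: convert real pole/residue pairs from a rational fit of Z(s) into
# parallel RC branches. Z_branch(s) = r / (s + p)  <=>  R = r/p, C = 1/r.
# (Hypothetical helper; the paper's actual circuit synthesis may differ.)

def poles_residues_to_rc(poles, residues):
    branches = []
    for p, r in zip(poles, residues):
        if p <= 0 or r <= 0:
            raise ValueError("This simple mapping assumes positive real poles and residues.")
        C = 1.0 / r
        R = r / p
        branches.append({"R_ohm": R, "C_farad": C})
    return branches

# Example with made-up fitted values.
print(poles_residues_to_rc(poles=[1.0e3, 5.0e4], residues=[2.0e6, 8.0e7]))
```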
Abstract:
Clouds are important in weather prediction, climate studies and aviation safety. Important parameters include cloud height, type and cover percentage. In this paper, recent improvements in the development of a low-cost cloud height measurement setup are described. It is based on stereo vision with consumer digital cameras. The positioning of the cameras is calibrated using the positions of stars in the night sky. An experimental uncertainty analysis of the calibration parameters is performed. Cloud height measurement results are presented and compared with LIDAR measurements.
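A minimal sketch of the basic stereo relation such a setup relies on: for two parallel cameras with baseline B and focal length f (in pixels), an image disparity d corresponds to a distance of roughly z = f·B/d. The numbers below are illustrative assumptions, not the paper's calibration values.

```python
# Sketch of the basic stereo-triangulation relation behind cloud-height
# estimation: z = f * B / d for parallel cameras (illustrative values only).

def cloud_height_m(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a finite height.")
    return focal_px * baseline_m / disparity_px

# Example: 2800-pixel focal length, 50 m baseline, 7-pixel disparity.
print(f"Estimated cloud height: {cloud_height_m(2800, 50.0, 7):.0f} m")
```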
Abstract:
This paper describes our semi-automatic keyword-based approach to the four topics of the Information Extraction from Microblogs Posted during Disasters task at the Forum for Information Retrieval Evaluation (FIRE) 2016. The approach consists of three phases.
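The abstract only states that the approach is keyword-based; the sketch below shows one plausible core step, matching curated topic keywords against microblog posts. The topic names and keyword lists are invented placeholders, not those used at FIRE 2016.

```python
# Sketch of a keyword-based filter for disaster-related microblog posts.
# The topic keywords below are invented placeholders, not the FIRE 2016 lists.

TOPIC_KEYWORDS = {
    "resources_needed": {"need", "require", "shortage"},
    "resources_available": {"available", "distributing", "sending"},
}

def match_topics(post: str) -> list[str]:
    tokens = set(post.lower().split())
    return [topic for topic, kws in TOPIC_KEYWORDS.items() if tokens & kws]

print(match_topics("Volunteers are distributing water and food near the camp"))
```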
Abstract:
A servo-controlled automatic machine can perform tasks that involve synchronized actuation of a significant number of servo-axes, namely one-degree-of-freedom (DoF) electromechanical actuators. Each servo-axis comprises a servo-motor, a mechanical transmission and an end-effector, and is responsible for generating the desired motion profile and providing the power required to achieve the overall task. The design of such a machine must involve a detailed study from a mechatronic viewpoint, due to its electric and mechanical nature. The first objective of this thesis is the development of an overarching electromechanical model of a servo-axis. Every loss source is taken into account, be it mechanical or electrical. The mechanical transmission is modeled by means of a sequence of lumped-parameter blocks. The electric model of the motor and the inverter takes into account winding losses, iron losses and controller switching losses. No experimental characterizations are needed to implement the electric model, since the parameters are inferred from the data available in commercial catalogs. With the global model at our disposal, the second objective of this work is to perform an optimization analysis, in particular the selection of the motor-reducer unit. The optimal transmission ratios that minimize several objective functions are found. An optimization process is carried out and repeated for each candidate motor. We then present a novel method in which the discrete set of available motors is extended to a continuous domain by fitting manufacturer data. The problem becomes a two-dimensional nonlinear optimization subject to nonlinear constraints, and the solution gives the optimal choice for the motor-reducer system. The presented electromechanical model, along with the implementation of optimization algorithms, forms a complete and powerful simulation tool for servo-controlled automatic machines. The tool allows for determining a wide range of electric and mechanical parameters and the behavior of the system in different operating conditions.
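As a simplified illustration of the motor-reducer selection problem described above, the sketch below grid-searches the transmission ratio that minimizes the RMS motor torque for a prescribed load profile. The inertias, efficiency and motion law are illustrative assumptions, not catalog data or the thesis' actual objective functions.

```python
# Sketch: grid search for the transmission ratio n that minimizes RMS motor torque
# over a prescribed load profile (all numbers are illustrative, not catalog data).

import numpy as np

def rms_motor_torque(n, J_m, J_load, eta, load_acc, load_torque):
    # Motor-side torque: rotor inertia term plus the reflected load demand.
    T_m = J_m * n * load_acc + (J_load * load_acc + load_torque) / (n * eta)
    return np.sqrt(np.mean(T_m ** 2))

t = np.linspace(0.0, 1.0, 500)
load_acc = 80.0 * np.sin(2 * np.pi * t)        # rad/s^2, hypothetical motion law
load_torque = 12.0 * np.ones_like(t)           # N*m, hypothetical resistant torque

ratios = np.linspace(1.0, 50.0, 200)
costs = [rms_motor_torque(n, J_m=2e-4, J_load=0.05, eta=0.9,
                          load_acc=load_acc, load_torque=load_torque)
         for n in ratios]
best = ratios[int(np.argmin(costs))]
print(f"Optimal transmission ratio (for this toy profile): {best:.1f}")
```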
Abstract:
Collecting and analyzing data is an important element in any field of human activity and research. Even in sports, collecting and analyzing statistical data is attracting a growing interest. Some exemplary use cases are: improvement of technical/tactical aspects for team coaches, definition of game strategies based on the opposing team's play, or evaluation of the performance of players. Other advantages relate to making more precise and impartial referee decisions: a wrong decision can change the outcome of important matches. Finally, it can be useful to provide better representations and graphic effects that make the game more engaging for the audience during the match. Nowadays it is possible to delegate this type of task to automatic software systems that use cameras or even hardware sensors to collect images or data and process them. One of the most efficient methods to collect data is to process the video images of the sporting event through mixed techniques of machine learning applied to computer vision. As in other domains in which computer vision can be applied, the main tasks in sports are related to object detection, player tracking, and the pose estimation of athletes. The goal of the present thesis is to apply different CNN models to analyze volleyball matches. Starting from video frames of a volleyball match, we reproduce a bird's-eye view of the playing court onto which all the players are projected, also reporting, for each player, the type of action she/he is performing.
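One common way to obtain such a bird's-eye projection is a planar homography estimated from the court's corner points; the sketch below uses OpenCV for this step. The pixel coordinates are made up, and this is an illustration of the projection idea rather than the thesis' exact pipeline.

```python
# Sketch: project player positions from the camera image onto a top-down view of
# the volleyball court via a planar homography (pixel coordinates are made up).

import cv2
import numpy as np

# Court corners in the image (px) and in the bird's-eye view (here 1 px = 1 cm).
image_corners = np.float32([[312, 640], [1605, 655], [1450, 180], [470, 172]])
court_corners = np.float32([[0, 0], [1800, 0], [1800, 900], [0, 900]])  # 18 m x 9 m

H, _ = cv2.findHomography(image_corners, court_corners)

# Project detected player foot points (e.g. from a CNN detector) onto the court plane.
player_feet = np.float32([[[820, 600]], [[1100, 520]]])
court_positions = cv2.perspectiveTransform(player_feet, H)
print(court_positions.reshape(-1, 2))  # positions in centimetres on the court
```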
Abstract:
The inferior alveolar nerve (IAN) lies within the mandibular canal, referred to in the literature as the inferior alveolar canal. The detection of this nerve is important during maxillofacial surgeries or for creating dental implants. The poor quality of cone-beam computed tomography (CBCT) and computed tomography (CT) scans and/or bone gaps within the mandible increase the difficulty of this task, posing a challenge to the human experts who must detect it manually and making the task time-consuming. Therefore, this thesis investigates two methods to automatically detect the IAN: a non-data-driven technique and a deep-learning method. The latter tracks the IAN position at each frame by leveraging detections obtained with the deep neural network CenterNet, fine-tuned for our task, together with temporal and spatial information.
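The abstract mentions combining per-frame detections with temporal and spatial information; the sketch below shows one simple way to exploit that idea, keeping, in each slice, the candidate detection closest to the previous canal position. It is an illustration of the tracking concept, not the thesis' actual tracker.

```python
# Sketch: track the canal position across slices by picking, in each slice, the
# detection nearest to the previous position (simple spatio-temporal smoothing;
# not the thesis' actual method).

import math

def track_canal(detections_per_slice, start):
    """detections_per_slice: list of lists of (x, y) candidate points per slice."""
    track, current = [], start
    for candidates in detections_per_slice:
        if candidates:
            current = min(candidates, key=lambda p: math.dist(p, current))
        # If a slice has no detection, carry the previous position forward.
        track.append(current)
    return track

slices = [[(102, 54), (240, 60)], [(104, 56)], [], [(107, 59), (30, 10)]]
print(track_canal(slices, start=(100, 55)))
```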
Abstract:
This thesis presents the state of the art of automated guided vehicles (AGVs), describing the types of AGVs and the technologies employed. AGV applications are illustrated through the work performed during an internship at Toyota. The experience acquired in implementing automatic forklifts and the tools employed in realizing an AGV system are presented. Moreover, the thesis presents the development of a Python program able to retrieve data stored in a database and process them to produce heatmaps of vehicle errors. The program has been tested live at customer sites, and the results obtained are explained. Finally, an analysis of natural navigation technology applied to Toyota's AGVs is presented. Tests on natural navigation have been run in warehouses to estimate its capabilities and possible applications in the logistics field.
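A minimal sketch of the kind of pipeline described above: read vehicle error records from a database and plot an occurrence heatmap. The database file, table and column names are hypothetical, not those of the actual program.

```python
# Sketch: retrieve AGV error records from a database and plot a heatmap of
# error counts per vehicle and error code (table/column names are hypothetical).

import sqlite3
import pandas as pd
import matplotlib.pyplot as plt

conn = sqlite3.connect("agv_logs.db")
df = pd.read_sql_query("SELECT vehicle_id, error_code FROM errors", conn)
conn.close()

# Count occurrences of each error code per vehicle.
pivot = df.pivot_table(index="vehicle_id", columns="error_code",
                       aggfunc="size", fill_value=0)

plt.imshow(pivot.values, aspect="auto", cmap="hot")
plt.xticks(range(len(pivot.columns)), pivot.columns, rotation=90)
plt.yticks(range(len(pivot.index)), pivot.index)
plt.colorbar(label="error count")
plt.tight_layout()
plt.show()
```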
Abstract:
The purpose of this thesis is to present the concept of simulation for automatic machines and how it might be used to test and debug the software implemented for an automatic machine. Simulation is used to detect errors and allows the code to be corrected before the machine has been built. Simulation permits testing different solutions and improving the software to obtain an optimized one. Additionally, simulation can be used to keep track of a machine after installation in order to improve the production process during the machine's life cycle. The central argument of this project is the advantage of using virtual commissioning to test the implemented software in a virtual environment. Such an environment helps avoid potential damage and reduces the time needed to have the machine ready to work. The use of virtual commissioning also allows testing different solutions without large losses of time and money; an optimized solution can then be found after testing the different proposed solutions. The software implemented is based on the Object-Oriented Programming paradigm, which provides features such as encapsulation, modularity, and reusability of the code. This way of programming therefore yields simpler code that is easier to understand and debug, as well as more efficient. Finally, different communication protocols are implemented in order to allow communication between the real plant and the simulation model. Through this communication we can gather, in real time, all the data necessary for the simulation and for the analysis of the production process, so as to improve it during the machine's life cycle.
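A minimal sketch of the object-oriented style described above: each machine module encapsulates its own state behind a common interface, so the same logic can drive either the simulated axis used in virtual commissioning or the real one. The class and method names are hypothetical, not taken from the thesis.

```python
# Sketch of the object-oriented structure described above: a common interface that
# both the simulated and the real machine module implement (names are hypothetical).

from abc import ABC, abstractmethod

class Axis(ABC):
    @abstractmethod
    def move_to(self, position_mm: float) -> None: ...
    @abstractmethod
    def position(self) -> float: ...

class SimulatedAxis(Axis):
    """Virtual-commissioning stand-in: updates an internal model instead of hardware."""
    def __init__(self):
        self._pos = 0.0
    def move_to(self, position_mm: float) -> None:
        self._pos = position_mm          # ideal, instantaneous model for brevity
    def position(self) -> float:
        return self._pos

def run_cycle(axis: Axis):
    # The same test/production logic runs unchanged against simulation or plant.
    for target in (10.0, 50.0, 0.0):
        axis.move_to(target)
        assert abs(axis.position() - target) < 0.1

run_cycle(SimulatedAxis())
```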
Abstract:
Within the classification of orbits in axisymmetric stellar systems, we present a new algorithm able to automatically classify orbits according to their nature. The algorithm applies the correlation integral method to the surface of section (SoS) of the orbit: by fitting the cumulative distribution function built from the consequents in the SoS, we obtain its logarithmic slope m, which is directly related to the orbit's nature: for slopes m ≈ 1 we expect the orbit to be regular, for slopes m ≈ 2 we expect it to be chaotic. With this method we have a fast and reliable way to classify orbits and, furthermore, we provide an analytical expression for the probability that an orbit is regular or chaotic given the logarithmic slope m of its correlation integral. Although this method works well statistically, the underlying algorithm can fail in some cases, misclassifying individual orbits under peculiar circumstances. The performance of the algorithm benefits from a rich sampling of the traces in the SoS, which can be obtained with long numerical integration of the orbits. Finally, we note that the algorithm does not differentiate between the subtypes of regular orbits, i.e. resonantly trapped and untrapped orbits; such a distinction would be a useful feature, which we leave for future work. Since the result of the analysis is a probability linked to a Gaussian distribution, by the very definition of a distribution some orbits are classified as belonging to the opposite class despite having a definite nature, and these populate the probabilistic tails of the distribution. So while the method produces fair statistical results, it lacks absolute classification precision.
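The sketch below illustrates the central quantity of the method: the logarithmic slope m of the correlation integral C(r) of the consequents in a surface of section, estimated by a log-log fit. The synthetic point set and radius range are illustrative assumptions, not the paper's data or full pipeline.

```python
# Sketch: estimate the logarithmic slope m of the correlation integral C(r) of the
# consequents in a surface of section (illustrative, not the paper's full pipeline).

import numpy as np
from scipy.spatial.distance import pdist

def correlation_slope(points, r_values):
    dists = pdist(points)                      # pairwise distances between consequents
    n_pairs = len(dists)
    C = np.array([(dists < r).sum() / n_pairs for r in r_values])
    mask = C > 0
    # Slope of log C(r) vs log r: m ~ 1 suggests a regular orbit, m ~ 2 a chaotic one.
    m, _ = np.polyfit(np.log(r_values[mask]), np.log(C[mask]), 1)
    return m

# Synthetic "regular" trace: consequents lying on a curve, so m should be close to 1.
x = np.linspace(0, 1, 2000)
curve = np.column_stack([x, np.sin(x)])
print(correlation_slope(curve, r_values=np.logspace(-3, -1, 20)))
```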
Abstract:
Combinatorial decision and optimization problems arise in numerous applications, such as logistics and scheduling, and can be solved with various approaches. Boolean Satisfiability and Constraint Programming solvers are among the most used, and their performance is significantly influenced by the model chosen to represent a given problem. This has led to the study of model reformulation methods, one of which is tabulation, which consists of rewriting the expression of a constraint in terms of a table constraint. To apply it, one should identify which constraints can help and which can hinder the solving process. So far this has been performed by hand, for example in MiniZinc, or automatically with manually designed heuristics, as in Savile Row. However, it has been shown that the performance of these heuristics differs across problems and solvers, in some cases helping and in others hindering the solving procedure. Recent work in the field of combinatorial optimization has shown that Machine Learning (ML) can be increasingly useful in model reformulation steps. This thesis aims to design an ML approach to identify the instances for which Savile Row's heuristics should be activated. Additionally, since the heuristics may miss some good tabulation opportunities, we perform an exploratory analysis towards an ML classifier able to predict whether or not a constraint should be tabulated. The results for the first goal show that a random forest classifier leads to an increase in the performance of 4 different solvers. The experimental results for the second task show that an ML approach could improve the performance of a solver for some problem classes.
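As a minimal sketch of the classification step described above, the code trains a random forest to decide, from per-instance features, whether a tabulation heuristic should be activated. The features, labels and split are synthetic placeholders, not the thesis' dataset or feature set.

```python
# Sketch: train a random forest to decide, from instance features, whether to
# activate a tabulation heuristic (features and labels are synthetic placeholders).

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-instance features (e.g. number of constraints, domain sizes)
# and labels: 1 = tabulation helped this solver, 0 = it did not.
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
```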
Abstract:
Electric vehicles and the electronic components inside the vehicle are becoming increasingly important. Software is also starting to have a significant impact on modern high-end cars; therefore, a careful validation process needs to be implemented with the aim of having a bug-free product when it is released. As software complexity increases, the testing phases also become more demanding. Testing can be troublesome and, in some cases, tedious and trivial, so the intelligence can be moved to test definition and writing rather than test execution. The aim of this document is to start the definition of an automatic modular testing system capable of executing test cycles on systems that interact with CAN networks and with devices under test (DUTs) that can be actuated by a robotic arm. The document defines a first version of the system, in particular the hardware interface part, with the aim of capturing logs and executing tests in an automated fashion, so that the test engineer can focus more on test definition and analysis rather than on execution.
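The document above only outlines the hardware-interface concept; as one illustration, the sketch below sends a stimulus frame on a CAN bus and logs the responses using the python-can library. The arbitration IDs, payload and the virtual SocketCAN channel are hypothetical, and this is not the system's actual interface code.

```python
# Sketch: send a stimulus frame and log responses on a CAN bus with python-can
# (arbitration IDs and the virtual 'vcan0' channel are hypothetical).

import can

with can.interface.Bus(channel="vcan0", bustype="socketcan") as bus:
    stimulus = can.Message(arbitration_id=0x123,
                           data=[0x01, 0x00, 0x00, 0x00],
                           is_extended_id=False)
    bus.send(stimulus)

    # Append responses to a simple log file until the bus is silent for two seconds.
    with open("test_cycle.log", "a") as log:
        while (msg := bus.recv(timeout=2.0)) is not None:
            log.write(f"{msg.timestamp:.6f} {msg.arbitration_id:#x} {msg.data.hex()}\n")
```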
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. the two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and to add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, GO biological processes and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, following the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
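IIS itself exports XGMML for Cytoscape; as a minimal illustration of the same idea outside the platform, the sketch below builds a tiny annotated interaction network with networkx and writes GraphML, a format Cytoscape can also import. The node names and attribute values are placeholders, not IIS output.

```python
# Sketch: build a small protein/metabolite interaction network and export it in
# GraphML, a format Cytoscape can also import (IIS itself generates XGMML).

import networkx as nx

G = nx.Graph()
G.add_node("YFG1", type="protein", expression=2.3)       # placeholder entries
G.add_node("ATP", type="metabolite", concentration=1.1)
G.add_edge("YFG1", "ATP", evidence="two-hybrid")

# A simple topological metric of the kind reported by the Interactome module.
nx.set_node_attributes(G, nx.degree_centrality(G), "degree_centrality")

nx.write_graphml(G, "interactome.graphml")
```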