777 results for Content-based filtering
Abstract:
A high-frequency sensing interrogation system using fiber-Bragg-grating-based microwave photonic filtering is proposed, in which the wavelength measurement sensitivity is proportional to the RF modulation frequency applied to the optical signal.
Abstract:
A novel high-frequency fiber Bragg grating (FBG) sensing interrogation system using fiber-Sagnac-loop-based microwave photonic filtering is proposed and experimentally demonstrated. By adopting microwave photonic filtering, the wavelength shift of the sensing FBG can be converted into an amplitude variation of the modulated electronic radio-frequency (RF) signal. In the experiment, the strain applied to the sensing FBG was demodulated by measuring the intensity of the recovered RF signal, and by modulating the RF signal at different frequencies, different interrogation sensitivities can be achieved.
Abstract:
In this chapter we present the relevant mathematical background to address two well-defined signal and image processing problems: structured noise filtering and interpolation of missing data. The former is addressed by oblique-projection-based techniques, whilst the latter, which can be considered equivalent to impulsive noise filtering, is tackled by appropriate interpolation methods.
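The oblique-projection idea behind structured noise filtering can be illustrated with a small NumPy sketch. The subspace matrices `S` (signal) and `N` (structured noise) and their dimensions are illustrative assumptions, not taken from the chapter: the oblique projector onto range(S) along range(N) passes the signal component unchanged while annihilating the noise component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subspaces: columns of S span the signal subspace,
# columns of N span the structured-noise subspace.
S = rng.standard_normal((8, 2))
N = rng.standard_normal((8, 3))

# Orthogonal projector onto the complement of the noise subspace.
Q = np.eye(8) - N @ np.linalg.pinv(N)

# Oblique projector onto range(S) along range(N):
# E = S (S^T Q S)^{-1} S^T Q, so that E S = S and E N = 0.
E = S @ np.linalg.solve(S.T @ Q @ S, S.T @ Q)

x = S @ rng.standard_normal(2)      # clean signal component
n = N @ rng.standard_normal(3)      # structured noise component

# The projector recovers the signal and removes the structured noise.
print(np.allclose(E @ (x + n), x))  # True
```

Unlike an orthogonal projection onto range(S), the oblique projector removes the noise exactly even when the two subspaces are not orthogonal to each other, which is the typical structured-noise situation.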
Abstract:
Logic-based pattern recognition extends the well-known similarity models, where the distance measure is the basic instrument for recognition. The initial part (1) of the current publication in iTECH-06 reduces logic-based recognition models to the reduced disjunctive normal forms of partially defined Boolean functions. This step opens a path to alternative pattern recognition instruments by combining metric and logic hypotheses and features, leading to studies of logic forms, hypotheses, hierarchies of hypotheses, and effective algorithmic solutions. The current part (2) provides probabilistic conclusions on effective recognition by logic means in a model environment of binary attributes.
Abstract:
Electronic publishing exploits numerous possibilities to present or exchange information and to communicate via current media such as the Internet. Utilizing modern Web technologies such as Web Services, loosely coupled services, and peer-to-peer networks, we describe the integration of an intelligent business news presentation and distribution network. Employing semantic technologies enables the coupling of multinational and multilingual business news data on a scalable international level and thus introduces a service quality not yet achieved by alternative technologies in the news distribution area. Architecturally, we identified the loose coupling of existing services as the most feasible way to build multinational and multilingual news presentation and distribution networks. Furthermore, we semantically enrich multinational news contents by relating them using AI techniques such as the Vector Space Model. Summarizing our experiences, we describe the technical integration of semantic and communication technologies to create a modern international news network.
Abstract:
* This research is supported in part by the INTAS 04-77-7173 project, http://www.intas.be
Abstract:
The objective of this thesis is to discover "How are informal decisions reached by screeners when filtering out undesirable job applications?" Grounded theory techniques were employed in the field to observe and analyse informal decisions at the source by screeners in three distinct empirical studies. Whilst grounded theory provided the method for case and cross-case analysis, literature from academic and non-academic sources was evaluated and integrated to strengthen this research and create a foundation for understanding informal decisions. As informal decisions in early hiring processes have been under-researched, this thesis contributes to current knowledge in several ways. First, it presents the Cycle of Employment, which enhances Robertson and Smith's (1993) Selection Paradigm through the integration of the stages that individuals occupy whilst seeking employment. Secondly, a general depiction of the Workflow of General Hiring Processes provides a template for practitioners to map and further develop their organisational processes. Finally, it highlights the emergence of the Locality Effect, a geographically driven heuristic and bias that can significantly impact recruitment and informal decisions. Although screeners make informal decisions using multiple variables, informal decisions are made in stages, as evidenced in the Cycle of Employment. Moreover, informal decisions can be erroneous as a result of majority and minority influence, the weighting of information, the injection of inappropriate information and criteria, and the influence of an assessor. This thesis considers these faults and develops a basic framework for understanding informal decisions from which future research can be launched.
Abstract:
ACM Computing Classification System (1998): K.3.1, K.3.2.
Abstract:
There is an enormous amount of information on the Internet about countless topics, and this information expands further every day. In theory, computer programs could benefit from this large amount of available information to establish new connections between concepts, but the information often appears in unstructured formats such as natural-language text. For this reason, it is very important to be able to automatically obtain information from sources of different kinds, then process, filter, and enrich it, in order to maximize the knowledge we can obtain from the Internet. This project consists of two distinct parts. The first explores information filtering. The system's input is a set of triples provided by the University of Coimbra (obtained through an information extraction process applied to natural-language text). However, owing to the complexity of the extraction task, some of the triples are of dubious quality and need to pass through a filtering process. Given these triples about a specific topic, the input is examined to determine which information is relevant to the topic and which should be discarded. To do so, the input is compared against an online knowledge source. The second part of this project explores information enrichment. Different online text sources written in natural language (in English) are used, and information potentially relevant to the specified topic is extracted from them. Some of these knowledge sources are written in ordinary English, and others in Simple English, a controlled subset of the language with a reduced vocabulary and simpler syntactic structures.
We study how this affects the quality of the extracted triples, and whether the information obtained from Simple English sources is of higher quality than that extracted from ordinary English sources.
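The relevance-filtering step described in this abstract, comparing extracted triples against an online knowledge source, can be sketched as follows. The triple format, the example data, and the vocabulary-overlap criterion are illustrative assumptions, not the project's actual method:

```python
# Hypothetical (subject, relation, object) triples from an extraction pipeline.
triples = [
    ("lisbon", "capital_of", "portugal"),
    ("banana", "eats", "tuesday"),       # a low-quality extraction
]

# Stand-in for terms gathered from an online knowledge source about the topic.
reference_terms = {"lisbon", "capital_of", "portugal", "city"}

def is_relevant(triple, reference, threshold=2):
    # Keep a triple if enough of its elements appear in the
    # reference vocabulary for the topic.
    return sum(term in reference for term in triple) >= threshold

kept = [t for t in triples if is_relevant(t, reference_terms)]
print(kept)  # [('lisbon', 'capital_of', 'portugal')]
```

A real system would likely match against entities and relations in the knowledge source rather than raw strings, but the shape of the filter, score each triple against external evidence and threshold, is the same.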
Abstract:
New information on the possible resource value of sea-floor manganese nodule deposits in the eastern North Pacific has been obtained from a study of records and collections of the 1972 Sea Scope Expedition. Nodule abundance (percent of sea floor covered) varies greatly, according to photographs from eight stations and data from other sources. All estimates considered reliable are plotted on a map of the region. Similar maps show the average content of Ni, Cu, Mn, and Co at 89 stations from which three or more nodules were analyzed. Variations in nodule metal content at each station are shown graphically in an appendix, where data on nodule sizes are also given. Results of new analyses of 420 nodules from 93 stations for Mn, Fe, Ni, Cu, Co, and Zn are listed in another appendix. Relatively high Ni + Cu content is restricted chiefly to four groups of stations in the equatorial region, where group averages are 1.86, 1.99, 2.47, and 2.55 weight-percent. Prepared for the United States Department of the Interior, Bureau of Mines. Grant no. GO284008-02-MAS. - NTIS PB82-142571.
Abstract:
The In Situ Analysis System (ISAS) was developed to produce gridded fields of temperature and salinity that preserve, as much as possible, the time and space sampling capabilities of the Argo network of profiling floats. Since the first global re-analysis performed in 2009, the system has evolved, and a careful delayed-mode processing of the 2002-2012 dataset has been carried out using version 6 of ISAS, updating the statistics to produce the ISAS13 analysis. This latest version is now implemented as the operational analysis tool at the Coriolis data centre. The robustness of the results with respect to the system's evolution is explored through global quantities of climatological interest: the Ocean Heat Content and the Steric Height. Estimates of errors consistent with the methodology are computed. This study shows that building reliable statistics on the fields is fundamental to improving the monthly estimates and determining the absolute error bars. The new mean fields and variances deduced from the ISAS13 re-analysis and dataset show significant changes relative to previous ISAS estimates, in particular in the Southern Ocean, justifying the iterative procedure. During the decade covered by Argo, the intermediate waters appear warmer and saltier in the North Atlantic and fresher in the Southern Ocean than in the WOA05 long-term mean. At the inter-annual scale, the impact of ENSO on the Ocean Heat Content and Steric Height is observed during the 2006-2007 and 2009-2010 events captured by the network.