957 results for SQL Query generation from examples
Abstract:
We consider return-to-zero (RZ) pulses with random phase modulation propagating in a nonlinear channel (modelled by the integrable nonlinear Schrödinger equation, NLSE). We suggest two different models for the phase fluctuations of the optical field: (i) Gaussian short-correlated fluctuations and (ii) a generalized telegraph process. Using a rectangular pulse shape, we demonstrate that the presence of phase fluctuations of both types strongly influences the number of solitons generated in the channel. It is also shown that increasing the correlation time of the random phase fluctuations affects the coherent content of a pulse in a non-trivial way. The results obtained have potential consequences for all-optical processing and the design of optical decision elements.
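For reference, the channel model named here is the focusing NLSE; in the dimensionless form standard in fiber optics (a common normalization, not necessarily the authors' exact one) it reads:

    % Focusing NLSE: z is propagation distance, t is retarded time,
    % q(z,t) is the complex field envelope.
    i\,\frac{\partial q}{\partial z} + \frac{1}{2}\,\frac{\partial^{2} q}{\partial t^{2}} + |q|^{2} q = 0

    % For a noise-free rectangular pulse of amplitude A and duration T,
    % the associated Zakharov-Shabat spectral problem gives roughly
    N \approx \left\lfloor \frac{AT}{\pi} + \frac{1}{2} \right\rfloor \ \text{solitons}

Intuitively, random phase modulation degrades the coherent pulse area that feeds this count, which is one way to see why the fluctuations studied here can change the number of solitons generated.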
Abstract:
Generation of picosecond pulses with a peak power in excess of 7 W and a duration of 24 ps from a gain-switched InGaN diode laser is demonstrated for the first time.
Abstract:
This thesis presents a detailed, experiment-based study of the generation of ultrashort optical pulses from diode lasers. Simple and cost-effective techniques were used to generate high-power, high-quality short optical pulses in various wavelength windows. The major achievements presented in the thesis are summarised as follows. High-power pulse generation is one of the major topics discussed in the thesis. Although gain switching is the simplest way to generate ultrashort pulses, it proves quite effective for delivering high-energy pulses, provided that pumping pulses with an extremely fast rise time and sufficiently high amplitude are applied to specially designed pulse generators. In an experiment on a grating-coupled surface-emitting laser (GCSEL), a peak power as high as 1 W was achieved even when the spectral bandwidth was kept within 0.2 nm. In another experiment, violet picosecond pulses with a peak power as high as 7 W were achieved when intense electrical pulses, applied on top of an optimised DC bias, were used to pump an InGaN violet diode laser. The physical mechanism of this phenomenon may be attributed to the self-organised quantum-dot structure in the laser. Control of pulse quality, including spectral quality and temporal profile, is an important issue for high-power pulse generation. The methods of pulse-quality control described in the thesis are likewise based on simple and effective techniques. For instance, the GCSEL used in our experiments has a specially designed air-grating structure for out-coupling of the optical signal; a tiny flat aluminium mirror placed close to the grating section yielded a wavelength tuning range of over 100 nm and a best side-band suppression ratio of 40 dB. Self-seeding, an effective technique for spectral control of pulsed lasers, was demonstrated for the first time in a violet diode laser. In addition, control of the temporal profile of the pulse is demonstrated in an overdriven DFB laser, where wavelength-tunable fibre Bragg gratings were used to tailor the large energy tail of the high-power pulse; the whole system was compact and robust. The ultimate purpose of our study is to design a new family of compact ultrafast diode lasers, and some practical ideas for laser design based on gain-switched and Q-switched devices are provided at the end.
Abstract:
This letter compares two nonlinear media, a semiconductor optical amplifier and a highly nonlinear optical fiber, for simultaneous carrier recovery and generation of frequency-symmetric signals from a 42.7-Gb/s nonreturn-to-zero binary phase-shift-keyed input by exploiting four-wave mixing, for use in a phase-sensitive amplifier.
Abstract:
A compact all-room-temperature frequency-doubling scheme generating cw orange light with a periodically poled potassium titanyl phosphate waveguide and a quantum-dot external cavity diode laser is demonstrated. A frequency-doubled power of up to 4.3 mW at the wavelength of 612.9 nm with a conversion efficiency exceeding 10% is reported. Second harmonic wavelength tuning between 612.9 nm and 616.3 nm by changing the temperature of the crystal is also demonstrated.
Abstract:
This paper presents the current status of our research in mode-locked quantum-dot edge-emitting laser diodes, particularly highlighting recent progress in the spectral and temporal versatility of both monolithic and external-cavity laser configurations. Spectral versatility is demonstrated through broadband tunability and novel mode-locking regimes that involve distinct spectral bands, such as dual-wavelength mode-locking, and robust high-power wavelength bistability. Broad tunability of the pulse repetition rate is also demonstrated for an external-cavity mode-locked quantum-dot laser, revealing a nearly constant pulse peak power at different pulse repetition rates. High-energy and low-noise pulse generation is demonstrated at low pulse repetition rates. These recent advances confirm the potential of quantum-dot lasers as versatile, compact, and low-cost sources of ultrashort pulses.
Abstract:
Self-seeded, gain-switched operation of an InGaN multi-quantum-well diode laser is reported for the first time. Narrow-line, wavelength-tunable picosecond pulses have been generated from a standard, uncoated diode laser in an external cavity.
Abstract:
We present a compact, all-room-temperature continuous-wave laser source in the visible spectral region between 574 and 647 nm by frequency doubling of a broadly tunable InAs/GaAs quantum-dot external-cavity diode laser in a periodically poled potassium titanyl phosphate crystal containing three waveguides with different cross-sectional areas (4 μm × 4 μm, 3 μm × 5 μm, and 2 μm × 6 μm). The influence of the waveguide design on the tunability, output power, and mode distribution of the second-harmonic light, as well as possibilities for increasing the conversion efficiency by optimizing the waveguide cross-section, was systematically investigated. A maximum output power of 12.04 mW with a conversion efficiency of 10.29% at 605.6 nm was demonstrated in the widest waveguide, with a cross-sectional area of 4 μm × 4 μm.
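As a quick consistency check on the quoted figures, if the conversion efficiency is taken as the plain power ratio (one common convention; waveguide-SHG work sometimes quotes a normalized efficiency in %/W instead, so this definition is an assumption), the implied fundamental power is:

    \eta = \frac{P_{2\omega}}{P_{\omega}} \quad\Rightarrow\quad
    P_{\omega} \approx \frac{12.04\ \text{mW}}{0.1029} \approx 117\ \text{mW}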
Abstract:
An approach is proposed for inferring implicative logical rules from examples. The concept of a good diagnostic test for a given set of positive examples lies at the basis of this approach. The process of inferring good diagnostic tests is treated as a process of inductive common-sense reasoning. An incremental learning approach is implemented in the algorithm DIAGaRa for inferring implicative rules from examples.
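The paper's DIAGaRa algorithm is not reproduced here, but a minimal Python sketch can illustrate the underlying notion of a good diagnostic test: an attribute-value combination that occurs among the positive examples yet matches no negative example, and which therefore yields an implicative rule. All names and the toy data below are invented for illustration.

    from itertools import combinations

    def implicative_rules(positives, negatives, conclusion, max_size=2):
        """Sketch only (not the paper's DIAGaRa): enumerate attribute-value
        combinations seen in positive examples and keep those matching no
        negative example; each survivor is a rule 'premise -> conclusion'.
        A real learner would also prune non-minimal premises."""
        rules = []
        attrs = sorted(positives[0])
        for size in range(1, max_size + 1):
            for subset in combinations(attrs, size):
                seen = {tuple(p[a] for a in subset) for p in positives}
                for values in seen:
                    # 'Good diagnostic test': no negative example matches it.
                    if not any(all(n[a] == v for a, v in zip(subset, values))
                               for n in negatives):
                        rules.append((dict(zip(subset, values)), conclusion))
        return rules

    # Toy usage with two attributes.
    pos = [{"odor": "none", "cap": "flat"}, {"odor": "none", "cap": "bell"}]
    neg = [{"odor": "foul", "cap": "flat"}]
    for premise, concl in implicative_rules(pos, neg, "edible"):
        print(premise, "->", concl)   # e.g. {'odor': 'none'} -> edible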
Dark soliton generation from semiconductor optical amplifier gain medium in ring fiber configuration
Abstract:
We have investigated mode-locked operation of a semiconductor optical amplifier (SOA) gain chip in a ring-fibre configuration. At lower pump currents, the laser generates dark-soliton pulses at the fundamental repetition rate of 39 MHz and supports harmonics up to the 6th order, corresponding to a 234-MHz repetition rate with an output power of ∼2.1 mW. At higher pump currents, the laser can be switched between bright, dark, and concurrent bright-and-dark soliton generation regimes.
Abstract:
RDB to RDF Mapping Language (R2RML) is a W3C recommendation that allows specifying rules for transforming relational databases into RDF. This RDF data can be materialized and stored in a triple store, where SPARQL queries can then be evaluated. However, there are several cases where materialization is not adequate or possible, for example, when the underlying relational database is updated frequently. In those cases, the RDF data is better kept virtual, and SPARQL queries over it have to be translated into SQL queries against the underlying relational database system, a translation that must take the specified R2RML mappings into account. The first part of this thesis focuses on query translation. We propose a formalization of the translation from SPARQL to SQL queries that takes R2RML mappings into account. Furthermore, we propose several optimization techniques so that the translation procedure generates SQL queries that can be evaluated more efficiently over the underlying databases. We evaluate our approach using a synthetic benchmark and several real cases, and report positive results. Direct Mapping (DM) is another W3C recommendation for the generation of RDF data from relational databases: while R2RML allows users to specify their own transformation rules, DM establishes fixed rules. Although both recommendations were published at the same time, in September 2012, there has been no formal study of the relationship between them. The second part of this thesis therefore studies the relationship between R2RML and DM in two directions: from R2RML to DM, and from DM to R2RML. From R2RML to DM, we study a fragment of R2RML with the same expressive power as DM. From DM to R2RML, we represent the DM transformation rules as R2RML mappings and also add the implicit semantics encoded in databases, such as subclass, 1-N, and M-N relationships. This thesis shows that, by formalizing and optimizing R2RML-based SPARQL-to-SQL query translation, it is possible to use R2RML engines in real cases, as the resulting SQL is efficient enough to be evaluated by the underlying relational databases. In addition, this thesis deepens the understanding of the relationship between the two W3C recommendations, something that had not been studied before.
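A deliberately simplified Python sketch of the kind of translation the thesis formalizes is shown below, for a single SPARQL triple pattern. The mapping format is a dict invented for this illustration; real R2RML mappings are RDF documents, and the thesis covers the full SPARQL algebra, joins, and optimizations.

    # Toy R2RML-style mapping (invented format for illustration):
    # predicate IRI -> (table, subject-key column, value column).
    MAPPING = {
        "http://example.org/name": ("EMP", "empno", "ename"),
        "http://example.org/job":  ("EMP", "empno", "job"),
    }

    def translate_triple_pattern(predicate_iri, object_term):
        """Translate one SPARQL triple pattern (?s <p> o) into (sql, params)."""
        table, key_col, val_col = MAPPING[predicate_iri]
        if object_term.startswith("?"):
            # Object is a variable: project both subject key and value.
            return f"SELECT {key_col}, {val_col} FROM {table}", ()
        # Object is a literal: filter on it, using a placeholder.
        return (f"SELECT {key_col} FROM {table} WHERE {val_col} = ?",
                (object_term,))

    # SPARQL: SELECT ?s WHERE { ?s <http://example.org/job> "CLERK" }
    sql, params = translate_triple_pattern("http://example.org/job", "CLERK")
    print(sql, params)  # SELECT empno FROM EMP WHERE job = ? ('CLERK',)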
Abstract:
In recent years, Twitter has become one of the most important microblogging services of the Web 2.0. Among its possible uses, it can be employed for communicating and broadcasting information in real time. The goal of this research is to analyze the task of automatic tweet generation from a text-summarization perspective in the context of the journalism genre. To achieve this, different state-of-the-art summarizers are selected and employed to produce multilingual tweets in two languages (English and Spanish). A wide experimental framework is proposed, comprising the creation of a new corpus, the generation of the automatic tweets, and their assessment through a quantitative and a qualitative evaluation, where informativeness, indicativeness, and interest are key criteria that should be ensured in the proposed context. From the results obtained, we observed that although the original tweets were considered model tweets with respect to their informativeness, they were not among the most interesting ones from a human viewpoint. Therefore, relying only on these tweets may not be the ideal way to communicate news through Twitter, especially if a more personalized and catchy style of reporting news is desired. In contrast, we showed that recent text-summarization techniques may be more appropriate, reflecting a balance between indicativeness and interest, even if their content differed from the tweets delivered by the news providers.
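As a minimal illustration of the task (not the state-of-the-art summarizers evaluated in the paper), an extractive "tweet generator" can score sentences by word frequency and return the best one that fits the length limit; everything below is a toy sketch.

    import re
    from collections import Counter

    def tweet_from_article(text, limit=140):
        """Toy extractive tweet generation: pick the sentence with the highest
        average word frequency that fits within the character limit."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"\w+", text.lower()))
        def score(s):
            toks = re.findall(r"\w+", s.lower())
            return sum(freq[t] for t in toks) / (len(toks) or 1)
        candidates = [s for s in sentences if len(s) <= limit]
        return max(candidates, key=score) if candidates else text[:limit]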
Abstract:
Because some Web users are able to design a template to visualize information from scratch, while other users need information visualized automatically by changing a few parameters, providing different levels of customization of the information is a desirable goal. Our system allows the automatic generation of visualizations given the semantics of the data, as well as static or pre-specified visualizations, by creating an interface language. We address information visualization with the Web in mind, where the presentation of the retrieved information is a challenge.

We provide a model that narrows the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query-specification process. We develop a Web interface model that is integrated with the HTML language to create a powerful language that facilitates the construction of Web-based database reports.

Unlike previous work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to connect the database to the Web easily. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front-ends typically attempt to display database objects in a flat view, making it difficult for users to grasp the contents and structure of their result. Our model narrows the gap between databases and the Web.

The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application, which increases the speed of development. In addition, using only a Web browser, the end user can retrieve data from remote databases and make the necessary modifications and manipulations of the data using Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but the browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build a syntactically and semantically valid SQL query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure safe and efficient SQL execution. (Abstract shortened by UMI.)
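A minimal Python sketch of the last idea, under assumed table and column names: build a SELECT statement from user form choices and validate every identifier against the schema before anything reaches the DBMS.

    # The schema dict and all names below are assumptions for illustration.
    SCHEMA = {"employees": {"id", "name", "salary", "dept"}}

    def build_query(table, columns, filters):
        """Generate a SELECT from user choices, rejecting unknown tables or
        columns so that only schema-valid SQL is ever executed."""
        if table not in SCHEMA:
            raise ValueError(f"unknown table: {table}")
        bad = [c for c in list(columns) + list(filters) if c not in SCHEMA[table]]
        if bad:
            raise ValueError(f"unknown columns: {bad}")
        sql = f"SELECT {', '.join(columns)} FROM {table}"
        params = tuple(filters.values())
        if filters:
            # Placeholders keep user-supplied values out of the SQL text.
            sql += " WHERE " + " AND ".join(f"{c} = ?" for c in filters)
        return sql, params

    print(build_query("employees", ["name"], {"dept": "sales"}))
    # ('SELECT name FROM employees WHERE dept = ?', ('sales',))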
Abstract:
In today's big-data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership, and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required, so reducing the cost of data analytics in the Cloud remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which gives data scientists user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego-network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the portions of the graph relevant to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
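The neighborhood-centric idea can be illustrated with a single-machine Python sketch (NSCALE itself is a distributed system; the helper below is invented for illustration): user code receives an extracted k-hop subgraph rather than the state of one vertex.

    from collections import deque

    def k_hop_subgraph(adj, center, k):
        """Extract the k-hop neighborhood of `center` from an adjacency dict
        {node: set(neighbors)} by BFS; the returned dict is the subgraph a
        neighborhood-centric task would operate on."""
        seen, frontier = {center}, deque([(center, 0)])
        while frontier:
            node, depth = frontier.popleft()
            if depth == k:
                continue
            for nbr in adj.get(node, ()):
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append((nbr, depth + 1))
        return {n: adj.get(n, set()) & seen for n in seen}

    # Neighborhood-centric task: count edges in each node's 1-hop ego network.
    adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
    for v in adj:
        ego = k_hop_subgraph(adj, v, 1)
        print(v, sum(len(ns) for ns in ego.values()) // 2)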