492 results for Apache Cordova


Relevance:

10.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

10.00%

Publisher:

Abstract:

With this is bound Tschudi, Johann Jakob von. Reise durch die Andes von Süd-Amerika, von Cordova nach Cobija im jahre 1858; Barth, Heinrich. Dr. H. Barth's reise von Trapezunt durch die nördliche hälfte Klein-Asiens nach Scutari im herbst 1858; Lejean, Guillaume. Ethnographie de la Turquie d'Europe; Wagner, Moritz. Beiträge zu einer physisch-geographischen skizze des Isthmus von Panama; Hassenstein, Bruno. Ost-Africa zwischen Chartum und dem Rothen meere bis Suakin und Massaua.

Relevance:

10.00%

Publisher:

Abstract:

Printed at the Riverside Press, Cambridge, Mass.

Relevance:

10.00%

Publisher:

Abstract:

The land of poco tiempo [New Mexico]--"Lo" who is not poor.--The city in the sky.--The Penitent brothers.--The chase of the chongo.--The wanderings of Cochiti.--The Apache warrior.--On the trail of the renegades.--New Mexican folk-songs.--A day of the saints.--The cities that were forgotten.

Relevance:

10.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

10.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

10.00%

Publisher:

Abstract:

The sharing of near real-time traceability knowledge in supply chains plays a central role in coordinating business operations and is a key driver for their success. However, before traceability datasets received from external partners can be integrated with datasets generated internally within an organisation, they need to be validated against information recorded for the physical goods received, as well as against bespoke rules defined to ensure uniformity, consistency and completeness within the supply chain. In this paper, we present a knowledge-driven framework for the runtime validation of critical constraints on incoming traceability datasets encapsulated as EPCIS event-based linked pedigrees. Our constraints are defined using SPARQL queries and SPIN rules. We present a novel validation architecture based on the integration of the Apache Storm framework for real-time, distributed computation with popular Semantic Web/Linked Data libraries, and exemplify our methodology on an abstraction of the pharmaceutical supply chain.
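The paper's validation idea can be illustrated with a minimal sketch: a SPARQL completeness constraint executed over an incoming event graph, here using Python's rdflib. The epcis: vocabulary and property names below are illustrative assumptions rather than the authors' actual linked-pedigree schema, and the paper runs such checks inside Apache Storm rather than standalone as done here.

# Minimal sketch of SPARQL-based constraint validation, in the spirit of
# the paper's approach. The "epcis:" vocabulary is a hypothetical stand-in
# for the authors' actual EPCIS/linked-pedigree schema.
from rdflib import Graph

# Toy incoming "pedigree": one complete event, one missing its business step.
DATA = """
@prefix epcis: <http://example.org/epcis#> .

<urn:event:1> a epcis:ObjectEvent ;
    epcis:eventTime "2014-05-01T10:00:00Z" ;
    epcis:bizStep epcis:shipping .

<urn:event:2> a epcis:ObjectEvent ;
    epcis:eventTime "2014-05-01T11:00:00Z" .
"""

# Completeness constraint: report every event lacking a required property.
CONSTRAINT = """
PREFIX epcis: <http://example.org/epcis#>
SELECT ?event WHERE {
    ?event a epcis:ObjectEvent .
    FILTER NOT EXISTS { ?event epcis:bizStep ?step }
}
"""

g = Graph()
g.parse(data=DATA, format="turtle")

for row in g.query(CONSTRAINT):
    print(f"constraint violated by {row.event}")  # urn:event:2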

Relevance:

10.00%

Publisher:

Abstract:

2002 Mathematics Subject Classification: 62P10.

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this research study was to determine if the Advanced Placement program, as it is recognized by the universities in the Florida State University System (SUS), truly serves as an acceleration mechanism for those students who enter an SUS institution with passing AP scores. Despite mandates which attempt to ensure uniformity of policy, each public university in Florida determines which courses will be exempted and the number of credits it will grant for passing Advanced Placement courses. This is a descriptive study in which the AP policies of each of the SUS institutions were compared. Additionally, the college attendance and graduation data on members of a cohort of 593 Broward County high school graduates of the class of June 1992 were compared. Approximately 28% of the cohort members entered university with passing Advanced Placement scores. The rate of early and on-time graduation was significantly dependent on the Advanced Placement standing of the students in the cohort. Given the financial and human cost involved, it is recommended that all state universities bring their Advanced Placement policies into line with each other and implement a uniform Advanced Placement policy. It is also recommended that a follow-up study be conducted with a new cohort bound under the current 120-credit limitation for graduation.

Relevance:

10.00%

Publisher:

Abstract:

Over the last decade we have witnessed the transition of a large share of businesses from offline to online. Thanks to the new relationship between company and customer afforded by technology, many marketing methods could be revolutionized almost overnight. The web has enabled broad-spectrum analysis of users and their opinions: measuring with precision the rate at which users convert into purchases, as reported by advertising platforms, and following their behavior at scale across the web, operations that have always been extremely difficult in the physical world. Several commercial applications are available for these tasks, but their cost can be substantial for companies. This thesis aims to provide an analysis of an open source platform for collecting data from the web into a structured database.

Relevance:

10.00%

Publisher:

Abstract:

This thesis describes a web service able to extract data from an Oracle database used by an existing management system and return it as output to a portal where citizens can log in and view it. The management system handles the coercive-collection phase, i.e. when a taxpayer does not pay a fine or any other tax; until now it was not possible to show a taxpayer's coercive-collection status online. Thanks to the web service developed here, a portal can display this data to the taxpayer. Moreover, in the future other portals or smartphone apps will also be able to connect to the web service to offer further services to citizens.
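The abstract does not name an implementation stack, so the following is only a sketch of the shape such a read-only web service could take, here using Flask and the python-oracledb driver; the connection details, the endpoint and the coattivo table and column names are all hypothetical.

# Sketch of a read-only web service over the management system's Oracle
# database. Flask, the DSN and the COATTIVO table/columns are assumptions,
# not the thesis' actual stack or schema.
import oracledb
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder credentials for the existing management-system database.
pool = oracledb.create_pool(user="portal_ro", password="secret",
                            dsn="dbhost:1521/GESTIONALE", min=1, max=4)

@app.route("/contribuenti/<codice_fiscale>/coattivo")
def coercive_status(codice_fiscale):
    """Return the coercive-collection records for one taxpayer."""
    with pool.acquire() as conn:
        cur = conn.cursor()
        # Hypothetical table: one row per unpaid fine/tax in coercive phase.
        cur.execute("SELECT numero_atto, importo, data_notifica "
                    "FROM coattivo WHERE codice_fiscale = :cf",
                    cf=codice_fiscale)
        rows = [{"atto": r[0], "importo": float(r[1]), "notifica": str(r[2])}
                for r in cur]
    return jsonify(rows)

if __name__ == "__main__":
    app.run(port=8080)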

Relevance:

10.00%

Publisher:

Abstract:

Cloud computing enables independent end users and applications to share data and pooled resources, possibly located in geographically distributed data centers, in a fully transparent way. This capability is particularly valuable to scientific applications, which must exploit distributed resources in an efficient and scalable way to process large amounts of data. This paper proposes an open solution to deploy a Platform as a Service (PaaS) over a set of multi-site data centers by applying open source virtualization tools to facilitate operation among virtual machines while optimizing the usage of distributed resources. An experimental testbed is set up in an OpenStack environment to obtain evaluations with different types of TCP sample connections, to demonstrate the functionality of the proposed solution and to obtain throughput measurements in relation to relevant design parameters.
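As a rough illustration of the kind of throughput measurement the testbed performs, the sketch below times a stream of TCP writes between two sockets and reports MiB/s; the paper's actual measurements run between OpenStack virtual machines across data centers, not over the loopback interface used here.

# Generic TCP throughput probe: stream a fixed payload over a socket and
# compute MiB/s from wall-clock time. Approximate, since sendall() returns
# once data is handed to the kernel, not once it is received.
import socket
import threading
import time

PAYLOAD = b"x" * (1 << 20)   # 1 MiB chunk
CHUNKS = 256                 # 256 MiB in total

def drain(listener):
    conn, _ = listener.accept()
    while conn.recv(1 << 16):  # read until the client closes
        pass
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # loopback only; a real test spans hosts
listener.listen(1)
threading.Thread(target=drain, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
start = time.perf_counter()
for _ in range(CHUNKS):
    client.sendall(PAYLOAD)
client.close()
print(f"throughput: {CHUNKS / (time.perf_counter() - start):.1f} MiB/s")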

Relevance:

10.00%

Publisher:

Abstract:

Software bug analysis is one of the most important activities in software quality. The rapid and correct implementation of the necessary repair affects both developers, who must deliver fully functioning software, and users, who need to perform their daily tasks. In this context, an incorrect classification of bugs can lead to unwanted situations. One of the main attributes assigned to a bug in its initial report is severity, which reflects the urgency of correcting the problem. In this scenario, we identified, in datasets extracted from five open source systems (Apache, Eclipse, Kernel, Mozilla and Open Office), an irregular distribution of bugs with respect to the existing severities, which is an early sign of misclassification. In the datasets analyzed, about 85% of bugs are ranked with normal severity. This classification rate can have a negative influence on the software development process, where a misclassified bug may be allocated to a developer with too little experience to solve it, so that its correction takes longer or even produces an incorrect implementation. Several studies in the literature have disregarded normal bugs, working only with the portion of bugs initially considered severe or not severe. This work investigated that portion of the data, with the purpose of identifying whether normal severity reflects the real impact and urgency, whether there are bugs (initially classified as normal) that could be classified with another severity, and whether there are impacts for developers in this context. For this, an automatic classifier was developed, based on three algorithms (Naïve Bayes, MaxEnt and Winnow), to assess whether normal severity is correct for the bugs initially categorized with it. The algorithms presented an accuracy of about 80% and showed that between 21% and 36% of the bugs should have been classified differently (depending on the algorithm), which represents somewhere between 70,000 and 130,000 bugs of the dataset.
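Of the three algorithms, a bag-of-words Naïve Bayes model is the most straightforward to sketch, here with scikit-learn; the toy bug reports are invented stand-ins for the Apache, Eclipse, Kernel, Mozilla and Open Office reports the study actually trained on.

# Sketch of the Naive Bayes variant of the study's classifier: bag-of-words
# features over bug-report text, binary severe/non-severe labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reports = [
    "crash on startup, data loss after segfault",
    "kernel panic when mounting the volume",
    "typo in the preferences dialog label",
    "minor misalignment of the toolbar icon",
]
severities = ["severe", "severe", "non-severe", "non-severe"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reports, severities)

# Re-assess a bug originally filed with "normal" severity.
print(model.predict(["application crashes and corrupts the saved file"]))
# -> ['severe']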

Relevance:

10.00%

Publisher:

Abstract:

This Master's thesis develops a real-time credit-card fraud detection system using distributed processing technologies. Specifically, two technologies are considered: TIBCO, a suite of commercial tools designed for complex event processing, and Apache Spark, an open system for real-time data processing. Besides implementing the system with both proposed technologies, another objective of this thesis is to analyze and compare the two resulting real-time processing systems. Fraud detection in credit-card payments applies machine learning techniques, specifically from the field of anomaly/outlier detection. As data sources feeding the systems, we use message-queue technologies such as TIBCO EMS and Kafka. The generated data are sent to these queues so that each system can process them and apply the machine learning algorithm, determining whether a new instance is fraudulent or not. Both systems use a MongoDB database to store the data, corresponding to credit-card movements, generated pseudo-randomly by the message generators. These movements are later used as the training set for the machine learning algorithm.
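The Spark side of such a pipeline can be sketched with Structured Streaming reading from Kafka; the transactions topic name, the JSON schema and the fixed z-score rule below are assumptions, the thesis applying a trained anomaly/outlier-detection model to the pseudo-randomly generated movements instead.

# Sketch of the Spark half of the pipeline: consume credit-card movements
# from Kafka and flag anomalies. Needs the spark-sql-kafka connector on the
# classpath. The 3-sigma rule stands in for the thesis' trained model.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("fraud-detection").getOrCreate()

schema = StructType([
    StructField("card_id", StringType()),
    StructField("amount", DoubleType()),
])

# Hypothetical per-card statistics, e.g. precomputed from the MongoDB history.
MEAN, STDDEV = 55.0, 30.0

txns = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "transactions")
        .load()
        .select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
        .select("t.*"))

# Flag any movement more than three standard deviations above the mean.
flagged = txns.withColumn("fraud", (F.col("amount") - MEAN) / STDDEV > 3.0)

(flagged.writeStream
 .format("console")
 .outputMode("append")
 .start()
 .awaitTermination())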