954 results for: platform, independent, mobile, Sencha, touch, MVC, pattern, JavaScript
Abstract:
This thesis was done as a complementary part of the active magnetic bearing (AMB) control software development project at Lappeenranta University of Technology. The main focus of the thesis is to examine the idea of a real-time operating system (RTOS) framework that operates in a dedicated digital signal processor (DSP) environment. General-purpose real-time operating systems do not necessarily provide a sufficient platform for running periodic control algorithms. In addition, application programming interfaces in real-time operating systems are commonly non-existent or provided only as chip-support libraries, thus hindering platform-independent software development. Hence, two divergent real-time operating systems and additional periodic extension software, together with the framework design, are examined to find solutions to the research problems. The research is carried out by tracing the selected real-time operating system, formulating requirements for the system, and designing the real-time operating system framework (OSFW). The OSFW is formed by programming the framework and combining the outcome with the RTOS and the periodic extension. The system is tested and the functionality of the software is evaluated in the theoretical context of Rate Monotonic Scheduling (RMS) theory. The performance of the OSFW and the substance of the approach are discussed in relation to the research theme. The findings of the thesis demonstrate that the resulting real-time operating system framework is a viable groundwork solution for periodic control applications.
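The RMS evaluation mentioned in this abstract normally rests on the Liu and Layland utilisation bound: a set of n periodic tasks with execution times Ci and periods Ti is schedulable under rate monotonic priorities if the total utilisation does not exceed n(2^(1/n) - 1). The following is a minimal sketch of that sufficient test; the function name and the example task parameters are hypothetical and not taken from the thesis.

```python
# Liu & Layland sufficient schedulability test for Rate Monotonic Scheduling.
# Task parameters below are hypothetical placeholders, not from the thesis.

def rms_schedulable(tasks):
    """tasks: list of (worst_case_execution_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)  # sufficient, not necessary, condition
    return utilization, bound, utilization <= bound

# Three hypothetical periodic control tasks: (execution time, period) in milliseconds.
u, b, ok = rms_schedulable([(1.0, 5.0), (2.0, 10.0), (3.0, 20.0)])
print(f"U = {u:.3f}, bound = {b:.3f}, schedulable by the RMS bound: {ok}")
```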
Abstract:
Service-oriented architecture is a new way of building information systems. It is based on composing application logic into general-purpose services that are offered for use by other parts of the system. In this way, the same functionality does not have to be implemented several times, and the system can be exploited efficiently and in versatile ways. Enterprise service buses, i.e. ESB products, can be used to manage these services. Service buses contain various mechanisms for routing, transforming, and monitoring the message traffic related to the services. Current service-oriented implementations often use Web Service specifications based on XML. These provide a platform-independent foundation that directly fulfils several requirements of service-oriented architecture. Many ready-made extensions have also been built around the specifications, allowing additional functionality to be attached to the services. As part of the Fenix project, the city of Lahti set out to develop a new system, suitable for use by municipalities, that applies the principles of service-oriented architecture. The system was divided into clear layers, with the user interface separated from the service logic by means of a service bus. The system was thus split into logical units with clearly defined roles. The back-end services handle the management of business concepts and the related business rules. The user interface layer handles the presentation of data and offers a graphical, browser-based interface to the services. The service bus routes the traffic and takes care of access rights and usage statistics for the services. The end result is an indefinitely extensible system on top of which various electronic services between the municipality and its residents can be built.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic ones where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
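To make the execution model described in this abstract concrete — nodes that communicate only through queues and that fire once sufficient input tokens are available — the following is a minimal, hypothetical sketch in Python. It is not RVC-CAL and not taken from the thesis; the node names, toy graph, and naive dynamic scheduler are illustrative only.

```python
from collections import deque

# A dataflow node may fire only when every input queue holds enough tokens,
# and it communicates exclusively through its queues.
class Node:
    def __init__(self, func, inputs, outputs, tokens_needed=1):
        self.func = func                    # pure computation applied to consumed tokens
        self.inputs = inputs                # input queues (deques)
        self.outputs = outputs              # output queues (deques)
        self.tokens_needed = tokens_needed

    def can_fire(self):
        return all(len(q) >= self.tokens_needed for q in self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs]   # consume inputs
        result = self.func(*args)                   # perform the calculation
        for q in self.outputs:
            q.append(result)                        # produce outputs

# Toy graph: source queue -> scale -> add_one -> sink queue.
src, mid, sink = deque([1, 2, 3]), deque(), deque()
scale = Node(lambda x: 2 * x, [src], [mid])
add_one = Node(lambda x: x + 1, [mid], [sink])

# A trivial, fully dynamic scheduler: repeatedly fire any node that can fire.
nodes = [scale, add_one]
while any(n.can_fire() for n in nodes):
    for n in nodes:
        if n.can_fire():
            n.fire()

print(list(sink))   # [3, 5, 7]
```

A quasi-static scheduler, as discussed in the abstract, would replace the while-loop above with pre-computed firing sequences, leaving only a few data-dependent decisions to run-time.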
Abstract:
The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and works as a pipeline with five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or in a local version of the web service. All scripts in PMmA were developed in the PERL programming language, and statistical analysis functions were implemented in the R statistical language. Consequently, our package is platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, showing superior performance compared with other methods of analysis. The pipeline software has been applied to public macroarray data of 1536 expressed sequence tags from sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset. Fourteen genes were up-regulated, two showed variable expression, and the other fourteen were down-regulated in the treatments. These new findings were certainly a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA.
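The intensity dependence mentioned at the end of this abstract is the reason two-colour array data are usually examined on an M-A scale (per-gene log-ratio versus mean log-intensity) rather than with a fixed fold-change cut-off. The sketch below is a hypothetical illustration of that idea, not PMmA's actual algorithm: it flags a gene as differentially expressed only if its log-ratio is unusual relative to genes of similar average intensity.

```python
import numpy as np

def ma_values(red, green):
    """Per-gene log-ratio (M) and mean log-intensity (A) for two-colour data."""
    red, green = np.asarray(red, float), np.asarray(green, float)
    m = np.log2(red) - np.log2(green)
    a = 0.5 * (np.log2(red) + np.log2(green))
    return m, a

def intensity_dependent_calls(m, a, window=101, z_cut=2.5):
    """Flag genes whose M value is extreme compared with genes of similar A."""
    order = np.argsort(a)                 # rank genes by average intensity
    calls = np.zeros(len(m), dtype=bool)
    half = window // 2
    for rank, idx in enumerate(order):
        lo, hi = max(0, rank - half), min(len(m), rank + half + 1)
        neighbours = m[order[lo:hi]]      # genes with comparable intensity
        mu, sd = neighbours.mean(), neighbours.std() + 1e-9
        calls[idx] = abs(m[idx] - mu) / sd > z_cut
    return calls
```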
Abstract:
Dynamic logic is an extension of modal logic originally intended for reasoning about computer programs. The method of proving correctness properties of a computer program using the well-known Hoare logic can be implemented by utilizing the robustness of dynamic logic. For a very broad range of languages and applications in program verification, a theorem prover named KIV (Karlsruhe Interactive Verifier) has already been developed. However, its high degree of automation and its complexity make it difficult to use for educational purposes. My research work is aimed at the design and implementation of a similar interactive theorem prover with educational use as its main design criterion. As the key purpose of this system is to serve as an educational tool, it is a self-explanatory system that explains every step of creating a derivation, i.e., of proving a theorem. This deductive system is implemented in the platform-independent programming language Java. In addition, a popular combination of the lexical analyzer generator JFlex and the parser generator BYacc/J has been used for parsing formulas and programs.
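The relationship between Hoare logic and dynamic logic alluded to above can be stated precisely: a Hoare triple for partial correctness corresponds to a dynamic logic formula built with the box modality. The following standard formulation is given for illustration and is not quoted from the thesis.

```latex
% Hoare triple expressed in dynamic logic (partial correctness):
% {phi} alpha {psi} is valid  iff  phi -> [alpha]psi is valid,
% where [alpha]psi holds in a state whenever every terminating execution of
% the program alpha from that state ends in a state satisfying psi.
\{\varphi\}\,\alpha\,\{\psi\}
\quad\text{is valid}
\quad\Longleftrightarrow\quad
\varphi \rightarrow [\alpha]\,\psi
\quad\text{is valid.}
```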
Abstract:
This exhibition brings together material from the first decade of platform-independent design. It introduces the mature proprietary digital technology that existed just before 1985, and presents artefacts that represent key chapters in the transition to platform-independent digital typefaces, in combination with digital tools for page layout. The exhibition includes the first issues of the key journals of the period, which both represented the new approaches and offered critiques of the impact of the new technologies on typographic design.
Abstract:
E-Science experiments typically involve many distributed services maintained by different organisations. After an experiment has been executed, it is useful for a scientist to verify that the execution was performed correctly or is compatible with some existing experimental criteria or standards, not necessarily anticipated prior to execution. Scientists may also want to review and verify experiments performed by their colleagues. There are no existing frameworks for validating such experiments in today's e-Science systems. Users therefore have to rely on error checking performed by the services, or adopt other ad hoc methods. This paper introduces a platform-independent framework for validating workflow executions. The validation relies on reasoning over the documented provenance of experiment results and semantic descriptions of services advertised in a registry. This validation process ensures that experiments are performed correctly, and thus that the results generated are meaningful. The framework is tested in a bioinformatics application that performs protein compressibility analysis.
Abstract:
Existing distributed hydrologic models are complex and computationally demanding to use as a rapid-forecasting policy-decision tool, or even as a classroom educational tool. In addition, platform dependence, specific input/output data structures, and non-dynamic data interaction with pluggable software components inside the existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the commonly used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, along with subsequent parameter optimization and visualization schemes. RWater provides a platform-independent web-based interface, flexible data integration capacity, grid-based simulations, and user-extensibility. RWater uses RStudio to simulate hydrologic processes on raster-based data obtained through conventional GIS pre-processing. The program integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater is demonstrated by application to two watersheds in Indiana for multiple rainfall events.
Abstract:
Acid phosphatases (AcPs) are known to provide phosphate to tissues that have high energy requirements, especially during development, growth and maturation. During spermatogenesis AcP activity is manifested in heterophagous lysosomes of Sertoli cells. This phagocytic function appears to be hormone-independent. We examined the expression pattern of AcP during the reproductive period of four species belonging to different vertebrate groups: Tilapia rendalli (Teleostei, Cichlidae), Dendropsophus minutus (Amphibia, Anura), Meriones unguiculatus (Mammalia, Rodentia), and Oryctolagus cuniculus (Mammalia, Lagomorpha). To demonstrate AcP activity, cryosections were processed for enzyme histochemistry by a modification of the method of Gömöri. AcP activity was similar in the testes of these four species. Testes of T. rendalli, D. minutus and M. unguiculatus showed an intense reaction in the Sertoli cell region. AcP activity was detected in the testes of D. minutus and O. cuniculus in seminiferous epithelium regions, where cells are found in more advanced stages of development. The seminiferous epithelium of all four species exhibited AcP activity, mainly in the cytoplasm of either Sertoli cells or germ cells. These findings reinforce the importance of AcP activity during the spermatogenesis process in vertebrates. © FUNPEC-RP.
Abstract:
The genus Corythopis belongs to the family Rhynchocyclidae and groups several taxa whose limits and validity are still in doubt, creating uncertainty about the actual number of diagnosable evolutionary units within the group. The genus has two recognized species: Corythopis delalandi, monotypic and distributed in the Atlantic Forest and Cerrado biomes; and C. torquatus (endemic to Amazonia), within which three forms are recognized, characterized and distinguished from one another by the pattern of brown tones on the head: C. t. torquatus Tschudi, 1844; C. t. anthoides (Pucheran, 1855); and C. t. sarayacuensis Chubb, 1918. The aim of this study is to reconstruct the temporal and spatial contexts of the diversification of the different evolutionary lineages of Corythopis, allowing inferences about the evolutionary history and the inter- and intraspecific limits of the group. Phylogeographic (ML and BI) and population analyses were performed based on a mitochondrial marker (ND2), together with a species tree based on two nuclear markers (MUSK and βf5) and one mitochondrial marker (ND2). According to the results, there are five main phylogroups in Corythopis, endemic to the following regions (areas of endemism): 1 - Xingu, Tapajós, and Rondônia (north, east of the Jiparaná River); 2 - Napo; 3 - Guiana; 4 - Inambari and Rondônia (south, west of the Jiparaná River); and 5 - Atlantic Forest. The phylogenetic and population analyses indicated the existence of two reciprocally monophyletic clades supported by high bootstrap values (>80%) and posterior probabilities (>0.95), in agreement with the current taxonomy of the genus Corythopis, which recognizes one biological species in Amazonia (C. torquatus) and another in the Atlantic Forest and Cerrado. The species tree agrees with the other analyses, showing that there are only two statistically well-supported, reciprocally monophyletic lineages in Corythopis: C. torquatus (F1, F2, F3, and F4) and C. delalandi (F5), reinforcing their status as independent biological species. The biogeographic pattern of separation among the different Amazonian phylogroups of Corythopis is quite different from that reported to date for other lineages of Amazonian birds, in which the initial separation events involved populations from the Brazilian and Guiana shields.
Abstract:
Traceability is a concept that arose from the need to monitor production processes; it is usually applied in sectors related to food production or to activities involving some kind of direct risk to people. Agribusiness in the cotton industry does not have a comprehensive infrastructure covering all stages of the processes involved in production. Mapping and defining the data needed to enable product traceability amounts to assigning responsibilities to everyone involved in production; aggregate data on cotton production are collected at specific, pre-defined stages, from the choice of the variety through to processing. The scope of this article specifically addresses the production of lint cotton. The paper presents a proposal based on service-oriented architecture (SOA) for data integration processes in the cotton industry; this proposal provides support for the implementation of platform-independent solutions.
Abstract:
This final degree project arises from the idea of meeting a social need for a service that does not exist in many contexts. "I-Found" is the concept of a virtual lost-and-found office, with the added feature that it manages not only objects but also people and animals (what this project calls an "OPA"). It also offers the possibility of putting people who have lost an "OPA", or had one stolen, in direct contact, without intermediaries, with people who have found one, and vice versa, in any possible setting. The main objectives of the project are learning the Android platform for mobile devices and developing an application based on the "I-Found" concept that is fully functional, experimental, and free (both libre and free of charge), restricted to the geographic and institutional scope of the Universidad de Las Palmas de Gran Canaria. For this reason, the developed app is called "I-Found@ULPGC", read in English as "I-Found at ULPGC". The app demonstrates how the "I-Found" concept can be adapted to multiple institutional or geographic contexts.
Abstract:
The dynamicity and heterogeneity that characterize pervasive environments raise new challenges in the design of mobile middleware. Pervasive environments are characterized by a significant degree of heterogeneity, variability, and dynamicity that conventional middleware solutions are not able to manage adequately. Originally designed for use in a relatively static context, such middleware systems tend to hide low-level details to provide applications with a transparent view of the underlying execution platform. In mobile environments, however, the context is extremely dynamic and cannot be managed by a priori assumptions. Novel middleware should therefore support mobile computing applications in the task of adapting their behavior to frequent changes in the execution context, that is, it should become context-aware. In particular, this thesis has identified the following key requirements for novel context-aware middleware that existing solutions do not yet fulfil. (i) Middleware solutions should support interoperability between possibly unknown entities by providing expressive representation models that allow interacting entities, their operating conditions, and the surrounding world, i.e., their context, to be described according to an unambiguous semantics. (ii) Middleware solutions should support distributed applications in the task of reconfiguring and adapting their behavior/results to ongoing context changes. (iii) Context-aware middleware support should be deployable on heterogeneous devices under variable operating conditions, such as different user needs, application requirements, available connectivity and device computational capabilities, as well as changing environmental conditions. Our main claim is that the adoption of semantic metadata to represent context information and context-dependent adaptation strategies makes it possible to build context-aware middleware suitable for all dynamically available portable devices. Semantic metadata provide powerful knowledge representation means to model even complex context information, and they allow automated reasoning to be performed to infer additional and/or more complex knowledge from the available context data. In addition, we suggest that, by adopting proper configuration and deployment strategies, semantic support features can be provided to differentiated users and devices according to their specific needs and current context. This thesis has investigated novel design guidelines and implementation options for semantic-based context-aware middleware solutions targeted at pervasive environments. These guidelines have been applied to different application areas within pervasive computing that would particularly benefit from the exploitation of context. Common to all applications is the key role of context in enabling mobile users to personalize applications based on their needs and current situation. The main contributions of this thesis are (i) the definition of a metadata model to represent and reason about context, (ii) the definition of a model for the design and development of context-aware middleware based on semantic metadata, (iii) the design of three novel middleware architectures and the development of a prototype implementation for each of these architectures, and (iv) the proposal of a viable approach to the portability issues raised by the adoption of semantic support services in pervasive applications.
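As a purely illustrative sketch of the ideas in this abstract — context captured as machine-readable metadata, reasoning that derives higher-level context from raw facts, and adaptation strategies keyed to that context — consider the following Python fragment. The triples, rule, and actions are hypothetical and are not drawn from the thesis or its prototypes.

```python
# Context as subject-predicate-object triples (a simplified stand-in for
# semantic metadata), a rule that infers higher-level context, and a
# context-dependent adaptation strategy. All names are hypothetical.

context = {
    ("user", "locatedIn", "meeting_room"),
    ("user", "calendarStatus", "in_meeting"),
    ("device", "battery", "low"),
}

def infer(triples):
    """Derive additional knowledge from the available context data."""
    derived = set(triples)
    if ("user", "locatedIn", "meeting_room") in triples and \
       ("user", "calendarStatus", "in_meeting") in triples:
        derived.add(("user", "situation", "busy"))
    return derived

def adapt(triples):
    """Select adaptation strategies based on the (possibly inferred) context."""
    actions = []
    if ("user", "situation", "busy") in triples:
        actions.append("route notifications to silent mode")
    if ("device", "battery", "low") in triples:
        actions.append("reduce context-sensing frequency")
    return actions

print(adapt(infer(context)))
```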
Abstract:
This work examines the design of a software system intended to address some of the problems related to data collection in the medical field. The importance of a particular technique for collecting clinical data, known in the literature as "patient-reported outcome", has long been recognized: it is the patients themselves who provide the information about the progress of a treatment or a clinical trial or, more simply, about their state of physical or mental health. This work shows how that is possible and, above all, how computing techniques and technologies can make a major contribution to the problems of this field. We show not only how convenient it is, in the clinical setting, to use automatic techniques for collecting, manipulating, aggregating, and sharing data, but also how a modern system that solves all of these problems can be built using existing technologies, techniques for modelling structured data, and an approach that, through a process of generalization, helps simplify the development of the software itself.
Towards model driven software development for Arduino platforms: a DSL and automatic code generation
Abstract:
The thesis explores the production of software systems for embedded systems using techniques from the world of Model Driven Software Development. The most important phase of the development is the definition of a meta-model that characterizes the fundamental concepts of embedded systems. This model attempts to abstract away from the particular platform in use and to identify the abstractions that characterize the world of embedded systems in general; the meta-model is therefore platform-independent. For automatic code generation a reference platform was adopted, namely Arduino. Arduino is an embedded system that is becoming increasingly popular because it combines a good level of performance with a relatively low price. The platform allows the development of special-purpose systems that use sensors and actuators of various kinds, easily connected to the available pins. The meta-model defined is an instance of the MOF meta-metamodel, formally defined by the OMG organization. This allows the developer to think of a system in terms of a model that is an instance of the defined meta-model. A meta-model can also be regarded as the abstract syntax of a language, so it can be defined by a set of EBNF rules. The technology used to define the meta-model is Xtext: a framework that allows EBNF rules to be written and that automatically generates the Ecore model associated with the defined meta-model. Ecore is the implementation of EMOF in the Eclipse environment. Xtext also generates plug-ins that provide an editor guided by the syntax defined in the meta-model. Automatic code generation was implemented using the Xtend2 language, which makes it possible to walk the Abstract Syntax Tree produced by translating the model into Ecore and to generate all the necessary code files. The generated code provides essentially the whole schematic part of the application, while leaving the development of the business logic to the application designer. After the definition of the meta-model of an embedded system, the level of abstraction was raised towards the definition of the part of the meta-model concerning the interaction of an embedded system with other systems, moving to a System view, understood as a set of interacting concentrated systems; this definition is made from the point of view of the concentrated system whose model is being defined. The thesis also introduces a case study which, although fairly simple, provides an example and a tutorial for developing applications with the meta-model, and which shows how the task of the application designer becomes rather simple and immediate, provided it is based on a good analysis of the problem. The results obtained were of good quality, and the meta-model is translated into code that works correctly.
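The core idea described above — a platform-independent model that development tools translate into platform-specific code, leaving only the business logic to the application designer — can be illustrated with a deliberately tiny sketch. The thesis itself uses Xtext and Xtend2; the Python fragment below is only a hypothetical stand-in showing the shape of such a generator, and the model format and generated Arduino sketch are invented for illustration.

```python
# Hypothetical platform-independent model of an embedded system:
# named sensors and actuators bound to pins, with the behavior left open.
model = {
    "name": "Thermostat",
    "sensors":   [{"name": "temp", "pin": 0}],    # analog inputs
    "actuators": [{"name": "heater", "pin": 7}],  # digital outputs
}

def generate_arduino(model):
    """Emit the 'schematic part' of an Arduino sketch from the model."""
    lines = [f"// Generated from platform-independent model '{model['name']}'"]
    lines.append("void setup() {")
    for act in model["actuators"]:
        lines.append(f"  pinMode({act['pin']}, OUTPUT);   // actuator '{act['name']}'")
    lines.append("}")
    lines.append("")
    lines.append("void loop() {")
    for sen in model["sensors"]:
        lines.append(f"  int {sen['name']} = analogRead({sen['pin']});")
    lines.append("  // business logic left to the application designer")
    lines.append("}")
    return "\n".join(lines)

print(generate_arduino(model))
```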