966 results for C (Programming Language)
Abstract:
For some years, the EPS at UVic has been developing an electronic device capable of capturing data about a bird nest. The e-niu project, which can be followed at www.tutara.info/e-niu, is currently in a testing phase, and work so far has focused mainly on the hardware. The main objective of this project is to create a web environment for managing the data obtained from the computerized bird nest (e-niu). The data arriving from the e-nius come in a text file, and the aim is for the user monitoring the nest to be able to run various analyses on these data. Besides viewing the results in several types of charts, the user should also be able to export the results as a table or in Excel format; the latter is particularly valuable because it makes the data easy to process further, such as selecting subsets of data, computing percentages, creating other kinds of charts, and so on. The other main objective is the experience of building a complete, near-professional web environment, with all the learning that entails, since a client-server architecture is applied: the programming language runs on the server, and when a user executes it, the system only sends the HTML presentation. The programming model used is the three-tier architecture. The data tier consists of a MySQL relational database where all the information is stored. The logic tier, handled by the PHP language, is where all data processing takes place. Finally, the presentation tier displays the data to the client in the browser using HTML templates.
Abstract:
Web application performance testing is an emerging and important field of software engineering. As web applications become more commonplace and complex, the need for performance testing will only increase. This paper discusses common concepts, practices and tools that lie at the heart of web application performance testing. A pragmatic, hands-on approach is assumed where applicable; real-life examples of test tooling, execution and analysis are presented right next to the underpinning theory. At the client-side, web application performance is primarily driven by the amount of data transmitted over the wire. At the server-side, selection of programming language and platform, implementation complexity and configuration are the primary contributors to web application performance. Web application performance testing is an activity that requires delicate coordination between project stakeholders, developers, system administrators and testers in order to produce reliable and useful results. Proper test definition, execution, reporting and repeatable test results are of utmost importance. Open-source performance analysis tools such as Apache JMeter, Firebug and YSlow can be used to realise effective web application performance tests. A sample case study using these tools is presented in this paper. The sample application was found to perform poorly even under the moderate load incurred by the sample tests.
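As a minimal illustration of the kind of measurement such tools automate, the Python sketch below (the target URL and load parameters are hypothetical, not taken from the paper) issues concurrent HTTP requests and reports latency statistics:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"   # hypothetical application under test
REQUESTS = 100
CONCURRENCY = 10                 # simultaneous virtual users

def timed_request(_):
    """Fetch the page once and return the elapsed wall-clock seconds."""
    start = time.perf_counter()
    with urlopen(URL) as response:
        response.read()          # include transfer time: bytes over the wire dominate client-side cost
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    samples = sorted(pool.map(timed_request, range(REQUESTS)))

print(f"mean {statistics.mean(samples) * 1000:.1f} ms, "
      f"p95 {samples[int(0.95 * len(samples))] * 1000:.1f} ms")
```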
Abstract:
This study proposes an activity to introduce scientific programming. In particular, the multidisciplinary concepts of scientific programming, quantum mechanics, and spectroscopy are presented in the study of the electronic spectrum of the I2 molecule. We use Python programming language and the IPython command shell, in particular, for their user friendliness and versatility.
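As a hedged sketch of the sort of computation this activity involves, the following Python evaluates vibronic transition energies for the I2 B←X absorption band from anharmonic-oscillator term values; the spectroscopic constants are approximate literature values quoted purely for illustration:

```python
# Vibronic transition energies for the I2 B <- X absorption band, from
# anharmonic-oscillator term values G(v) = we*(v + 1/2) - wexe*(v + 1/2)^2.
# Constants (cm^-1) are approximate literature values, for illustration only.
Te = 15769.0                     # B-state electronic term
we_B, wexe_B = 125.7, 0.764      # B (excited) state
we_X, wexe_X = 214.5, 0.614      # X (ground) state

def G(v, we, wexe):
    """Vibrational term value of level v, in cm^-1."""
    return we * (v + 0.5) - wexe * (v + 0.5) ** 2

# Cold absorption band: transitions from v'' = 0 to a progression of v' levels.
for v in range(0, 30, 5):
    nu = Te + G(v, we_B, wexe_B) - G(0, we_X, wexe_X)   # transition energy, cm^-1
    print(f"v' = {v:2d}: {nu:8.1f} cm^-1  ({1e7 / nu:5.1f} nm)")
```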
Abstract:
The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered to be an important approach for software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is by definition not reusable. Therefore, creating and maintaining a DSL requires additional resources that could be even larger than the savings associated with using it. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize or configure the framework for a particular DSL. There are different approaches for this. One approach is to use an application programming interface (API) and to extend the basic framework using an imperative programming language. An example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it puts the focus on specifying what the tool should be like instead of writing a program specifying how the tool achieves this functionality. In this thesis we explore this second approach. We use graph transformation as the basic approach to customize a domain-specific modeling (DSM) tool framework. The contributions of this thesis include a comparison of different approaches for defining, representing and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework. These include an approach to graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.
Warning system based on theoretical-experimental study of dispersion of soluble pollutants in rivers
Abstract:
Information about the transport and dispersion capacity of soluble pollutants in natural streams is important in the management of water resources, especially in planning preventive measures to minimize the problems that accidental or intentional releases cause to public health and to economic activities that depend on the use of water. Given this importance, this study aimed to develop a warning system for rivers, based on experimental tracer techniques and on analytical equations for the one-dimensional transport of conservative soluble pollutants, to support decision-making in the management of water resources. The system, developed in the Java programming language with a MySQL database, can predict the travel time of pollutant clouds from a release point and graphically display the temporal distribution of concentrations as the cloud passes a given location downstream of that point.
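The abstract does not reproduce the analytical equations; the usual closed form for an instantaneous release of a conservative pollutant is the one-dimensional advection-dispersion solution sketched below in Python (the original system is in Java; all parameter values here are hypothetical), which yields both the cloud travel time and the concentration at a downstream location:

```python
import math

def concentration(x, t, M=10.0, A=25.0, U=0.4, D=15.0):
    """C(x, t) = M / (A * sqrt(4*pi*D*t)) * exp(-(x - U*t)^2 / (4*D*t))
    for an instantaneous release of mass M (kg) of a conservative pollutant.
    A: cross-sectional area (m^2), U: mean velocity (m/s),
    D: longitudinal dispersion coefficient (m^2/s). In a real system,
    U and D would be calibrated from tracer experiments."""
    if t <= 0:
        return 0.0
    return (M / (A * math.sqrt(4 * math.pi * D * t))
            * math.exp(-((x - U * t) ** 2) / (4 * D * t)))

x = 2000.0                 # monitoring point 2 km downstream of the release
t_peak = x / 0.4           # travel time of the cloud centroid, in seconds
print(f"peak arrives after ~{t_peak / 3600:.2f} h, "
      f"C = {concentration(x, t_peak) * 1000:.3f} mg/L")
```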
Abstract:
This work presents software, developed in the Delphi programming language, to compute a reservoir's annual regulated active storage based on the sequent-peak algorithm. Mathematical models used for that purpose generally require extended hydrological series, and the analysis of those series is usually performed with spreadsheets or graphical representations; this motivated the development of a program for calculating reservoir active capacity. An example calculation is shown using 30 years (1977 to 2009) of historical monthly mean flow data from the Corrente River, in the São Francisco River Basin, Brazil. As an additional tool, an interface was developed to help manage water resources, manipulate data and point out information of interest to the user. Moreover, with this interface, irrigation districts where water consumption is higher can be analyzed as a function of specific seasonal water demand situations. A practical application shows that the program performs the calculation originally proposed. It was designed to keep information organized and retrievable at any time, and to simulate seasonal water demands throughout the year, contributing to studies concerning reservoir projects. With its functionality, this program is an important tool for decision making in water resources management.
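The sequent-peak algorithm itself is compact: the required capacity is the largest cumulative shortfall of inflow against draft over the record. A minimal Python sketch follows (the original software is in Delphi; the flow record below is a toy series, not the Corrente River data):

```python
def sequent_peak(inflows, draft):
    """Required active storage by the sequent-peak algorithm.
    inflows: monthly inflow volumes; draft: constant monthly demand
    (same volume units). K_t = max(0, K_{t-1} + draft - inflow_t);
    the required capacity is the maximum K_t over the record."""
    K, capacity = 0.0, 0.0
    # Running the series twice handles a critical period that wraps
    # around the end of the historical record.
    for q in list(inflows) * 2:
        K = max(0.0, K + draft - q)
        capacity = max(capacity, K)
    return capacity

# Toy 12-month record (hm^3/month), regulated to 60% of the mean flow.
flows = [95, 120, 80, 40, 25, 15, 10, 12, 30, 55, 70, 88]
draft = 0.6 * sum(flows) / len(flows)
print(f"required active storage: {sequent_peak(flows, draft):.1f} hm^3")
```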
Abstract:
Sugarcane has a significant role in the Brazilian agribusiness economy. Cane harvesting is considered one of the most important operations in the process, since it must supply the raw material demanded by the sugar mill with the required quality and at a competitive cost. The objective of this work is to analyze, in a systemic way, how different variables influence the economic and operational performance of the mechanized sugarcane harvesting process, for the purpose of machinery sizing. For this purpose, a model called "ColheCana" was developed in a spreadsheet and in a programming language. The results showed that field efficiency and the harvesters' initial value are the variables with the greatest impact on cost, and that there is a maximum area that one machine can serve, at which the cost is minimal.
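ColheCana's internal equations are not given in the abstract; the Python sketch below illustrates, with entirely hypothetical figures, the kind of relationship such a model evaluates: field efficiency sets the effective field capacity, and the harvester's initial value enters as a fixed ownership cost spread over the annual harvested area, producing a cost per tonne that falls with area until available harvesting hours cap how much one machine can serve:

```python
def cost_per_tonne(area_ha, yield_t_ha=80.0, purchase_price=1.2e6,
                   life_years=8, salvage_frac=0.2, field_eff=0.70,
                   speed_kmh=5.0, width_m=1.5, operating_cost_h=250.0):
    """Simplified annual harvesting-cost model; every figure is hypothetical.
    Effective field capacity (ha/h) = speed * width * efficiency / 10."""
    capacity_ha_h = speed_kmh * width_m * field_eff / 10.0
    hours = area_ha / capacity_ha_h
    fixed_annual = purchase_price * (1 - salvage_frac) / life_years  # straight-line depreciation
    total = fixed_annual + operating_cost_h * hours
    return total / (area_ha * yield_t_ha)

# Spreading the fixed cost over more hectares lowers the unit cost, but the
# available harvesting hours bound the area one machine can actually serve.
for area in (500, 1000, 2000):
    print(f"{area:5d} ha/yr -> {cost_per_tonne(area):5.2f} $/t")
```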
Abstract:
Pumping systems account for over 20% of all electricity consumption in European industry. Optimization and correct design of such systems is therefore important, and there is a reasonable amount of unrealized energy saving potential in old pumping systems. The energy efficiency, and therefore also the energy consumption, of a pumping system depends heavily on the correct dimensioning and selection of devices. In this work, a graphical optimization tool for pumping systems is developed in the Matlab programming language. The tool selects the optimal pump, electric motor and frequency converter for an existing pumping process and calculates the life cycle costs of the whole system. The tool can be used as an aid when choosing the machinery and to analyze the energy consumption of existing systems. Results given by the tool are compared to the results of laboratory tests. The selection of pump and motor works reasonably well, but the frequency converter selection still needs development.
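A core piece of such a tool is the life cycle cost calculation. As a rough Python sketch (the original tool is in Matlab; all prices, durations and efficiencies below are hypothetical), purchase cost is combined with discounted annual energy and maintenance costs:

```python
def life_cycle_cost(purchase_eur, shaft_kw, hours_per_year, efficiency,
                    energy_price=0.12, years=15, interest=0.05,
                    maintenance_eur_year=500.0):
    """Present-value life cycle cost of a pumping system.
    Annual energy cost = shaft power / efficiency * hours * price per kWh;
    recurring costs are discounted year by year."""
    annual_energy = shaft_kw / efficiency * hours_per_year * energy_price
    lcc = purchase_eur
    for year in range(1, years + 1):
        lcc += (annual_energy + maintenance_eur_year) / (1 + interest) ** year
    return lcc

# Energy cost dominates over a 15-year horizon, so the dearer but more
# efficient machinery combination usually comes out cheaper overall.
cheap = life_cycle_cost(8000.0, 15.0, 4000, efficiency=0.60)
efficient = life_cycle_cost(12000.0, 15.0, 4000, efficiency=0.75)
print(f"cheap: {cheap:,.0f} EUR   efficient: {efficient:,.0f} EUR")
```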
Abstract:
The objective of this thesis was to examine the potential of multi-axis solutions in packaging machines produced in Europe. In this study, a multi-axis solution is defined as a construction that uses a common DC bus power supply for the different amplifiers driving the axes, with the intelligence centralized in one unit. The cost structure of a packaging machine was obtained from an automation study, which divided the machines according to automation categories. The automation categories were then further divided into sub-components by evaluating the ratio of multi-axis solutions to other automation components in packaging machines. A global motion control study was used for further information. With the help of this ratio, the potential of multi-axis solutions in each country and packaging machine sector was estimated. In addition, a questionnaire was sent to five companies to gather information about the present situation and possible trends in packaging machinery. The greatest potential markets are in Germany and Italy, which are also the largest producers of packaging machinery in Europe. The greatest growth in the next few years will be seen in Turkey, where the annual growth rate equals the general machinery production rate in Asia. The greatest market potential among the Nordic countries is found in Sweden, in 35th position on the list. According to the interviews, motion control products in packaging machines will retain their current power levels, as well as the number of axes, in the future. Integrated machine safety features together with a universal programming language are the desired attributes of the future. Unlike in industry generally, energy saving objectives are, and will remain, insignificant in the packaging industry.
Abstract:
Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfill these requirements, and implementing these designs using a programming language into a working system. As a consequence, the development of high quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating in designs may appear in the final system. It is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of designs can be done manually, but for large systems mechanisms that analyze designs automatically become necessary. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first translates UML-based designs into a language understandable by reasoners, in the form of logic facts, and then uses the logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations in the form of a tool that can be used with any standard-compliant UML modeling tool. Moreover, we validate the proposed approach by automatically checking hundreds of UML-based designs, consisting of thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but is fully automatic and does not require any expertise in logic languages from the user. We exemplify the proposed approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
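As a toy illustration of the idea of translating a design into logic facts and inferring consequences, the Python sketch below encodes UML generalization relations as facts (the fact schema is invented here; the thesis targets full UML and real logic reasoners) and derives a consistency violation by computing a transitive closure:

```python
# Generalization (subclass, superclass) facts extracted from a toy design.
generalization = [
    ("Car", "Vehicle"),
    ("Vehicle", "Asset"),
    ("Asset", "Car"),    # deliberately inconsistent: closes an inheritance cycle
]

def ancestors(cls, facts):
    """Transitive closure of the generalization relation (forward chaining)."""
    found, frontier = set(), {cls}
    while frontier:
        frontier = {sup for (sub, sup) in facts if sub in frontier} - found
        found |= frontier
    return found

for cls in sorted({sub for sub, _ in generalization}):
    if cls in ancestors(cls, generalization):
        print(f"invalid design: {cls} participates in an inheritance cycle")
```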
Abstract:
This thesis introduced Android as a hardware and application platform and described how the user interface of an Android game application can be kept consistent across different display devices by means of scaling factors and anchoring. The second part of the thesis covered simple ways to improve the performance of game applications. Of these, a low-resolution drawing buffer and the hiding of objects that are out of view were selected for closer measurement. In the measurements, the selected methods improved the performance of the demo application considerably. The thesis was restricted to Android programming in the Java language without external libraries, so that its results can easily be applied in as many different use cases as possible.
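As an illustration of the second optimization, the Python sketch below (names and numbers are hypothetical; the thesis implements this in Android Java) skips drawing sprites whose bounding boxes fall entirely outside the screen:

```python
from dataclasses import dataclass

@dataclass
class Sprite:
    x: float
    y: float
    w: float
    h: float

def on_screen(s: Sprite, screen_w: float, screen_h: float) -> bool:
    """Axis-aligned overlap test between the sprite bounds and the screen."""
    return not (s.x + s.w < 0 or s.x > screen_w or
                s.y + s.h < 0 or s.y > screen_h)

world = [Sprite(-50, 10, 32, 32), Sprite(100, 100, 32, 32), Sprite(900, 10, 32, 32)]
visible = [s for s in world if on_screen(s, 800, 480)]
print(f"drawing {len(visible)} of {len(world)} sprites")   # -> drawing 1 of 3 sprites
```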
Abstract:
This thesis reports investigations on applying the Service Oriented Architecture (SOA) approach to the engineering of multi-platform and multi-device user interfaces. This study has three goals: (1) analyze present frameworks for developing multi-platform and multi-device applications, (2) extend the principles of SOA into a multi-platform and multi-device architectural framework (SOA-MDUI), and (3) apply and validate the proposed framework in the context of a specific application. One of the problems addressed in this ongoing research is the large number of combinations of possible implementations of applications on different types of devices. Usually it is necessary to take into account the operating system (OS), the user interface (UI) including its appearance, the programming language (PL) and the architectural style (AS). Our proposed approach extends the principles of SOA using pattern-oriented design and model-driven engineering. Synthesizing present work in these domains, this research built and tested an engineering framework linking Model-driven Architecture (MDA) and SOA approaches to the development of UIs. This study advances the general understanding of engineering, deploying and managing multi-platform and multi-device user interfaces as a service.
Abstract:
The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and works as a pipeline with five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or run as a local version of the web service. All scripts in PMmA were developed in the Perl programming language, and the statistical analysis functions were implemented in the R statistical language; consequently, the package is platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, a performance superior to other methods of analysis. The pipeline software has been applied to public macroarray data of 1536 expressed sequence tags from sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset: fourteen genes were up-regulated, two had variable expression and the other fourteen were down-regulated in the treatments. These new findings were certainly a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA.
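The closing statistical point can be illustrated with a small Python sketch (synthetic data and invented parameters, not PMmA's actual method): a gene's log-ratio is judged against the spread of genes with similar average intensity, rather than against a single global cutoff:

```python
import random
import statistics

random.seed(1)
genes = []
for i in range(1536):
    A = random.uniform(6, 16)        # average log2 signal intensity
    noise = 1.5 / (A - 4)            # dim spots are noisier than bright ones
    genes.append([f"EST{i:04d}", A, random.gauss(0.0, noise)])  # [name, A, M]
for g in random.sample(genes, 5):    # spike in a few truly regulated genes
    g[2] += random.choice([-2.0, 2.0])

genes.sort(key=lambda g: g[1])       # order genes by average intensity
WINDOW = 101                         # neighbours used to estimate local spread
for idx, (name, A, M) in enumerate(genes):
    lo = max(0, min(idx - WINDOW // 2, len(genes) - WINDOW))
    window = [m for _, _, m in genes[lo:lo + WINDOW]]
    z = (M - statistics.mean(window)) / statistics.stdev(window)
    if abs(z) > 3.0:                 # differentially expressed at this intensity
        print(f"{name}: M = {M:+.2f} at A = {A:.1f} (local z = {z:+.1f})")
```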
Abstract:
This thesis introduces an extension of Chomsky's context-free grammars equipped with operators for referring to left and right contexts of strings. The new model is called grammar with contexts. The semantics of these grammars are given in two equivalent ways: by language equations and by logical deduction, where a grammar is understood as a logic for the recursive definition of syntax. The motivation for grammars with contexts comes from an extensive example that completely defines the syntax and static semantics of a simple typed programming language. Grammars with contexts maintain the most important practical properties of context-free grammars, including a variant of the Chomsky normal form. For grammars with one-sided contexts (that is, either left or right), there is a cubic-time tabular parsing algorithm, applicable to an arbitrary grammar. The time complexity of this algorithm can be improved to quadratic, provided that the grammar is unambiguous, that is, it only allows one parse for every string it defines. A tabular parsing algorithm for grammars with two-sided contexts has fourth-power time complexity. For these grammars there is a recognition algorithm that uses a linear amount of space. For certain subclasses of grammars with contexts there are low-degree polynomial parsing algorithms. One of them is an extension of the classical recursive descent for context-free grammars; the version for grammars with contexts still works in linear time like its prototype. Another algorithm, with time complexity varying from linear to cubic depending on the particular grammar, adapts deterministic LR parsing to the new model. If all context operators in a grammar define regular languages, then such a grammar can be transformed into an equivalent grammar without context operators at all. This allows one to represent the syntax of languages more succinctly by utilizing context specifications. Linear grammars with contexts turned out to be non-trivial already over a one-letter alphabet; this fact leads to some undecidability results for this family of grammars.
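For orientation, the cubic-time tabular algorithm mentioned above generalizes the classical CYK-style tabulation for context-free grammars in Chomsky normal form; a minimal Python sketch of that context-free prototype (the grammar below, for a^n b^n, is illustrative only):

```python
# CNF grammar for { a^n b^n : n >= 1 }:
#   S -> A T | A B,  T -> S B,  A -> 'a',  B -> 'b'
unit = {"a": {"A"}, "b": {"B"}}
binary = {("A", "T"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"T"}}

def cyk(w):
    """T[i][j] holds the nonterminals deriving w[i:j]; O(n^3) time, O(n^2) space."""
    n = len(w)
    T = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, c in enumerate(w):
        T[i][i + 1] = set(unit.get(c, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point of the substring
                for left in T[i][k]:
                    for right in T[k][j]:
                        T[i][j] |= binary.get((left, right), set())
    return "S" in T[0][n]

for w in ("ab", "aabb", "aab"):
    print(w, "->", cyk(w))   # ab -> True, aabb -> True, aab -> False
```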
Abstract:
With the growth of new technologies, using online tools has become part of everyday life. This has a strong impact on researchers, as the data obtained from various experiments needs to be analyzed, and knowledge of programming has become mandatory even for pure biologists. Hence, VTT created a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, employing R functions to produce the results. REX gives biologists an interactive application in which they can directly enter values and run the required analysis with a single click; the program processes the given data in the background and prints results rapidly. With the growth of data and load on the server, the interface developed problems concerning execution time, a poor GUI, data storage, security, a minimally interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved to make REX a better application for the future. The old REX was developed using Python and Django; the new version is implemented with Vaadin. Vaadin is a Java framework for developing web applications whose programming model is essentially Java, extended with rich components; it provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionality was selected, including IST bulk plotting and image segmentation, and implemented using Vaadin. In total, I wrote 662 lines of code, with Vaadin as the front-end handler while the R language was used for back-end data retrieval, computation and plotting. The application is optimized to allow further functionality to be migrated with ease from the old REX. Future development is focused on including high-throughput screening functions along with gene expression database handling.