947 results for System software
Abstract:
This document is the report of how the Vacation Management System software was built using Java technology, specifically the Java EE 6 standard. It is a web application for managing the vacations of the employees of a company comprising a head office and several branches distributed throughout Catalonia.
Abstract:
This project introduces GNSS-SDR, an open-source Global Navigation Satellite System software-defined receiver. The lack of reconfigurability of current commercial off-the-shelf receivers and the advent of new radionavigation signals and systems make software receivers an appealing approach for designing new architectures and signal processing algorithms. With the aim of exploring the full potential of this forthcoming scenario, with a plurality of new signal structures and frequency bands available for positioning, this paper describes the software architecture design and provides details about its implementation, targeting a multiband, multisystem GNSS receiver. The result is a testbed for GNSS signal processing that allows any kind of customization, including interchangeability of signal sources, signal processing algorithms, interoperability with other systems, output formats, and interfaces to all the intermediate signals, parameters, and variables. The source code, released under the GNU General Public License (GPL), secures practical usability, inspection, and continuous improvement by the research community, enabling discussion based on tangible code and the analysis of results obtained with real signals. The source code is complemented by a development ecosystem consisting of a website (http://gnss-sdr.org), a revision control system, instructions for users and developers, and communication tools. The project shows in detail the design of the initial blocks of the Signal Processing Plane of the receiver: the signal conditioner, the acquisition block, and the receiver channel. The project also extends the functionality of the acquisition and tracking modules of the GNSS-SDR receiver to track the new Galileo E1 signals available.
Each section provides a theoretical analysis, implementation details of each block and subsequent testing to confirm the calculations with both synthetically generated signals and with real signals from satellites in space.
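The acquisition block mentioned above searches for the code phase at which a local replica best aligns with the received signal. A minimal pure-Python sketch of that code-phase search idea (this is not GNSS-SDR code; the random ±1 sequence below is a stand-in for a real PRN code, and real receivers also search over Doppler and use FFT-based correlation):

```python
# Sketch of correlation-based acquisition: circularly correlate the received
# samples against every shift of a local replica and pick the strongest peak.
import random

def circular_correlation(received, replica):
    """Correlate `received` with every circular shift of `replica`."""
    n = len(replica)
    return [sum(received[(i + k) % n] * replica[i] for i in range(n))
            for k in range(n)]

def acquire(received, replica, threshold):
    """Return the detected code phase, or None if no peak clears `threshold`."""
    corr = circular_correlation(received, replica)
    peak = max(corr)
    return corr.index(peak) if peak >= threshold else None

random.seed(1)
code = [random.choice((-1, 1)) for _ in range(64)]   # stand-in PRN code
delay = 17
rx = code[-delay:] + code[:-delay]                   # received = delayed code
print(acquire(rx, code, threshold=0.5 * len(code)))  # → 17
```

The peak of the correlation recovers the 17-sample delay; a peak below the threshold would mean the satellite is not present in the signal.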
Abstract:
A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3.
The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97.9 percent for flood region 2, and 92.4 to 96.0 percent for flood region 3. The regression equations are applicable only to stream sites in Iowa with flows not significantly affected by regulation, diversion, channelization, backwater, or urbanization and with basin characteristics within the range of those used to develop the equations. These regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the eight selected statistics; in addition, the Web-based tool also provides 90-percent prediction intervals and the measured basin characteristics for the ungaged sites. StreamStats also allows users to click on any streamgage in Iowa to obtain the estimates computed for these eight selected statistics.
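The frequency-fitting step above (a Pearson Type III distribution on the logarithms of annual peaks) can be sketched with ordinary method-of-moments estimates. Note this is a toy: the report itself uses the more robust expected moments algorithm with weighted regional skew, low-outlier screening, and historic information, all of which this sketch omits, and the peak-discharge numbers below are made up:

```python
# Hedged sketch: fit log10 annual peaks by method of moments and estimate the
# discharge with a given annual exceedance probability, using the classic
# Wilson-Hilferty frequency-factor approximation for Pearson Type III.
import math
import statistics

def log_pearson3_quantile(peaks, exceed_prob):
    """Discharge with the given annual exceedance probability (sketch only)."""
    logs = [math.log10(q) for q in peaks]
    m = statistics.mean(logs)
    s = statistics.stdev(logs)
    n = len(logs)
    # station skew of the log discharges (no regional weighting here)
    g = (n * sum((x - m) ** 3 for x in logs)) / ((n - 1) * (n - 2) * s ** 3)
    z = statistics.NormalDist().inv_cdf(1.0 - exceed_prob)
    # Wilson-Hilferty frequency factor for Pearson Type III
    k = (2.0 / g) * ((1.0 + g * z / 6.0 - g ** 2 / 36.0) ** 3 - 1.0)
    return 10.0 ** (m + k * s)

# hypothetical annual peak discharges (cubic feet per second)
peaks = [3200, 4100, 2800, 5600, 3900, 7200, 3100, 4800, 2600, 6100]
q100 = log_pearson3_quantile(peaks, exceed_prob=0.01)  # "100-year flood"
```

The 1-percent exceedance estimate lands well above the largest observed peak, which is the expected behavior when extrapolating a fitted frequency curve beyond a short record.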
Abstract:
The purpose of this work was to investigate how the management of views can be made easier. View management was approached both from the standpoint of managing information important to the organization and from that of administering the configuration management system ClearCase. The methods used were a literature review, modeling, and a constructive method. The work begins with an overview of software configuration management in general and of the terminology related to workspace management. During the work, the management process for ClearCase dynamic views was modeled, and on that basis an application was built to ease view management. The thesis describes how the application evolved from the model into a working tool and examines how it is useful in practice. Finally, the future of view management and how it could be developed further are discussed. The result of the work is a text-based application for view management that eases the related maintenance tasks and the interaction with ClearCase users. The work also produced ideas on how view management could be developed in the future.
Abstract:
The objective of this paper was to evaluate the potential of neural networks (NN) as an alternative to the basic epidemiological approach for describing epidemics of coffee rust. The NN was developed from the intensities of coffee (Coffea arabica) rust along with climatic variables collected in Lavras-MG between 13 February 1998 and 20 April 2001. The NN was built with climatic variables selected either by stepwise regression analysis or by the Braincel® system, software for NN building. Fifty-nine networks and 26 regression models were tested. The best models were selected based on small values of the mean square deviation (MSD) and of the mean prediction error (MPE); for the regression models, the highest coefficients of determination (R²) were used. The best neural network model had an MSD of 4.36 and an MPE of 2.43%. This model used minimum temperature, production, relative air humidity, and irradiance 30 days before the disease evaluation. The best regression model was developed from the 29 climatic variables selected in the network; its summary statistics were MPE=6.58%, MSD=4.36, and R²=0.80. Neural networks elaborated from a time series were also evaluated to describe the epidemic. Using the incidence of coffee rust in the four previous fortnights resulted in a model with MPE=4.72% and MSD=3.95.
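The two selection criteria named above can be written down compactly. This is a sketch under the assumption that MSD is the mean squared deviation between observed and predicted intensities and MPE is a mean absolute percentage error; the paper's exact definitions may differ, and the incidence numbers below are made up:

```python
# Assumed forms of the two model-selection criteria (MSD and MPE) used to
# compare the neural network and regression models.
def msd(observed, predicted):
    """Mean squared deviation."""
    return sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)

def mpe(observed, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(o - p) / o
                       for o, p in zip(observed, predicted)) / len(observed)

obs = [12.0, 20.0, 35.0, 50.0]   # hypothetical rust intensities (%)
pred = [10.0, 22.0, 33.0, 52.0]  # hypothetical model predictions
print(round(msd(obs, pred), 2), round(mpe(obs, pred), 2))  # → 4.0 9.1
```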
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly augment software quality and is still a challenging field.
This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software.
A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy are identified based on the stipulated rules for the target processor. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation and contributes to improved model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices toward the correct use of difficult microcontroller features when developing embedded systems.
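The core of the redundant bank-switch detection above is active-bank state tracking. A minimal sketch of that idea (the tuple-based instruction format and the straight-line, single-path assumption are illustrative; the dissertation's tool works on real PIC16F87X machine code and all execution paths of the control flow graph):

```python
# Sketch: walk a straight-line instruction sequence, remember which memory
# bank is currently selected, and flag any bank-select instruction that
# re-selects the already active bank as redundant.
def redundant_bank_switches(instructions):
    """Return indices of bank-select instructions that do not change the bank."""
    redundant = []
    active = None                       # bank state unknown at program entry
    for i, (op, arg) in enumerate(instructions):
        if op == "BANKSEL":
            if arg == active:
                redundant.append(i)     # re-selects the active bank
            active = arg
    return redundant

program = [
    ("BANKSEL", 0), ("MOVWF", "PORTA"),
    ("BANKSEL", 0),                     # redundant: bank 0 already active
    ("MOVWF", "TRISA"),
    ("BANKSEL", 1), ("MOVWF", "TRISB"),
]
print(redundant_bank_switches(program))  # → [2]
```

Extending this per-path scan over every path of the control flow graph, with a relation matrix recording which variables share a bank, is what allows the compiler assist to minimize the inserted bank-switching instructions.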
Abstract:
3D visualization offers a set of increasingly demanded advantages and features, so it is worthwhile to incorporate it into GIS applications. The proposed system integrates the 2D view typical of a GIS with a 3D view, guaranteeing interaction between them and resulting in an integral GIS solution. Digital Terrain Models (DTM) can be loaded, either directly or through OGC-CSW services, for projecting the 2D elements, and 3D models can be loaded as well. The system also provides tools for extrusion and for the automatic generation of volumes using parameters already present in the 2D information. Generating buildings from their height attribute and elaborating three-dimensional networks from the depth of infrastructure elements are some practical cases of interest. Likewise, not only querying and visualization but also 3D editing is supported, which is an important advantage over other 3D systems. LocalGIS, a free-software Territorial Information System applied to municipal management, is the GIS into which the prototype was incorporated. All the advantages and features of 3D can therefore be applied to the municipal management that LocalGIS performs. This technology offers a very broad and promising field of applications.
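The footprint-extrusion case mentioned above reduces to lifting a 2D polygon by a height attribute stored in the 2D layer. A small sketch of that idea (the function names and the bare-tuple data model are illustrative, not LocalGIS code):

```python
# Sketch: turn a 2D building footprint plus its height attribute into the
# bottom and top vertex rings of a 3D prism sitting on the terrain.
def extrude(footprint, base_z, height):
    """Return (bottom_ring, top_ring) vertex lists of the extruded prism."""
    bottom = [(x, y, base_z) for x, y in footprint]
    top = [(x, y, base_z + height) for x, y in footprint]
    return bottom, top

building = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]
bottom, top = extrude(building, base_z=120.0, height=9.0)  # terrain at 120 m
print(top[0])  # → (0.0, 0.0, 129.0)
```

The same pattern, run with a negative height from the surface elevation, yields the buried network case driven by infrastructure depth.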
Abstract:
For users of climate services, the ability to quickly determine the datasets that best fit one's needs would be invaluable. The volume, variety and complexity of climate data make this judgment difficult. The ambition of CHARMe ("Characterization of metadata to enable high-quality climate services") is to give a wider interdisciplinary community access to a range of supporting information, such as journal articles, technical reports or feedback on previous applications of the data. The capture and discovery of this "commentary" information, often created by data users rather than data providers, and currently not linked to the data themselves, has not been significantly addressed previously. CHARMe applies the principles of Linked Data and open web standards to associate, record, search and publish user-derived annotations in a way that can be read both by users and automated systems. Tools have been developed within the CHARMe project that enable annotation capability for data delivery systems already in wide use for discovering climate data. In addition, the project has developed advanced tools for exploring data and commentary in innovative ways, including an interactive data explorer and comparator ("CHARMe Maps") and a tool for correlating climate time series with external "significant events" (e.g. instrument failures or large volcanic eruptions) that affect the data quality. Although the project focuses on climate science, the concepts are general and could be applied to other fields. All CHARMe system software is open-source, released under a liberal licence, permitting future projects to re-use the source code as they wish.
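The Linked Data approach described above can be illustrated with a commentary annotation in the spirit of the W3C Open Annotation vocabulary: a journal article becomes the annotation body, linked to the dataset it comments on as the target. All identifiers below are made up for illustration; this is not a real CHARMe record:

```python
# Sketch of a user-derived commentary annotation as JSON-LD, linking a
# (hypothetical) journal article to a (hypothetical) climate dataset so that
# both people and automated systems can follow the link.
import json

annotation = {
    "@context": "http://www.w3.org/ns/oa.jsonld",
    "@type": "oa:Annotation",
    # body: the commentary resource (e.g. a paper applying the dataset)
    "oa:hasBody": {"@id": "http://example.org/articles/sst-validation-study"},
    # target: the dataset being commented on
    "oa:hasTarget": {"@id": "http://example.org/datasets/sst-monthly"},
    "oa:motivatedBy": "oa:linking",
}
print(json.dumps(annotation, indent=2))
```

Because the record is plain JSON-LD, it can be harvested, indexed, and traversed by standard Linked Data tooling rather than requiring a bespoke API.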
Abstract:
The objective of this work is the development of an intelligent system for detecting workpiece burn in the surface grinding process, using a multilayer perceptron neural network trained to generalize the process and, consequently, obtain the burn threshold. In general, the occurrence of burn in the grinding process can be detected through the DPO and FKS parameters; however, these parameters are not efficient under the machining conditions used in this work. The acoustic emission and electric power signals of the grinding wheel drive motor are the input variables, and the output variable is the occurrence of burn. In the experimental work, one type of steel (quenched ABNT 1045) and one type of grinding wheel, designated TARGA, model ART 3TG80.3 NVHB, were employed.
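A much-simplified stand-in for the network described above (a single perceptron instead of a multilayer one, with synthetic numbers in place of measured signals) shows the shape of the classification problem: learn a threshold on the two inputs, acoustic emission and motor power, that separates burn from no-burn passes:

```python
# Sketch: classic perceptron learning rule on two features (AE, power).
# All sample values are synthetic; real burn detection uses measured signals
# and a multilayer network.
def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Return learned weights (w_ae, w_power, bias)."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (ae, power), y in zip(samples, labels):
            pred = 1 if w[0] * ae + w[1] * power + w[2] > 0 else 0
            err = y - pred                 # perceptron update on mistakes only
            w[0] += lr * err * ae
            w[1] += lr * err * power
            w[2] += lr * err
    return w

# synthetic passes: low AE/power = no burn (0), high AE/power = burn (1)
X = [(0.2, 0.3), (0.3, 0.2), (0.8, 0.9), (0.9, 0.8)]
y = [0, 0, 1, 1]
w = train_perceptron(X, y)
classify = lambda ae, p: 1 if w[0] * ae + w[1] * p + w[2] > 0 else 0
print(classify(0.25, 0.25), classify(0.85, 0.85))  # → 0 1
```

In the thesis setting the hidden layer is what lets the network capture the nonlinear burn threshold that the DPO and FKS parameters miss under these machining conditions.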
Abstract:
The aim of this work is to provide a text to support those interested in the main amortization systems of the current market: the Constant Amortization System (SAC) and the French System, also known as the Price Table. We use spreadsheets to facilitate the calculations, which involve exponential and decimal manipulation. Based on [12], we show that the SAC installments become smaller than those of the French System after a certain period. Beyond that, we present a comparison showing that the total amount paid under the SAC is less than under the French System.
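The comparison above can be reproduced directly in code instead of a spreadsheet. The loan figures below are an arbitrary example (R$100,000 at 1% per month over 120 months), not taken from the text:

```python
# Build both amortization schedules and check (i) the period after which the
# SAC installment drops below the fixed Price-table installment and (ii) that
# the SAC pays less in total.
def sac_installments(principal, rate, n):
    """Constant Amortization System: fixed amortization, shrinking interest."""
    amort = principal / n
    balance = principal
    out = []
    for _ in range(n):
        out.append(amort + balance * rate)  # installment = amortization + interest
        balance -= amort
    return out

def price_installments(principal, rate, n):
    """French system (Price table): one fixed installment throughout."""
    pmt = principal * rate / (1 - (1 + rate) ** (-n))
    return [pmt] * n

sac = sac_installments(100_000, 0.01, 120)      # 1% per month, 120 months
price = price_installments(100_000, 0.01, 120)
crossover = next(k for k, p in enumerate(sac, 1) if p < price[0])
print(crossover, sum(sac) < sum(price))  # → 49 True
```

For this loan the SAC installment falls below the fixed Price installment from month 49 onward, and the SAC's total outlay is lower, matching the claims in the text.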
Abstract:
Graduate Program in Motricity Sciences (Pós-graduação em Ciências da Motricidade) - IBRC
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The objective of this work was to evaluate the genetic variability of ostrich (Struthio camelus) populations by means of RAPD (Random Amplified Polymorphic DNA) markers. A total of 121 samples were collected from individuals originating in the states of Pará, Maranhão, Tocantins, and Minas Gerais. Genomic DNA was extracted from whole blood. Primer screening allowed the selection of 15 out of the 60 analyzed, which were amplified by PCR. The PCR products were visualized on a 1.5% agarose gel, and a binary matrix was generated for the amplified fragments, with presence (1) and absence (0) of a band. To verify the optimal number of polymorphic bands, a bootstrap analysis was performed with the GQMOL software. Data were analyzed with the NTSYS-pc program (Numerical Taxonomy and Multivariate Analysis System), version 2.02. Similarity between samples was analyzed through the Jaccard coefficient. A cophenetic distance matrix was generated using the Sahncof module of the NTSYS 2.2 program. Genetic structuring was assessed through analysis of molecular variance (AMOVA), run in the Arlequin 2.0 software (Schneider et al., 2000). The 15 primers generated a total of 109 polymorphic bands. The bootstrap analysis showed that from 100 bands onward the work becomes more reliable, since the magnitude of the correlation was very close to the maximum value (r=0.99), the sum of squared deviations (SQd) reached a low value of 1.25, and the stress value (E) was 0.05. In the analysis between pairs of groups, the highest and lowest similarities were around 0.86 and 0.00, respectively. Regarding the frequency distribution of the similarities obtained among the 5,644 pairs formed in the genetic matrix, 32.69% of the pairs fell into the classes with similarities ranging from 0.01 to 0.10.
Note that the largest percentage (85.59%) of the pairs was distributed in the first three classes at the extremes, and that a minority of them (14.41%) showed similarities ranging from 0.21 to 1.00. The Mantel test showed a correlation of 0.81, and the dendrogram generated 67 groups delimited by the Sm, which was 0.49. The highest similarity was 0.86 and the lowest 0.06. The analysis of molecular variance showed that the percentage of genetic variation among provenances was low but significant (24.03%, p < 0.0001), indicating that most of the variation lies within the populations (75.97%). The RAPD markers were efficient in characterizing genetic similarity.
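The similarity step above treats each bird's RAPD profile as a presence/absence vector and compares pairs with the Jaccard coefficient, which ignores shared absences. A small sketch with made-up band data:

```python
# Jaccard coefficient between two binary RAPD band profiles: shared presences
# over bands present in at least one profile (shared absences are ignored).
def jaccard(a, b):
    both = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return both / either if either else 0.0

# hypothetical presence/absence profiles over 8 bands
bird1 = [1, 0, 1, 1, 0, 1, 0, 1]
bird2 = [1, 0, 1, 0, 0, 1, 1, 1]
print(round(jaccard(bird1, bird2), 2))  # → 0.67
```

Running this over every pair of the 121 samples yields the similarity matrix from which the cophenetic matrix and the dendrogram were derived.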
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)