951 results for Data Storage


Relevance:

60.00%

Abstract:

Of the approximately 25,000 bridges in Iowa, 28% are classified as structurally deficient, functionally obsolete, or both. Because many Iowa bridges require repair or replacement with a relatively limited funding base, there is a need to develop new bridge materials that may lead to longer life spans and reduced life-cycle costs. In addition, new and effective methods for determining the condition of structures are needed to identify when the useful life has expired or other maintenance is required. Due to its unique alloy blend, high-performance steel (HPS) has been shown to offer better weldability, weathering resistance, and fracture toughness than conventional structural steels. Since the development of HPS in the mid-1990s, numerous bridges using HPS girders have been constructed, many of them economically. The East 12th Street Bridge, which replaced a deteriorated box girder bridge, is Iowa’s first bridge constructed with HPS girders. The new structure is a two-span bridge that crosses I-235 in Des Moines, Iowa, providing one lane of traffic in each direction. A remote, continuous, fiber-optic-based structural health monitoring (SHM) system for the bridge was developed using off-the-shelf technologies. In the system, sensors strategically located on the bridge collect raw strain data and then transfer the data via wireless communication to a gateway system at a nearby secure facility. The data are integrated and converted to text files before being uploaded automatically to a website that provides live strain data and a live video stream. A data storage/processing system at the Bridge Engineering Center in Ames, Iowa, permanently stores and processes the data files. Several processes check the overall system’s operation, eliminate temperature effects from the complete strain record, compute the global behavior of the bridge, and count strain cycles at the various sensor locations.
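
The closing sentence describes several post-processing steps. As a rough illustration, the sketch below shows two of them in Python: removing slow thermal drift from a strain record and counting load cycles. The file name, window length, threshold, and the simplified range counting (a stand-in for full rainflow counting) are all assumptions, not the Bridge Engineering Center's actual implementation.

```python
# Illustrative sketch of two post-processing steps described above.
import numpy as np

def remove_temperature_drift(strain, window=3600):
    """Subtract a moving average; thermal strain varies much more
    slowly than traffic-induced strain, so a long window isolates it."""
    kernel = np.ones(window) / window
    drift = np.convolve(strain, kernel, mode="same")
    return strain - drift

def count_strain_cycles(strain, threshold=5.0):
    """Count reversals whose range exceeds a noise threshold
    (a simplified stand-in for full rainflow counting)."""
    extrema = [strain[0]]
    for x in strain[1:]:
        if len(extrema) >= 2 and (extrema[-1] - extrema[-2]) * (x - extrema[-1]) > 0:
            extrema[-1] = x          # still rising/falling: extend the run
        else:
            extrema.append(x)        # direction changed: new turning point
    ranges = np.abs(np.diff(extrema))
    return int(np.sum(ranges > threshold))

strain = np.loadtxt("sensor_07.txt")   # hypothetical uploaded text file
cycles = count_strain_cycles(remove_temperature_drift(strain))
print(f"cycles above threshold: {cycles}")
```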

Relevance:

60.00%

Abstract:

Nowadays, electronic data-processing systems are increasingly significant within the industrial sector. Many needs arise in the world of authentication systems, avionics, data storage equipment, telecommunications, and so on. These technological needs demand control by a reliable, robust system that responds fully to external events and correctly meets the imposed timing constraints, so that it fulfils its purpose efficiently. This is where real-time embedded systems come into play: they offer high reliability and availability, fast response to external system events, strong operational guarantees, and a wide range of applications. This project is intended as an introduction to the world of embedded systems and also explains the operation of the FreeRTOS real-time operating system, whose programming model is based on tasks that are independent of one another. We give an overview of its operating characteristics, how it organizes tasks by means of a scheduler, and some examples for designing applications with it.
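
FreeRTOS itself is written in C and schedules tasks preemptively with tick interrupts and calls such as xTaskCreate(); the task-and-scheduler model the project introduces can nevertheless be suggested with a toy Python sketch, in which independent tasks each carry a priority and a scheduler dispatches whichever ready task ranks highest. The task bodies and tick counts below are invented for illustration.

```python
# Toy, non-preemptive analogy of a priority-based task scheduler.
import heapq

def blink_led():
    while True:
        print("LED toggled")
        yield 3          # "block" for 3 ticks, like vTaskDelay(3)

def log_sensor():
    while True:
        print("sensor sample stored")
        yield 5

def run(tasks, ticks=15):
    # Ready queue ordered by wake tick, then by priority (higher first).
    queue = [(0, -prio, i, fn()) for i, (prio, fn) in enumerate(tasks)]
    heapq.heapify(queue)
    while queue:
        wake, neg_prio, i, task = heapq.heappop(queue)
        if wake >= ticks:
            break
        delay = next(task)                  # run the task until it blocks
        heapq.heappush(queue, (wake + delay, neg_prio, i, task))

run([(2, blink_led), (1, log_sensor)])
```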

Relevance:

60.00%

Abstract:

Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.¦Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.¦Results: 12 software tools were identified, tested, and ranked, yielding a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs let users add their own drug models. 10 programs can compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 can also suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates, such as age, gender, and weight. Among those applying Bayesian analysis, one uses the non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g., in terms of storage or report generation) or less user-friendly.¦Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be judged against the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and report generation.
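
As a rough illustration of the Bayesian a posteriori step these tools automate, the sketch below fits individual pharmacokinetic parameters by maximum a posteriori (MAP) estimation from population priors and a single measured concentration, then proposes a dose for a target trough. The one-compartment model and every numerical value are invented; none of the benchmarked programs' actual algorithms are reproduced here.

```python
# Minimal MAP-based dosage individualization sketch (invented numbers).
import numpy as np
from scipy.optimize import minimize

dose, tau, t_obs, c_obs = 1000.0, 12.0, 11.5, 8.2   # mg, h, h, mg/L
pop_cl, pop_v = 4.0, 30.0        # population means (L/h, L)
omega_cl, omega_v = 0.3, 0.2     # between-subject SD (log scale)
sigma = 0.15                     # residual error SD (log scale)

def conc(cl, v, t):
    """Steady-state concentration, one-compartment IV bolus model."""
    ke = cl / v
    return dose / v * np.exp(-ke * t) / (1 - np.exp(-ke * tau))

def neg_log_posterior(theta):
    cl, v = np.exp(theta)                     # work on the log scale
    prior = ((theta[0] - np.log(pop_cl)) / omega_cl) ** 2 \
          + ((theta[1] - np.log(pop_v)) / omega_v) ** 2
    lik = ((np.log(c_obs) - np.log(conc(cl, v, t_obs))) / sigma) ** 2
    return prior + lik

fit = minimize(neg_log_posterior, x0=np.log([pop_cl, pop_v]))
cl_i, v_i = np.exp(fit.x)
print(f"individual CL={cl_i:.2f} L/h, V={v_i:.1f} L")

# Dose scaling to reach a 5 mg/L trough with individualized parameters:
target = 5.0
new_dose = target / conc(cl_i, v_i, tau) * dose
print(f"suggested dose: {new_dose:.0f} mg every {tau:.0f} h")
```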

Relevance:

60.00%

Abstract:

Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.¦Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.¦Results: 12 software tools were identified, tested, and ranked, yielding a comprehensive review of the available software's characteristics. The number of drugs handled varies from 2 to more than 180, and some programs integrate different population types. 8 programs allow new drug models to be added based on population PK data. 10 tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them can compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 can also suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly.¦Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be judged against the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage, and automated report generation.
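
The weighted scoring grid described in the Methods can be illustrated with a few lines of Python; the criterion weights and the raw scores below are invented, not the study's actual values.

```python
# Invented weighted-scoring sketch for ranking evaluated programs.
weights = {"pharmacokinetic relevance": 0.35, "user-friendliness": 0.25,
           "computing aspects": 0.15, "interfacing": 0.15, "storage": 0.10}

scores = {   # raw 0-10 scores per criterion, per program (illustrative)
    "MwPharm":  {"pharmacokinetic relevance": 9, "user-friendliness": 8,
                 "computing aspects": 8, "interfacing": 6, "storage": 7},
    "TCIWorks": {"pharmacokinetic relevance": 8, "user-friendliness": 9,
                 "computing aspects": 7, "interfacing": 7, "storage": 6},
}

ranking = sorted(((sum(w * s[c] for c, w in weights.items()), name)
                  for name, s in scores.items()), reverse=True)
for total, name in ranking:
    print(f"{name}: {total:.2f}")
```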

Relevance:

60.00%

Abstract:

The purpose of this project was to investigate the potential for collecting and using data from mobile terrestrial laser scanning (MTLS) technology to reduce the need for traditional survey methods in the development of highway improvement projects at the Iowa Department of Transportation (Iowa DOT). The primary interest in investigating mobile scanning technology is to minimize the exposure of field surveyors to dangerous high-volume traffic. Issues investigated were cost, timeframe, accuracy, contracting specifications, data capture extents, data extraction capabilities, and data storage issues associated with mobile scanning. The project area selected for evaluation was the I-35/IA 92 interchange in Warren County, Iowa. The project covers approximately one mile of I-35, one mile of IA 92, 4 interchange ramps, and the bridges within these limits. Delivered LAS and image files for this project totaled almost 31 GB, and the scan data grow nearly 6-fold in size after post-processing. Camera data, when enabled, produced approximately 900 MB of imagery per mile using a 2-camera, 5-megapixel system. A comparison was made between 1,823 points on the pavement surveyed by Iowa DOT staff using a total station and the same points generated through the MTLS process. The data acquired through MTLS and data processing met the Iowa DOT specifications for engineering survey. A list of benefits and challenges is included in the detailed report. With the success of this project, it is anticipated that additional projects will be scanned for the Iowa DOT for use in the development of highway improvement projects.
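
A point-by-point accuracy check like the one described (total station versus MTLS) might be sketched as follows; the file names, formats, and the 15 mm tolerance are assumptions, not the Iowa DOT specification.

```python
# Hedged sketch: elevation error of surveyed points vs. nearest scan points.
import numpy as np

survey = np.loadtxt("total_station_points.csv", delimiter=",")  # x, y, z
scan = np.loadtxt("mtls_points.csv", delimiter=",")             # x, y, z

dz = []
for x, y, z in survey:
    d2 = (scan[:, 0] - x) ** 2 + (scan[:, 1] - y) ** 2
    dz.append(scan[np.argmin(d2), 2] - z)   # z error at nearest scan point

dz = np.array(dz)
rmse = np.sqrt(np.mean(dz ** 2))
print(f"n={len(dz)}  mean dz={dz.mean()*1000:.1f} mm  RMSE={rmse*1000:.1f} mm")
print("within tolerance" if rmse < 0.015 else "exceeds tolerance")
```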

Relevance:

60.00%

Abstract:

Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, the format is often the first obstacle: the lack of standardized ways of exploring different data layouts forces the problem to be solved from scratch each time. The ability to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling it becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling large numbers of rows rather than columns; performance for datasets with atypical layouts is therefore often unacceptable. Other challenges include schema creation, updates, and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach - data stay mostly in the CSV files; "zero configuration" - no need to specify a database schema; implementation in C++ with boost [1], SQLite [2] and Qt [3], requiring no installation and having a very small footprint; query rewriting, dynamic creation of indices for appropriate columns, and static data retrieval directly from CSV files to ensure efficient plan execution; effortless support for millions of columns; easy handling of mixed text/number data thanks to per-value typing; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It needs no prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
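
The system described queries CSV files in place; Python's standard library can emulate the end-user experience (SQL over a CSV) only by copying the rows into SQLite first, which is precisely the cost the "no copy" design avoids. The sketch below, with invented file and column names, shows that baseline approach.

```python
# Baseline "copy into SQLite, then query" approach for comparison.
import csv, sqlite3

def query_csv(path, sql):
    with open(path, newline="") as f:
        rows = csv.reader(f)
        header = next(rows)
        con = sqlite3.connect(":memory:")
        cols = ", ".join(f'"{c}"' for c in header)
        con.execute(f"CREATE TABLE t ({cols})")    # untyped columns: SQLite
        con.executemany(                           # applies per-value typing
            f"INSERT INTO t VALUES ({','.join('?' * len(header))})", rows)
        return con.execute(sql).fetchall()

print(query_csv("measurements.csv",
                "SELECT station, AVG(CAST(value AS REAL)) FROM t GROUP BY station"))
```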

Relevance:

60.00%

Abstract:

Global positioning systems (GPS) offer a cost-effective and efficient method to input and update transportation data. The spatial locations of objects provided by GPS are easily integrated into geographic information systems (GIS). The storage, manipulation, and analysis of spatial data are also relatively simple in a GIS. However, many data storage and reporting methods at transportation agencies rely on linear referencing methods (LRMs); consequently, GPS data must be able to link with linear referencing. Unfortunately, the two systems are fundamentally incompatible in the way data are collected, integrated, and manipulated. For the spatial data collected using GPS to be integrated into a linear referencing system or shared among LRMs, a number of issues need to be addressed. This report documents and evaluates several of those issues and offers recommendations. To evaluate the issues associated with integrating GPS data with an LRM, a pilot study was created. For the pilot study, point features, a linear datum, and a spatial representation of an LRM were created for six test roadway segments located within the boundaries of the pilot study conducted by the Iowa Department of Transportation linear referencing system project team. Various issues in integrating point features with an LRM, or between LRMs, are discussed and recommendations are provided. The accuracy of GPS is discussed, including issues such as point features mapping to the wrong segment. Another topic is the loss of spatial information that occurs when a three-dimensional or two-dimensional spatial point feature is converted to a one-dimensional representation on an LRM; recommendations include storing point features as spatial objects when necessary and preserving information such as coordinates and elevation. The poor spatial accuracy characteristic of much of the cartography on which LRMs are often based is another topic discussed, including the associated linear and horizontal offset errors. The final topic discussed is issues in transferring point feature data between LRMs.
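
The core conversion discussed here, from a 2-D GPS point to a one-dimensional measure along a route, amounts to projecting the point onto a route polyline. The sketch below is a generic textbook computation, assumes planar (already projected) coordinates, and also returns the perpendicular offset, the kind of information the report recommends preserving rather than discarding.

```python
# Generic point-to-polyline projection for linear referencing.
import math

def locate_on_route(route, px, py):
    """Return (measure, offset): distance along the polyline to the
    closest point, plus the perpendicular offset that is lost when
    only the one-dimensional measure is stored."""
    best_offset, best_measure = float("inf"), 0.0
    measure = 0.0
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len = math.hypot(dx, dy)
        # Clamp the projection parameter to stay on the segment.
        t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg_len ** 2))
        cx, cy = x1 + t * dx, y1 + t * dy
        offset = math.hypot(px - cx, py - cy)
        if offset < best_offset:
            best_offset, best_measure = offset, measure + t * seg_len
        measure += seg_len
    return best_measure, best_offset

route = [(0, 0), (100, 0), (200, 50)]      # a simple two-segment route
print(locate_on_route(route, 120.0, 12.0))  # (measure, offset)
```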

Relevance:

60.00%

Abstract:

The European Space Agency's Gaia mission will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way) by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows hypercube generation to be done easily within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm without dealing with any specific interface to the lower-level distributed system implementation (Hadoop). Furthermore, we show how executing the framework with different data storage model configurations (i.e., row- or column-oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of deploying the framework on a public cloud provider, benchmark it against other popular solutions available (which are not always the best for such ad hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated astronomical data analysis workshops.
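
A framework-free sketch of the hypercube-generation pattern can make the idea concrete: the map phase bins each record on the chosen dimensions, and the reduce phase sums counts per cell. In the paper this runs on Hadoop over Gaia-scale data; the field names and bin widths below are invented.

```python
# Minimal map/reduce histogram (2-D "hypercube") over toy star records.
from collections import Counter
from itertools import chain

def map_record(star):
    """Emit one ((mag_bin, parallax_bin), 1) pair per input record."""
    mag_bin = int(star["g_mag"])              # 1-magnitude bins
    plx_bin = int(star["parallax"] // 0.5)    # 0.5 mas bins
    yield (mag_bin, plx_bin), 1

def reduce_counts(pairs):
    """Sum the counts for every hypercube cell."""
    cube = Counter()
    for key, count in pairs:
        cube[key] += count
    return cube

stars = [{"g_mag": 14.2, "parallax": 2.3},
         {"g_mag": 14.7, "parallax": 2.1},
         {"g_mag": 19.0, "parallax": 0.4}]
cube = reduce_counts(chain.from_iterable(map_record(s) for s in stars))
print(cube)   # Counter({(14, 4): 2, (19, 0): 1})
```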

Relevance:

60.00%

Abstract:

This work describes a knowledge base of human ALU elements. The ontology incorporates SO and GO terms and is aimed at describing the genomic context of the ALU set. For each ALU element, the nearest gene and transcript are stored, together with their functional annotation according to GO, the state of the surrounding chromatin, and the transcription factors present in the ALU. Semantic rules have been incorporated to facilitate storing, querying, and integrating the information. The ALU ontology is fully analyzable with reasoners such as Pellet and has been partially transferred to a semantic wiki.
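
A hedged sketch of how such a knowledge base could be queried programmatically, here with rdflib and SPARQL; the namespace, file name, and property names (nearestGene, chromatinState) are invented stand-ins for the ontology's actual vocabulary.

```python
# Illustrative SPARQL query over the ALU knowledge base (names invented).
from rdflib import Graph

g = Graph()
g.parse("alu_ontology.owl", format="xml")   # hypothetical export file

query = """
PREFIX alu: <http://example.org/alu#>
SELECT ?element ?gene ?state WHERE {
    ?element a alu:AluElement ;
             alu:nearestGene ?gene ;
             alu:chromatinState ?state .
}
LIMIT 10
"""
for element, gene, state in g.query(query):
    print(element, gene, state)
```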

Relevance:

60.00%

Abstract:

The subject of this study is predicting the capacity needs of the Fenix information system developed by TietoEnator Oy. The goals of the work are to become familiar with the different subsystems of the Fenix system, to find a way to identify and model each subsystem's contribution to the system load, and to determine, on a preliminary basis, which parameters affect the load those subsystems generate. Part of this work is to examine different simulation alternatives and to assess their suitability for modeling complex systems. Based on the collected information, a simulation model describing the load on the system's data warehouse is created. Using data obtained from the model together with measurements from the production system, the model is refined to correspond ever more closely to the behavior of the real system. The model is examined, for example, with respect to the simulated system load and queue behavior. In the production system, changes in the behavior of different load sources are measured, for example, in relation to the number of users and the time of day. The results of this work are intended to serve as a basis for follow-up research in which the parameterization of the subsystems is refined further, the model's ability to describe the real system is improved, and the scope of the model is extended.
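
The kind of queueing behavior such a simulation model examines can be shown with a minimal discrete-event simulation: jobs arrive at random, a single server processes them, and the waiting time in queue is observed. The arrival and service rates below are invented and do not come from the Fenix system.

```python
# Minimal single-server (M/M/1-style) load simulation with invented rates.
import random

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE = 0.9, 1.0     # jobs per time unit

t, server_free_at, waits = 0.0, 0.0, []
for _ in range(10_000):
    t += random.expovariate(ARRIVAL_RATE)     # next job arrives
    start = max(t, server_free_at)            # wait if the server is busy
    service = random.expovariate(SERVICE_RATE)
    server_free_at = start + service
    waits.append(start - t)                   # time spent in queue

print(f"mean wait in queue: {sum(waits)/len(waits):.2f} time units")
# At 90% utilization, queueing delay dominates - the kind of effect a
# capacity model is meant to expose before it appears in production.
```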

Relevance:

60.00%

Abstract:

The goal of this work is to design a simple Internet-based opinion poll system and to present in detail the system's implementation and the methods involved. Of the methods, only those pre-selected for implementing the system, presenting the data, styling the presentation, and storing the data are described. The system is implemented in HTML and PHP, using CSS style definitions and XML-formatted files as data stores. Regarding the system design, the work describes the two separate user interfaces to be implemented, an administrator interface and a normal-user interface, and the functions of each. The administrator's most important functions are creating opinion polls, adding users to polls, and monitoring poll results. A normal user's functions are limited to logging in and answering a poll. The description of the implementation covers in detail the functions of these two user interfaces and how those functions are realized. In addition, the exact format of the contents of the files serving as the system's data stores is defined. The end result of the work is a working opinion poll system built with the chosen implementation techniques, together with this document, which focuses on the system's design and on explaining its implementation. The implemented system is not perfect; future development could consider, for example, using a database as the system's data store and implementing some additional features. The goal was nevertheless reached, as the implemented system works and is fit for its purpose.
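
The thesis system records poll answers in XML files via PHP; the snippet below shows the same storage idea in Python. The element and attribute names are invented, a minimal file with a root element (e.g. an empty poll element) is assumed to exist, and the actual file format is the one defined in the thesis.

```python
# Illustrative XML-file data store for poll answers (names invented).
import xml.etree.ElementTree as ET

def record_answer(path, user, choice):
    """Append one vote element to an existing poll XML file."""
    tree = ET.parse(path)
    poll = tree.getroot()
    ET.SubElement(poll, "vote", user=user, choice=choice)
    tree.write(path, encoding="utf-8", xml_declaration=True)

def tally(path):
    """Count votes per choice, the administrator's results view."""
    counts = {}
    for vote in ET.parse(path).getroot().iter("vote"):
        c = vote.get("choice")
        counts[c] = counts.get(c, 0) + 1
    return counts

record_answer("poll_42.xml", user="jdoe", choice="yes")
print(tally("poll_42.xml"))
```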

Relevance:

60.00%

Abstract:

We have studied the abrupt and hysteretic changes of resistance in MgO-based capacitor devices. The switching behavior is discussed in terms of the formation and rupture of conduction filaments due to the migration of structural defects in the electric field, together with the redox events that affect the mobile carriers. The results presented in this paper suggest that MgO transparent films combining ferromagnetism and multilevel switching characteristics might pave the way to a new method for spintronic multibit data storage.

Relevance:

60.00%

Abstract:

Managing a project in the information age increasingly means managing and leading the knowledge of a virtual network. Building a third-generation mobile network is a project that exploits a virtual network and involves numerous parties. The expansion of mobile network projects and intensifying competition in the field have increased the demand for knowledge management systems. Demand is strongest for systems that allow all the activities of a mobile network delivery project to be handled in a single system. The main goal of this study was to identify the key requirements for a knowledge management system for a project building a third-generation mobile network. The study was carried out using qualitative research methods, through a literature review and open-ended thematic interviews. The interviews showed that the most important requirements for the knowledge management system of a project organization building third-generation mobile networks are:
1. The knowledge management system must act as a centralized data repository.
2. It must be possible to drill into the project data from at least process-based and location-based perspectives.
3. All project parties must be able to access the knowledge management system.
4. Access to the knowledge management system must be possible from anywhere, at any time of day.
5. The user interface of the knowledge management system must be as simple as possible. The system must be connected to the organizations' other systems.