998 results for data backup
Abstract:
Information systems that rely on databases are critically important to the various parts and functions of the information society. The continuity of data processing and the high availability of information systems must be safeguarded as comprehensively as possible at all times, and it must be possible to recover from failures so that work and business can continue. The purpose of this thesis was to examine different methods for the continuous backup of such databases, both with local server systems and with standby systems maintained over a data network. With well-designed local backups, a failed database and its contents can be restored to any point in time before the failure. Standby systems, in turn, can be taken into use immediately if an entire data center fails or becomes unavailable. In addition, depending on the solution, several data centers can serve their users simultaneously, balancing the load on the information system, offering additional possibilities for data processing, and bringing the same data closer to the users it serves. The thesis focuses mainly on the backup options available to information systems that use Oracle databases. These database systems are widely used both in the business world and in the public sector.
Abstract:
Backup and information security in a Finnish micro-enterprise are matters that often do not receive sufficient attention because of missing expertise, lack of time or insufficient resources. Information security was chosen as one research topic of this thesis because it is a current and much-discussed subject. Backup was chosen as the second research topic, as it is very closely tied to information security and is a mandatory measure for guaranteeing the continuity of a company's business. This thesis examines the information security of a micro-enterprise and considers how it can be improved with simple methods. In addition, the backup of a micro-enterprise and the related issues and problems are examined. The goal of the thesis is to study information security and backup at a general level and to create several alternative backup solutions based on the literature and theory. The thesis examines a company's information security and backup using a fictional model company as the research environment, because in this way the research environment can be defined and delimited precisely. Since these subject areas are quite broad, the scope of the thesis is limited mainly to backup, potential security threats and the study of information security at a general level. Based on the study, two possible local backup solutions and one remote backup solution were developed. The local backup solutions are backup to an external hard drive and backup to a NAS (Network Attached Storage) device. The remote backup solution is backup to a remote server, such as a cloud service. Although a NAS device is a local backup solution, it can also be used for remote backup, depending on where the device is located. The thesis briefly compares and evaluates the solution alternatives using evaluation criteria created on the basis of the study. A scoring model is also presented to make it easier to evaluate the solutions and to choose a suitable alternative. Each solution has its own advantages and disadvantages, so choosing the right one is not always easy. The suitability of a solution for a particular company always depends on the company's own needs and requirements. Because different companies often have different requirements and needs for backup, finding the backup solution that best fits a company can be difficult and time-consuming. The solution alternatives presented in this thesis serve as a guideline and foundation for planning, selecting, deciding on and building a micro-enterprise's backup system.
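The thesis's actual scoring model and criteria are not given in the abstract; a minimal sketch of a weighted-sum scoring model of this kind, with purely illustrative criteria, weights and scores, might look like this in Python:

```python
# Hypothetical weighted-sum scoring model for comparing backup alternatives.
# The criteria, weights and scores below are illustrative only; the thesis
# defines its own evaluation criteria and point values.

CRITERIA_WEIGHTS = {      # weight of each criterion (sums to 1.0)
    "cost": 0.3,
    "ease_of_use": 0.2,
    "recovery_speed": 0.2,
    "offsite_protection": 0.3,
}

# Scores per alternative on a 1-5 scale (illustrative values).
ALTERNATIVES = {
    "external_hdd": {"cost": 5, "ease_of_use": 4, "recovery_speed": 5, "offsite_protection": 1},
    "nas":          {"cost": 3, "ease_of_use": 3, "recovery_speed": 4, "offsite_protection": 2},
    "cloud":        {"cost": 2, "ease_of_use": 4, "recovery_speed": 2, "offsite_protection": 5},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

if __name__ == "__main__":
    # Rank the alternatives from best to worst total score.
    for name, scores in sorted(ALTERNATIVES.items(),
                               key=lambda kv: weighted_score(kv[1]),
                               reverse=True):
        print(f"{name:12s} {weighted_score(scores):.2f}")
```

The weights encode a company's own priorities, which is why the same model can rank the three alternatives differently for different companies.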
Abstract:
Final Master's project submitted for the degree of Master in Electronics and Telecommunications Engineering.
Abstract:
Email has become a very important tool for society and a critical one for many businesses. With the proliferation of free webmail providers, one can easily access email from anywhere, and even keep accessing it with the usual tools. As a result, many companies and individuals no longer run their own email services and have switched to what other providers offer. But what about email backups? Which providers allow deleted messages to be recovered? Under what conditions? For how long do they keep the backups? What happens if the provider stops offering the service? With backimap, one can regain control of email backups and rest a little easier.
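The abstract does not describe backimap's implementation; as a minimal sketch of the underlying idea, pulling a mailbox over IMAP into local storage with Python's standard imaplib could look like this (host, credentials and folder are placeholders):

```python
# Minimal sketch of IMAP mailbox backup (the general idea behind tools such as
# backimap); host, credentials and folder names below are placeholders.
import imaplib
import pathlib

HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "secret"
BACKUP_DIR = pathlib.Path("mail_backup")
BACKUP_DIR.mkdir(exist_ok=True)

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX", readonly=True)            # never modify the mailbox
    _, data = imap.search(None, "ALL")             # sequence numbers of all messages
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")  # full raw message
        raw = msg_data[0][1]
        (BACKUP_DIR / f"{num.decode()}.eml").write_bytes(raw)
```

Keeping the raw RFC822 messages as local .eml files means the backup remains usable even if the provider discontinues the service.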
Abstract:
A short overview of why and how to make copies of data so that the additional copies can be used to restore the original after a data loss event, for example by burning CDs or copying to memory sticks.
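As a minimal, hypothetical illustration of such a file-level copy (the source and destination paths are placeholders and would normally point at removable media):

```python
# Simple file-level backup to removable media; paths are placeholders.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path.home() / "Documents"
DEST = Path("/media/usb_stick") / f"backup-{datetime.now():%Y%m%d-%H%M%S}"

# copytree preserves the directory structure; dirs_exist_ok needs Python 3.8+.
shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)
print(f"Copied {SOURCE} -> {DEST}")
```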
Abstract:
This article presents the data-rich findings of an experiment in enlisting patron-driven/demand-driven acquisitions (DDA) of ebooks in two ways. The first experiment compared DDA ebook usage against the circulation of newly ordered hardcopy materials, both overall and as ebook vs. print usage within the same subject areas. The second experiment used DDA ebooks as a backup plan for unfunded requests left over at the end of the fiscal year.
Abstract:
Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication is backup storage systems, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design, which allows a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important but often overlooked data structure in data deduplication systems, so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, this compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk lookup disk bottleneck, which limits either the scalability or the throughput of data deduplication systems. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches and is shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20% and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
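The abstract does not include any of the thesis's algorithms or code; as a rough, hypothetical sketch of what fingerprinting-based deduplication with chunking and a file recipe means (fixed-size chunks and an in-memory chunk index are deliberate simplifications), consider:

```python
# Minimal sketch of fingerprinting-based deduplication: split a stream into
# chunks, fingerprint each chunk, and store a chunk only if its fingerprint has
# not been seen before. Fixed-size chunking and the in-memory index are
# simplifications of what real deduplication systems do.
import hashlib

CHUNK_SIZE = 8 * 1024  # 8 KiB chunks (illustrative)

def deduplicate(stream, chunk_store: dict) -> list:
    """Return the file 'recipe': the ordered list of chunk fingerprints."""
    recipe = []
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in chunk_store:      # chunk lookup: the potential disk bottleneck
            chunk_store[fp] = chunk    # a new chunk is stored only once
        recipe.append(fp)
    return recipe

def restore(recipe: list, chunk_store: dict) -> bytes:
    """Rebuild the original data from its recipe."""
    return b"".join(chunk_store[fp] for fp in recipe)
```

In a real system the dictionary is an on-disk index, which is exactly where the chunk lookup disk bottleneck addressed by the Block Locality Caching arises.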
Abstract:
Adult male and female emperor penguins (Aptenodytes forsteri) were fitted with satellite transmitters at Pointe-Géologie (Adélie Land), Dumont d'Urville Sea coast, in November 2005. Nine of 30 data sets were selected for analyses to investigate the penguins' diving behaviour at high resolution (doi:10.1594/PANGAEA.633708, doi:10.1594/PANGAEA.633709, doi:10.1594/PANGAEA.633710, doi:10.1594/PANGAEA.633711). The profiles are in synchrony with foraging trips of the birds during austral spring (doi:10.1594/PANGAEA.472171, doi:10.1594/PANGAEA.472173, doi:10.1594/PANGAEA.472164, doi:10.1594/PANGAEA.472160, doi:10.1594/PANGAEA.472161). Corresponding high-resolution winter data (n = 5; archived elsewhere) were provided by A. Ancel, Centre d'Ecologie et Physiologie Energétiques, CNRS, Strasbourg, France. Air-breathing divers tend to increase their overall dive duration with increasing dive depth. In most penguin species, this occurs due to increasing transit (descent and ascent) durations but also because the duration of the bottom phase of the dive increases with increasing depth. We interpreted the efficiency with which emperor penguins can exploit different diving depths by analysing dive depth profile data of nine birds studied during the early and late chick-rearing period in Adélie Land, Antarctica. Another eight data sets of dive depth and duration frequency recordings (doi:10.1594/PANGAEA.472150, doi:10.1594/PANGAEA.472152, doi:10.1594/PANGAEA.472154, doi:10.1594/PANGAEA.472155, doi:10.1594/PANGAEA.472142, doi:10.1594/PANGAEA.472144, doi:10.1594/PANGAEA.472146, doi:10.1594/PANGAEA.472147), which back up the analysed high-resolution depth profile data, and dive depth and duration frequency recordings of another bird (doi:10.1594/PANGAEA.472156, doi:10.1594/PANGAEA.472148) did not match the requirement of high resolution for analyses. Eleven additional data sets provide information on the overall foraging distribution of emperor penguins during the period analysed (doi:10.1594/PANGAEA.472157, doi:10.1594/PANGAEA.472158, doi:10.1594/PANGAEA.472162, doi:10.1594/PANGAEA.472163, doi:10.1594/PANGAEA.472166, doi:10.1594/PANGAEA.472167, doi:10.1594/PANGAEA.472168, doi:10.1594/PANGAEA.472170, doi:10.1594/PANGAEA.472172, doi:10.1594/PANGAEA.472174, doi:10.1594/PANGAEA.472175).
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and result in a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, GO biological processes and KEGG pathways enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
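The IIS codebase itself is not shown in the abstract; purely as a generic illustration of building an interaction network and computing topological metrics of the kind the Interactome module gathers, one could use the third-party networkx package (node names and edge attributes below are placeholders, not IIS data):

```python
# Generic illustration (not IIS code) of assembling an interaction network and
# computing topological metrics; node names and edge attributes are placeholders.
import networkx as nx

G = nx.Graph()
# Each edge is one identified interaction, annotated with its evidence source.
G.add_edge("ProteinA", "ProteinB", source="two-hybrid")
G.add_edge("ProteinB", "MetaboliteX", source="metabolomics")
G.add_edge("ProteinA", "ProteinC", source="proteomics")

# Topological metrics of the kind gathered by an interactome module.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
print(degree, betweenness)

# networkx has no XGMML writer; GraphML is an alternative exchange format
# that Cytoscape can also import.
nx.write_graphml(G, "interactome.graphml")
```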
Abstract:
The article investigates patterns of performance in grip strength, gait speed and self-rated health, and the relationships between them, considering the variables of gender, age and family income. The study was conducted in a probabilistic sample of community-dwelling elderly people aged 65 and over, members of a population study on frailty. A total of 689 elderly people without cognitive deficit suggestive of dementia underwent tests of gait speed and grip strength. Comparisons between groups were based on low, medium and high speed and strength. Self-rated health was assessed using a 5-point scale. Males and the younger elderly individuals scored significantly higher on grip strength and gait speed than females and the oldest did; the richest scored higher than the poorest on grip strength and gait speed; women and men aged over 80 had weaker grip strength and lower gait speed; slow gait speed and low income emerged as risk factors for a worse health evaluation. Lower muscular strength affects the self-rated assessment of health because it results in a reduction in functional capacity, especially in the presence of poverty and a lack of compensatory factors.
Abstract:
Obstructive sleep apnea syndrome has a high prevalence among adults. Cephalometric analysis can be a valuable method for evaluating patients with this syndrome. The aim of this study was to correlate cephalometric data with the apnea-hypopnea index. We performed a retrospective, cross-sectional study that analyzed the cephalometric data of patients followed in the Sleep Disorders Outpatient Clinic of the Discipline of Otorhinolaryngology of a university hospital, from June 2007 to May 2012. Ninety-six patients were included (45 men and 51 women), with a mean age of 50.3 years. A total of 11 patients had snoring, 20 had mild apnea, 26 had moderate apnea, and 39 had severe apnea. The distance from the hyoid bone to the mandibular plane was the only variable that showed a statistically significant correlation with the apnea-hypopnea index. Cephalometric variables are useful tools for the understanding of obstructive sleep apnea syndrome, and the distance from the hyoid bone to the mandibular plane showed a statistically significant correlation with the apnea-hypopnea index.
Abstract:
In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits depending on the quantification assays. A further complication arises when these continuous repeated measures exhibit heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To account for the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization-type algorithm is developed for computing the maximum likelihood estimates, obtaining as a by-product the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus-AIDS (HIV-AIDS) study and several simulation studies.
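The article's model is not reproduced in the abstract; for orientation, the damped exponential correlation structure mentioned above is commonly written as follows (the symbols are generic, not taken from the paper):

```latex
% Damped exponential correlation between residuals measured at times t_{ij} and t_{ik}
% (standard form): \phi controls the strength of the autocorrelation and
% \theta is the damping parameter.
\[
  \mathrm{corr}\left(\varepsilon_{ij}, \varepsilon_{ik}\right)
  \;=\; \phi^{\,\lvert t_{ij} - t_{ik}\rvert^{\theta}},
  \qquad 0 \le \phi < 1,\; \theta \ge 0 .
\]
% \theta = 0 recovers compound symmetry (constant correlation);
% \theta = 1 recovers a continuous-time AR(1) structure.
```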
Abstract:
The aim was to assess the completeness and reliability of the data in the Information System on Live Births (Sinasc). A cross-sectional analysis of the reliability and completeness of Sinasc data was performed using a sample of Live Birth Certificates (LBCs) from 2009 related to births in Campinas, Southeast Brazil. For data analysis, hospitals were grouped according to category of service (Unified National Health System, private or both); 600 LBCs were randomly selected and the data were collected in LBC copies from mothers' and newborns' hospital records and by telephone interviews. The completeness of the LBCs was evaluated by calculating the percentage of blank fields, and agreement between the original LBCs and the copies was evaluated using kappa and intraclass correlation coefficients. The completeness of the LBCs ranged from 99.8% to 100%. For most items the agreement was excellent; however, it was acceptable for marital status, maternal education and the newborns' race/color, low for prenatal visits and the presence of birth defects, and very low for the number of deceased children. The results showed that the municipal Sinasc is reliable for most of the studied variables. Investment in the training of professionals is suggested to improve the system's capacity to support the planning and implementation of health activities for the benefit of the maternal and child population.
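For reference, the kappa agreement measure referred to above has the standard form (p_o and p_e are generic symbols for observed and chance-expected agreement, not values from the study):

```latex
% Cohen's kappa (standard definition):
%   p_o = observed proportion of agreement between original LBCs and copies,
%   p_e = proportion of agreement expected by chance.
\[
  \kappa \;=\; \frac{p_o - p_e}{1 - p_e}
\]
% \kappa = 1 indicates perfect agreement; \kappa = 0 indicates agreement
% no better than chance.
```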
Abstract:
Often in biomedical research, we deal with continuous (clustered) proportion responses ranging between zero and one that quantify the disease status of the cluster units. Interestingly, the study population might also consist of relatively disease-free as well as highly diseased subjects, contributing to proportion values in the interval [0, 1]. Regression on a variety of parametric densities with support lying in (0, 1), such as beta regression, can assess important covariate effects, but these densities are inappropriate in the presence of zeros and/or ones. To circumvent this, we introduce a general proportion density and further augment the probabilities of zero and one to this density, controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to available freeware. Bayesian case-deletion influence diagnostics based on q-divergence measures are automatic from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and an application to a real dataset from a clinical periodontology study.
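The abstract does not give the augmented density itself; a standard way to write a zero-and-one augmented proportion density of the kind described (with generic symbols, not the paper's notation) is:

```latex
% Zero-and-one augmented proportion density: point masses at 0 and 1 combined
% with a density f on (0,1); p_0, p_1 and f are generic symbols.
\[
  g(y) \;=\;
  \begin{cases}
    p_0, & y = 0,\\[2pt]
    p_1, & y = 1,\\[2pt]
    (1 - p_0 - p_1)\, f(y), & 0 < y < 1,
  \end{cases}
  \qquad p_0, p_1 \ge 0,\; p_0 + p_1 < 1,
\]
% where f has support (0,1) (e.g. a beta or other proportion density)
% and captures the strictly interior responses.
```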