960 results for Data processing service centers -- TFC


Relevance:

100.00%

Publisher:

Abstract:

The goal of this TFC (final degree project) is to build a suite that covers the entire production line of a podcast, that is: capture of a live audio signal, transcoding, classification, storage and, finally, distribution over the Internet.
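As a rough illustration of such a production chain, the sketch below wires the five stages together in Python; every stage body and name here is a placeholder assumption, not part of the project itself.

```python
# Illustrative sketch of a podcast production pipeline: capture, transcode,
# classify, store, publish. All stage implementations are placeholders.
from dataclasses import dataclass


@dataclass
class Episode:
    raw_audio: bytes          # captured live signal
    encoded: bytes = b""      # transcoded audio (e.g. MP3/OGG)
    category: str = ""        # assigned classification
    url: str = ""             # public URL after publication


def capture(source: str) -> Episode:
    # Placeholder: read the live audio signal from a capture device or stream.
    return Episode(raw_audio=b"...")

def transcode(ep: Episode) -> Episode:
    # Placeholder: re-encode the raw signal into a distribution format.
    ep.encoded = ep.raw_audio
    return ep

def classify(ep: Episode) -> Episode:
    # Placeholder: tag the episode (topic, programme, language, ...).
    ep.category = "uncategorised"
    return ep

def store(ep: Episode) -> Episode:
    # Placeholder: persist the encoded file and its metadata.
    return ep

def publish(ep: Episode) -> Episode:
    # Placeholder: expose the stored episode via a feed or web server.
    ep.url = "https://example.org/feed.xml"
    return ep


if __name__ == "__main__":
    episode = capture("line-in")
    for stage in (transcode, classify, store, publish):
        episode = stage(episode)
    print(episode.category, episode.url)
```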

Relevance:

100.00%

Publisher:

Abstract:

Design and implementation of a desktop client application for note management, based on the requirements derived from the TreeNotes example described in the TFC assignment.

Relevance:

100.00%

Publisher:

Abstract:

This project aims to bring together all the concepts, criteria and characteristics that can be applied to a project so that project deliverables are faithful to, and consistent with, the service or product actually being developed. To ensure success, we focus on the design, planning and visualisation of projects using the WBS (Work Breakdown Structure), or EDT in its Catalan form. This technique lets the project team lay out the project scope schematically and share it, in a clear and structured way, with the various actors or people involved.
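A minimal sketch of the idea behind a WBS as a data structure: a tree of work packages printed with its hierarchical numbering. The example items are invented for illustration only.

```python
# Minimal sketch: a WBS as a tree of work packages, printed with the usual
# hierarchical numbering (1, 1.1, 1.2, 2, ...). Items below are invented.
from dataclasses import dataclass, field


@dataclass
class WBSNode:
    name: str
    children: list["WBSNode"] = field(default_factory=list)

    def print_tree(self, prefix: str = "") -> None:
        # Recursively print each element with its WBS code.
        for i, child in enumerate(self.children, start=1):
            code = f"{prefix}{i}"
            print(f"{code} {child.name}")
            child.print_tree(prefix=f"{code}.")


project = WBSNode("Project", [
    WBSNode("Planning", [WBSNode("Scope definition"), WBSNode("Schedule")]),
    WBSNode("Design"),
    WBSNode("Implementation", [WBSNode("Module A"), WBSNode("Module B")]),
    WBSNode("Delivery"),
])
project.print_tree()
```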

Relevance:

100.00%

Publisher:

Abstract:

Project for managing the Computer Science Department section of the website of the Institut d'Educació Secundària Eduardo Merello in Port de Sagunt. The ultimate goal of the project is to manage all the information related to the Computer Science department by developing a simple tool that, integrated into the school's website, is convenient, practical and useful for both teachers and students.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND Only multifaceted, hospital-wide interventions have been successful in achieving sustained improvements in hand hygiene (HH) compliance. METHODOLOGY/PRINCIPAL FINDINGS Pre-post intervention study of HH performance at baseline (October 2007-December 2009) and during a two-phase intervention. Phase 1 (2010) applied the multimodal WHO approach. Phase 2 (2011) added Continuous Quality Improvement (CQI) tools and was based on: a) increased placement of alcohol hand rub (AHR) dispensers (from 0.57 dispensers/bed to 1.56); b) increased frequency of audits (three days every three weeks: the "3/3 strategy"); c) implementation of a standardized registration form for HH corrective actions; d) Statistical Process Control (SPC) as the time-series analysis methodology, using appropriate control charts. During the intervention period we performed 819 scheduled direct-observation audits, which provided data from 11,714 HH opportunities. The most remarkable findings were: a) a significant improvement in HH compliance with respect to baseline (25% mean increase); b) a sustained high level (82%) of HH compliance during the intervention; c) a significant increase in AHR consumption over time; d) a significant decrease in the rate of healthcare-acquired MRSA; e) small but significant improvements in HH compliance when comparing phase 2 to phase 1 [79.5% (95% CI: 78.2-80.7) vs 84.6% (95% CI: 83.8-85.4), p<0.05]; f) successful use of control charts to identify significant negative and positive deviations (special causes) in the HH compliance process over time ("positive": 90.1%, the highest HH compliance, coinciding with the "World hygiene day"; "negative": 73.7%, the lowest HH compliance, coinciding with a statutory lay-off proceeding). CONCLUSIONS/SIGNIFICANCE CQI tools may be a key addition to the WHO strategy for maintaining good HH performance over time. In addition, SPC has proven to be a powerful methodology for detecting special causes in HH performance (positive and negative) and for establishing adequate feedback to healthcare workers.
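To make the SPC idea concrete, the sketch below computes centre line and 3-sigma control limits for audited compliance proportions (a p-chart), which is one common control-chart choice for this kind of data; the audit figures are invented placeholders, not results from the study.

```python
# Sketch of a p-chart for hand-hygiene compliance, as used in Statistical
# Process Control: centre line and 3-sigma limits for observed proportions.
# The audit data below are invented placeholders, not the study's results.
import math

# (compliant actions, observed opportunities) per audit period
audits = [(78, 100), (85, 104), (80, 98), (90, 110), (74, 100)]

compliant = sum(c for c, _ in audits)
opportunities = sum(n for _, n in audits)
p_bar = compliant / opportunities          # centre line (overall compliance)

for c, n in audits:
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = min(1.0, p_bar + 3 * sigma)      # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)      # lower control limit
    p = c / n
    flag = "special cause" if (p > ucl or p < lcl) else "in control"
    print(f"p={p:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}  {flag}")
```

Points falling outside the limits correspond to the "special causes" (positive or negative) that the study fed back to healthcare workers.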

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES: To assess the prevalence and predictors of service disengagement in a treated epidemiological cohort of first-episode psychosis (FEP) patients. METHODS: The Early Psychosis Prevention and Intervention Centre (EPPIC) in Australia admitted 786 FEP patients from January 1998 to December 2000. Treatment at EPPIC is scheduled for 18 months. Data were collected from patients' files using a standardized questionnaire. Seven hundred four files were available; 44 were excluded, because of a non-psychotic diagnosis at endpoint (n=43) or missing data on service disengagement (n=1). Rate of service disengagement was the outcome of interest, as well as pre-treatment, baseline, and treatment predictors of service disengagement, which were examined via Cox proportional hazards models. RESULTS: 154 patients (23.3%) disengaged from service. A past forensic history (Hazard ratio [HR]=1.69; 95%CI 1.17-2.45), lower severity of illness at baseline (HR=0.59; 95%CI 0.48-0.72), living without family at discharge (HR=1.75; 95%CI 1.22-2.50) and persistence of substance use disorder during treatment (HR=2.30; 95%CI 1.45-3.66) were significant predictors of disengagement from service. CONCLUSIONS: While engagement strategies are a core element in the treatment of first-episode psychosis, particular attention should be paid to these factors associated with disengagement. Involvement of the family in the treatment process, and focusing on reduction of substance use, need to be pursued in early intervention services.
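A minimal sketch of fitting a Cox proportional hazards model of time to disengagement, using the lifelines library as an assumption (the study does not state its software); the column names and the tiny dataset are illustrative only.

```python
# Sketch of a Cox proportional hazards model for time to service
# disengagement, using the lifelines library (an assumption; the study does
# not state its software). Column names and data are illustrative only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_in_service":  [3, 18, 7, 18, 12, 5, 9, 18],   # follow-up time
    "disengaged":         [1, 0, 1, 0, 1, 1, 0, 0],       # 1 = disengaged
    "forensic_history":   [1, 0, 0, 1, 0, 1, 1, 0],
    "living_with_family": [0, 1, 1, 0, 1, 0, 0, 1],
    "substance_use":      [1, 0, 1, 0, 0, 1, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_in_service", event_col="disengaged")
cph.print_summary()   # hazard ratios with 95% confidence intervals
```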

Relevance:

100.00%

Publisher:

Abstract:

Development, analysis, design and implementation of a search-engine-style web portal for finding and locating establishments such as restaurants, bars, buffets, etc. It also aims to provide an information management system where companies in the food-service sector can publish their information and their offerings.

Relevance:

100.00%

Publisher:

Abstract:

This final degree project (TFC) focuses on the management of a project to implement a repository of digital learning objects at a university, and falls within the Project Management area of the Ingeniería Técnica Informática de Gestión degree.

Relevance:

100.00%

Publisher:

Abstract:

The system described herein represents the first example of a recommender system in digital ecosystems where agents negotiate services on behalf of small companies. The small companies compete not only on price or quality, but on a wider service-by-service composition achieved by subcontracting with other companies. The final result of these offerings depends on negotiations at the scale of millions of small companies. This scale requires new platforms for supporting digital business ecosystems, as well as related services like open-id, trust management, monitors and recommenders. This is done in the Open Negotiation Environment (ONE), an open-source platform that allows agents, on behalf of small companies, to negotiate and use the ecosystem services, and that enables the development of new agent technologies. The methods and tools of cyber engineering are necessary to build open negotiation environments that are stable, a basic condition for predictable and reliable business environments. Aiming to build stable digital business ecosystems by means of improved collective intelligence, we introduce a model of negotiation style dynamics from the point of view of computational ecology. This model inspires an ecosystem monitor as well as a novel negotiation style recommender. The ecosystem monitor provides hints to the negotiation style recommender to achieve greater stability of an open negotiation environment in a digital business ecosystem. The greater stability provides the small companies with higher predictability, and therefore better business results. The negotiation style recommender is implemented with a simulated annealing algorithm at a constant temperature, and its impact is shown by applying it to a real case of an open negotiation environment populated by Italian companies.
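As a rough sketch of what "simulated annealing at a constant temperature" means here, the snippet below runs a Metropolis-style search over candidate negotiation styles; the style labels and the instability score are placeholders standing in for whatever the ecosystem monitor would report.

```python
# Sketch of simulated annealing at constant temperature (i.e. a Metropolis
# walk) over candidate negotiation styles. The instability score is a
# placeholder for the signal an ecosystem monitor might provide.
import math
import random

STYLES = ["cooperative", "competitive", "mixed", "tit-for-tat"]

def instability(style: str) -> float:
    # Placeholder objective: lower means a more stable ecosystem.
    return {"cooperative": 0.2, "competitive": 0.9,
            "mixed": 0.4, "tit-for-tat": 0.5}[style]

def recommend(iterations: int = 1000, temperature: float = 0.1) -> str:
    current = random.choice(STYLES)
    best = current
    for _ in range(iterations):
        candidate = random.choice(STYLES)
        delta = instability(candidate) - instability(current)
        # Accept better moves always; worse moves with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
        if instability(current) < instability(best):
            best = current
    return best

print(recommend())
```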

Relevance:

100.00%

Publisher:

Abstract:

Detailed development of the approval phase of an IT project through the use of agile technologies.

Relevance:

100.00%

Publisher:

Abstract:

The sparsely spaced, highly permeable fractures of the granitic rock aquifer at Stang-er-Brune (Brittany, France) form a well-connected fracture network of high permeability but unknown geometry. Previous work based on optical and acoustic logging, together with single-hole and cross-hole flowmeter data acquired in three neighbouring boreholes (70-100 m deep), identified the most important permeable fractures crossing the boreholes and their hydraulic connections. To constrain possible flow paths by estimating the geometries of known and previously unknown fractures, we have acquired, processed and interpreted multifold single- and cross-hole GPR data using 100 and 250 MHz antennas. The GPR data processing scheme, consisting of time-zero corrections, scaling, bandpass filtering, F-X deconvolution, eigenvector filtering, muting, pre-stack Kirchhoff depth migration and stacking, was used to differentiate fluid-filled fracture reflections from source-generated noise. The final stacked and pre-stack depth-migrated GPR sections provide high-resolution images of individual fractures (dipping 30-90°) in the surroundings of each borehole (2-20 m for the 100 MHz antennas; 2-12 m for the 250 MHz antennas) in a 2D plane projection that are of superior quality to those obtained from single-offset sections. Most fractures previously identified from hydraulic testing can be correlated to reflections in the single-hole data. Several previously unknown major near-vertical fractures have also been identified away from the boreholes.
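As a small illustration of one step of such a processing chain, the sketch below applies a zero-phase Butterworth bandpass filter to a single trace with SciPy; the sampling rate and corner frequencies are illustrative assumptions, not values from the survey.

```python
# Sketch of one step of the described processing chain: zero-phase bandpass
# filtering of a GPR trace with SciPy. Sampling rate and corner frequencies
# are illustrative assumptions, not values from the survey.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2.0e9                       # assumed sampling rate: 2 GHz
t = np.arange(0, 200e-9, 1 / fs)
trace = np.random.randn(t.size)  # stand-in for a recorded GPR trace

# 4th-order Butterworth bandpass around a 100 MHz antenna centre frequency.
b, a = butter(4, [50e6, 200e6], btype="band", fs=fs)
filtered = filtfilt(b, a, trace)  # filtfilt -> zero-phase (no time shift)
```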

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Solexa/Illumina short-read ultra-high throughput DNA sequencing technology produces millions of short tags (up to 36 bases) by parallel sequencing-by-synthesis of DNA colonies. The processing and statistical analysis of such high-throughput data poses new challenges; currently a fair proportion of the tags are routinely discarded due to an inability to match them to a reference sequence, thereby reducing the effective throughput of the technology. RESULTS: We propose a novel base calling algorithm using model-based clustering and probability theory to identify ambiguous bases and code them with IUPAC symbols. We also select optimal sub-tags using a score based on information content to remove uncertain bases towards the ends of the reads. CONCLUSION: We show that the method improves genome coverage and number of usable tags as compared with Solexa's data processing pipeline by an average of 15%. An R package is provided which allows fast and accurate base calling of Solexa's fluorescence intensity files and the production of informative diagnostic plots.
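The sketch below is not the paper's algorithm, only an illustration of the two ingredients it describes: mapping per-base probabilities to an IUPAC ambiguity symbol and scoring a position's information content (which could then drive sub-tag trimming); all thresholds and probabilities are invented.

```python
# Sketch (not the paper's algorithm): map per-base probabilities to an IUPAC
# symbol when the call is ambiguous, and score a position's information
# content; low-information tails could then be trimmed from a read.
import math

IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C",
    frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("CG"): "S",
    frozenset("AT"): "W", frozenset("GT"): "K", frozenset("AC"): "M",
    frozenset("CGT"): "B", frozenset("AGT"): "D",
    frozenset("ACT"): "H", frozenset("ACG"): "V",
    frozenset("ACGT"): "N",
}

def call_base(probs: dict[str, float], min_frac: float = 0.25) -> str:
    # Keep every base carrying at least min_frac of the probability mass;
    # one base -> an unambiguous call, several -> an IUPAC ambiguity code.
    kept = frozenset(b for b, p in probs.items() if p >= min_frac)
    return IUPAC.get(kept, "N")

def information_content(probs: dict[str, float]) -> float:
    # 2 bits minus the entropy of the base distribution (0 = uninformative).
    entropy = -sum(p * math.log2(p) for p in probs.values() if p > 0)
    return 2.0 - entropy

probs = {"A": 0.48, "C": 0.45, "G": 0.05, "T": 0.02}
print(call_base(probs), information_content(probs))  # ambiguous -> 'M'
```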

Relevance:

100.00%

Publisher:

Abstract:

Background: Nowadays, combining the different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables belonging to any of the datasets. In particular, for each input variable or linear combination of input variables, we can represent the local direction of maximum growth, which allows us to identify those samples with higher or lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of the biological knowledge.
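A minimal sketch of the two-step scheme described above, assuming scikit-learn: build one kernel per data source, combine them, and run kernel PCA on the combined (precomputed) kernel; the data, kernel choice and equal weighting are illustrative assumptions.

```python
# Sketch of kernel-based integration of two data sources (assuming
# scikit-learn): one kernel per dataset, combine them, then kernel PCA on the
# combined (precomputed) kernel. Data here are random placeholders.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
n_samples = 50
expression = rng.normal(size=(n_samples, 200))   # e.g. gene expression
clinical = rng.normal(size=(n_samples, 10))      # e.g. clinical covariates

# Step 1: choose a kernel for each data source.
K1 = rbf_kernel(expression, gamma=1 / expression.shape[1])
K2 = rbf_kernel(clinical, gamma=1 / clinical.shape[1])

# Step 2: combine the kernels (unweighted average) and reduce dimensionality.
K = (K1 + K2) / 2
embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)
print(embedding.shape)   # (50, 2): samples projected onto 2 kernel PCs
```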

Relevance:

100.00%

Publisher:

Abstract:

DnaSP is a software package for a comprehensive analysis of DNA polymorphism data. Version 5 implements a number of new features and analytical methods allowing extensive DNA polymorphism analyses on large datasets. Among other features, the newly implemented methods allow for: (i) analyses on multiple data files; (ii) haplotype phasing; (iii) analyses on insertion/deletion polymorphism data; (iv) visualizing sliding window results integrated with available genome annotations in the UCSC browser.
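As a rough illustration of the kind of sliding-window statistic DnaSP reports, the sketch below computes nucleotide diversity (pi) in windows along a toy alignment; this is not DnaSP's implementation, and the sequences and window settings are invented.

```python
# Sketch of a sliding-window nucleotide diversity (pi) calculation of the kind
# DnaSP reports. Sequences and window settings are illustrative only.
from itertools import combinations

alignment = [
    "ATGCATGCATGCATGC",
    "ATGCATGAATGCATGC",
    "ATGAATGCATGCTTGC",
]

def pi(seqs: list[str]) -> float:
    # Average number of pairwise differences per site over all sequence pairs.
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

window, step = 8, 4
for start in range(0, len(alignment[0]) - window + 1, step):
    chunk = [s[start:start + window] for s in alignment]
    print(f"{start}-{start + window}: pi = {pi(chunk):.3f}")
```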