808 results for new web based frameworks
Abstract:
Public administrations in advanced countries are carrying out initiatives to manage risk and emergency information and communication through websites conceived and designed for that purpose. These sites are intended to provide information to citizens in case of emergency, but they also contain information that is useful to experts and the authorities. In this work, and in light of Spanish legislation on emergencies, the sites of the Catalan autonomous administration and of the Government of Spain are compared with the sites of three reference countries: the United States, France and the United Kingdom. At the same time, a simple methodology is proposed for carrying out a comparison that makes it possible to draw conclusions and put forward recommendations on an aspect of information management that can prove key to saving property and human lives.
Abstract:
ExPASy (http://www.expasy.org) has a worldwide reputation as one of the main bioinformatics resources for proteomics. It has now evolved into an extensible and integrative portal giving access to many scientific resources, databases and software tools in different areas of the life sciences. Scientists can now seamlessly access a wide range of resources in many different domains, such as proteomics, genomics, phylogeny/evolution, systems biology, population genetics, transcriptomics, etc. The individual resources (databases, web-based and downloadable software tools) are hosted in a 'decentralized' way by different groups of the SIB Swiss Institute of Bioinformatics and partner institutions. Specifically, a single web portal provides a common entry point to a wide range of resources developed and operated by different SIB groups and external institutions. The portal features a search function across 'selected' resources. Additionally, the availability and usage of resources are monitored. The portal is aimed at both expert users and people who are not familiar with a specific domain in life sciences. The new web interface provides, in particular, visual guidance for newcomers to ExPASy.
Abstract:
This final year project presents the design principles and prototype implementation of BIMS (Biomedical Information Management System), a flexible software system that provides an infrastructure to manage all the information required by biomedical research projects. The BIMS project was initiated with the motivation to solve several limitations in the medical data acquisition of some research projects in which Universitat Pompeu Fabra takes part. These limitations, stemming from the lack of control mechanisms to constrain the information submitted by clinicians, degrade data quality. BIMS can easily be adapted to manage information for a wide variety of clinical studies and is not limited to a given clinical specialty. The software can manage both textual information, such as clinical data (measurements, demographics, diagnostics, etc.), and several kinds of medical images (magnetic resonance imaging, computed tomography, etc.). Moreover, BIMS provides a web-based graphical user interface and is designed to be deployed in a distributed and multiuser environment. It is built on top of open source software products and frameworks. Specifically, BIMS has been used to represent all the clinical data currently used within the CardioLab platform (an ongoing project managed by Universitat Pompeu Fabra), demonstrating that it is a solid software system that could fulfill the requirements of a real production environment.
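The abstract describes a flexible data model covering both textual clinical data and several imaging modalities. A minimal sketch of how such a study-agnostic record structure could look is given below; the class and field names are hypothetical and are not the actual BIMS schema.

```python
# Hypothetical sketch of a flexible clinical-study data model in the spirit of
# the BIMS description; class and field names are illustrative, not the actual
# BIMS schema.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Measurement:
    name: str          # e.g. "systolic_blood_pressure"
    value: str         # stored as text; validation rules constrain the format
    unit: str = ""


@dataclass
class ImagingStudy:
    modality: str      # e.g. "MRI", "CT"
    file_path: str     # reference to the stored image data


@dataclass
class PatientRecord:
    patient_id: str
    demographics: Dict[str, str] = field(default_factory=dict)
    measurements: List[Measurement] = field(default_factory=list)
    images: List[ImagingStudy] = field(default_factory=list)

    def add_measurement(self, name: str, value: str, unit: str = "") -> None:
        """Add a measurement after a (hypothetical) validation check."""
        if not value:
            raise ValueError("empty values are rejected to protect data quality")
        self.measurements.append(Measurement(name, value, unit))
```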
Abstract:
The paper presents a new model based on the basic Maximum Capture model, MAXCAP. The New Chance Constrained Maximum Capture model introduces a stochastic threshold constraint, which recognises the fact that a facility can be open only if a minimum level of demand is captured. A metaheuristic based on the MAX-MIN Ant System and a tabu search procedure is presented to solve the model. This is the first time that the MAX-MIN Ant System has been adapted to solve a location problem. Computational experience and an application to a 55-node network are also presented.
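As a rough illustration of the kind of metaheuristic described, the sketch below applies a MAX-MIN Ant System-style construction with bounded pheromone values to a simplified facility-capture objective. The objective function, parameters and data layout are assumptions made for illustration, not the paper's model, and the tabu search component is omitted.

```python
# Minimal sketch of a MAX-MIN Ant System for a maximum-capture-style facility
# location problem; the objective and parameters are placeholders, not the
# model or data from the paper.
import random


def mmas_location(candidates, demand, p_open, n_ants=20, n_iter=100,
                  rho=0.1, tau_min=0.1, tau_max=1.0, seed=0):
    """Select p_open facilities from `candidates` to maximise captured demand.

    `demand[j][i]` is the demand of node j captured if facility i is open
    (a simplification of the capture mechanism in the actual model).
    """
    rng = random.Random(seed)
    tau = {i: tau_max for i in candidates}          # pheromone per candidate site
    best_sol, best_val = None, float("-inf")

    def captured(solution):
        # Each demand node is captured by the best open facility for it.
        return sum(max(demand[j][i] for i in solution) for j in demand)

    for _ in range(n_iter):
        for _ in range(n_ants):
            # Probabilistic construction: pick p_open sites weighted by pheromone.
            pool, sol = list(candidates), []
            for _ in range(p_open):
                weights = [tau[i] for i in pool]
                choice = rng.choices(pool, weights=weights, k=1)[0]
                sol.append(choice)
                pool.remove(choice)
            val = captured(sol)
            if val > best_val:
                best_sol, best_val = sol, val
        # Evaporation plus deposit on the best-so-far solution, with pheromone
        # bounded in [tau_min, tau_max] as in MAX-MIN Ant System.
        for i in candidates:
            tau[i] = max(tau_min, (1 - rho) * tau[i])
        for i in best_sol:
            tau[i] = min(tau_max, tau[i] + rho * tau_max)
    return best_sol, best_val


# Tiny example: 3 candidate sites, 2 demand nodes, open 1 facility.
demand = {"d1": {"a": 5, "b": 2, "c": 0}, "d2": {"a": 1, "b": 4, "c": 3}}
print(mmas_location(["a", "b", "c"], demand, p_open=1, n_iter=20))
```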
Abstract:
The Gene Ontology (GO) (http://www.geneontology.org) is a community bioinformatics resource that represents gene product function through the use of structured, controlled vocabularies. The number of GO annotations of gene products has increased due to curation efforts among GO Consortium (GOC) groups, including focused literature-based annotation and ortholog-based functional inference. The GO ontologies continue to expand and improve as a result of targeted ontology development, including the introduction of computable logical definitions and development of new tools for the streamlined addition of terms to the ontology. The GOC continues to support its user community through the use of e-mail lists, social media and web-based resources.
Abstract:
Work Internship Placements (WIP) is a new, transversal enterprise internship programme focused on quality improvement, academic control and the satisfaction of collaborating enterprises. The programme is addressed to the engineering students of the Polytechnic School at the University of Girona (UdG) in Spain. The fundamental WIP infrastructure combines a web-based intranet platform, which provides a complete set of WIP tools, with a protocol of procedures and tasks that are observed and followed at all internship stages by every participating agent, i.e. enterprises, students, coaching professors and administrative staff. Our new programme is centered on a broader, more holistic internship placement procedure than the traditional “career and academic goals” approach. The WIP programme has been found to be a valuable asset in addressing enterprise and student needs in the experiential project.
Abstract:
Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band, trying to preserve spectral signatures and improve the spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. But in these methods, features from electromagnetic spectrum regions not covered by the multispectral sensors are injected into them, and the physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as over-injection of resolution into the images and slightly modified spectral signatures in some features. The authors present a technique that takes into account the physical electromagnetic spectrum responses of the sensors during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
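To make the detail-injection idea concrete, the following sketch injects high-frequency panchromatic detail into each multispectral band weighted by an assumed per-band spectral overlap factor. A Gaussian low-pass filter stands in for the wavelet decomposition used by the authors, so this is only an approximation of the general approach, not their method.

```python
# Hedged sketch of spectrally weighted detail-injection pansharpening.
# The spectral weights are assumed to be known per band; a Gaussian low-pass
# replaces the wavelet decomposition, and the gain model is a simplification.
import numpy as np
from scipy.ndimage import gaussian_filter


def fuse(pan, ms_bands, spectral_weights, sigma=2.0):
    """pan: 2-D panchromatic image; ms_bands: list of 2-D multispectral bands
    resampled to the pan grid; spectral_weights: per-band fraction of the pan
    spectral range covered by that band (values in [0, 1]), i.e. the physically
    motivated factor discussed in the abstract."""
    detail = pan - gaussian_filter(pan, sigma)      # high-frequency pan detail
    fused = []
    for band, w in zip(ms_bands, spectral_weights):
        # Scale injected detail to the band's radiometry and weight it by the
        # band's spectral overlap with the panchromatic sensor.
        gain = band.mean() / max(pan.mean(), 1e-9)
        fused.append(band + w * gain * detail)
    return fused
```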
Abstract:
The GO annotation dataset provided by the UniProt Consortium (GOA: http://www.ebi.ac.uk/GOA) is a comprehensive set of evidence-based associations between terms from the Gene Ontology resource and UniProtKB proteins. Currently supplying over 100 million annotations to 11 million proteins in more than 360,000 taxa, this resource has increased 2-fold over the last 2 years and has benefited from a wealth of checks to improve annotation correctness and consistency, as well as now supplying greater information content enabled by GO Consortium annotation format developments. Detailed, manual GO annotations obtained from the curation of peer-reviewed papers are directly contributed by all UniProt curators and supplemented with manual and electronic annotations from 36 model organism and domain-focused scientific resources. The inclusion of high-quality, automatic annotation predictions ensures the UniProt GO annotation dataset supplies functional information to a wide range of proteins, including those from poorly characterized, non-model organism species. UniProt GO annotations are freely available in a range of formats accessible by both file downloads and web-based views. In addition, the introduction of a new, normalized file format in 2010 has made for easier handling of the complete UniProt-GOA dataset.
Abstract:
OBJECTIVE: To describe a method to obtain a profile of the duration and intensity (speed) of walking periods over 24 hours in women under free-living conditions. DESIGN: A new method based on accelerometry was designed for analyzing walking activity. In order to take into account inter-individual variability of acceleration, an individual calibration process was used. Different experiments were performed to highlight the variability of acceleration vs walking speed relationship, to analyze the speed prediction accuracy of the method, and to test the assessment of walking distance and duration over 24-h. SUBJECTS: Twenty-eight women were studied (mean+/-s.d.) age: 39.3+/-8.9 y; body mass: 79.7+/-11.1 kg; body height: 162.9+/-5.4 cm; and body mass index (BMI) 30.0+/-3.8 kg/m(2). RESULTS: Accelerometer output was significantly correlated with speed during treadmill walking (r=0.95, P<0.01), and short unconstrained walks (r=0.86, P<0.01), although with a large inter-individual variation of the regression parameters. By using individual calibration, it was possible to predict walking speed on a standard urban circuit (predicted vs measured r=0.93, P<0.01, s.e.e.=0.51 km/h). In the free-living experiment, women spent on average 79.9+/-36.0 (range: 31.7-168.2) min/day in displacement activities, from which discontinuous short walking activities represented about 2/3 and continuous ones 1/3. Total walking distance averaged 2.1+/-1.2 (range: 0.4-4.7) km/day. It was performed at an average speed of 5.0+/-0.5 (range: 4.1-6.0) km/h. CONCLUSION: An accelerometer measuring the anteroposterior acceleration of the body can estimate walking speed together with the pattern, intensity and duration of daily walking activity.
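The individual calibration step amounts to fitting a subject-specific relationship between accelerometer output and walking speed and then applying it to free-living data. The sketch below shows that idea with a simple linear fit on invented calibration points; the paper's actual calibration protocol and model form may differ.

```python
# Minimal sketch of subject-specific calibration: fit a per-subject linear
# relation between accelerometer output and treadmill walking speed, then use
# it to predict speed from free-living accelerometer counts. Numbers are
# invented for illustration.
import numpy as np


def calibrate(accel_counts, speeds_kmh):
    """Return slope and intercept of the subject-specific speed model."""
    slope, intercept = np.polyfit(accel_counts, speeds_kmh, 1)
    return slope, intercept


def predict_speed(accel_counts, slope, intercept):
    return slope * np.asarray(accel_counts) + intercept


# Example with made-up calibration points (treadmill speeds 3-6 km/h):
counts = np.array([120, 160, 210, 260])
speeds = np.array([3.0, 4.0, 5.0, 6.0])
m, b = calibrate(counts, speeds)
print(predict_speed([150, 230], m, b))   # predicted km/h for two free-living epochs
```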
Abstract:
The function of DNA-binding proteins is controlled not just by their abundance, but mainly at the level of their activity in terms of their interactions with DNA and protein targets. Moreover, the affinity of such transcription factors to their target sequences is often controlled by co-factors and/or modifications that are not easily assessed from biological samples. Here, we describe a scalable method for monitoring protein-DNA interactions on a microarray surface. This approach was designed to determine the DNA-binding activity of proteins in crude cell extracts, complementing conventional expression profiling arrays. Enzymatic labeling of DNA enables direct normalization of the protein binding to the microarray, allowing the estimation of relative binding affinities. Using DNA sequences covering a range of affinities, we show that the new microarray-based method yields binding strength estimates similar to low-throughput gel mobility-shift assays. The microarray is also of high sensitivity, as it allows the detection of a rare DNA-binding protein from breast cancer cells, the human tumor suppressor AP-2. This approach thus mediates precise and robust assessment of the activity of DNA-binding proteins and takes present DNA-binding assays to a high throughput level.
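The key normalization step is to relate the protein-binding signal at each spot to the amount of labeled DNA present, so binding strengths can be compared across sequences. The sketch below illustrates that ratio-based normalization with invented signal values; it is not the authors' processing pipeline.

```python
# Hedged sketch of the normalization idea: divide the protein-binding signal at
# each spot by the (enzymatically labeled) DNA signal, then express the result
# relative to a reference probe to obtain relative binding affinities.
# Signal values below are invented.
import numpy as np


def relative_affinity(protein_signal, dna_signal, reference_index=0):
    """Return binding signal per unit of spotted DNA, scaled to a reference probe."""
    protein_signal = np.asarray(protein_signal, dtype=float)
    dna_signal = np.asarray(dna_signal, dtype=float)
    normalized = protein_signal / dna_signal           # per-spot normalization
    return normalized / normalized[reference_index]    # relative to reference sequence


print(relative_affinity([850.0, 400.0, 95.0], [1000.0, 950.0, 1020.0]))
```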
Abstract:
This is the second edition of the compendium. Since the first edition, a number of important initiatives have been launched in the shape of large projects targeting integration of research infrastructure and new technology for toxicity studies and exposure monitoring. The demand for research in the area of human health and environmental safety management of nanotechnologies has been present for a decade and has been identified by several landmark reports and studies. Several guidance documents have been published. It is not the intention of this compendium to report on these, as they are widely available. It is also not the intention to publish scientific papers and research results, as this task is covered by scientific conferences and the peer-reviewed press. The intention of the compendium is to bring together researchers, create synergy in their work, and establish links and communication between them, mainly during the actual research phase before publication of results. To this end, we find it useful to emphasize the communication of projects' strategic aims, extensive coverage of specific work objectives and of the methods used in research, strengthening human capacities and laboratory infrastructure, and supporting collaboration for common goals and the joint elaboration of future plans, without compromising scientific publication potential or IP rights. These targets are far from being achieved with the publication in its present shape. We shall continue working, though, and hope, with the assistance of the research community, to make significant progress. The publication will take the shape of a dynamic, frequently updated, web-based document available free of charge to all interested parties. Researchers in this domain are invited to join the effort by communicating the work being done.
Abstract:
The main goal of CleanEx is to provide access to public gene expression data via unique gene names. A second objective is to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-data set comparisons. A consistent and up-to-date gene nomenclature is achieved by associating each single experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used. These targets are then mapped at regular intervals to the growing and evolving catalogues of human genes and genes from model organisms. The completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality control information for various types of experimental resource, such as cDNA clones or Affymetrix probe sets. The web-based query interfaces offer access to individual entries via text string searches or quantitative expression criteria. CleanEx is accessible at: http://www.cleanex.isb-sib.ch/.
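The core of CleanEx is a periodically rebuilt index that links stable target identifiers, their current gene assignments and the experiments that used them. The sketch below illustrates such a cross-reference index with invented identifiers; it is not the CleanEx implementation.

```python
# Hedged sketch of the cross-referencing idea: experimental targets carry a
# permanent identifier, and a periodically rebuilt index maps current gene
# names to all experiments whose targets hit them. Identifiers and mapping
# data are invented for illustration.
from collections import defaultdict


def build_gene_index(target_to_genes, target_to_experiments):
    """target_to_genes: target id -> current gene symbols (latest mapping run);
    target_to_experiments: target id -> experiment ids using that target."""
    index = defaultdict(set)
    for target, genes in target_to_genes.items():
        for gene in genes:
            index[gene].update(target_to_experiments.get(target, ()))
    return index


index = build_gene_index(
    {"TARGET:0001": ["TP53"], "TARGET:0002": ["TP53", "BRCA1"]},
    {"TARGET:0001": ["expA"], "TARGET:0002": ["expB", "expC"]},
)
print(sorted(index["TP53"]))   # experiments cross-referenced to TP53
```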
Abstract:
The Learning Affect Monitor (LAM) is a new computer-based assessment system integrating basic dimensional evaluation and discrete description of affective states in daily life, based on an autonomous adapting system. Subjects evaluate their affective states according to a tridimensional space (valence and activation circumplex as well as global intensity) and then qualify it using up to 30 adjective descriptors chosen from a list. The system gradually adapts to the user, enabling the affect descriptors it presents to be increasingly relevant. An initial study with 51 subjects, using a 1 week time-sampling with 8 to 10 randomized signals per day, produced n = 2,813 records with good reliability measures (e.g., response rate of 88.8%, mean split-half reliability of .86), user acceptance, and usability. Multilevel analyses show circadian and hebdomadal patterns, and significant individual and situational variance components of the basic dimension evaluations. Validity analyses indicate sound assignment of qualitative affect descriptors in the bidimensional semantic space according to the circumplex model of basic affect dimensions. The LAM assessment module can be implemented on different platforms (palm, desk, mobile phone) and provides very rapid and meaningful data collection, preserving complex and interindividually comparable information in the domain of emotion and well-being.
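The abstract notes that the system gradually adapts so that the affect descriptors it presents become increasingly relevant to the user. One simple way such adaptation could work is to rank descriptors by how often the user has selected them, as in the hypothetical sketch below; the actual LAM adaptation rule is not specified here.

```python
# Hypothetical sketch of an adaptive descriptor list: descriptors the user
# selects more often are ranked higher and shown first. This is only an
# illustration of the adaptation idea, not the LAM algorithm.
from collections import Counter


class DescriptorAdapter:
    def __init__(self, descriptors, shown=30):
        self.counts = Counter({d: 0 for d in descriptors})
        self.shown = shown

    def record_selection(self, descriptors):
        self.counts.update(descriptors)

    def presented_list(self):
        # Most frequently chosen descriptors first, up to the display limit.
        return [d for d, _ in self.counts.most_common(self.shown)]


adapter = DescriptorAdapter(["calm", "tense", "joyful", "tired"], shown=3)
adapter.record_selection(["calm", "joyful"])
adapter.record_selection(["calm"])
print(adapter.presented_list())   # 'calm' and 'joyful' now rank first
```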
Abstract:
This article reflects on the training requirements for professionals demanded by the knowledge society. One of the most important objectives of the university in the knowledge society is to train competent professionals who have sufficient intellectual tools to face the uncertainty of information, the awareness that it has a short-term expiration date, and the anxiety this causes. Beyond that, they must also be able to define and create the working tools with which they will give meaning and effectiveness to this changeable and mutating knowledge. For this reason, the European Higher Education Area prioritizes the transversal competence of collaborative work, with the aim of promoting autonomous, committed learning adapted to the new needs of twenty-first-century enterprises. In this context, the theoretical framework underlying the work developed on the ACME computing platform is presented, which combines collaborative work with blended learning. Likewise, some examples of wikis, a paradigm of collaborative work, created in courses taught by the Universitat de Girona within the ACME virtual space, are described in detail.
Abstract:
A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97.9 percent for flood region 2, and 92.4 to 96.0 percent for flood region 3. The regression equations are applicable only to stream sites in Iowa with flows not significantly affected by regulation, diversion, channelization, backwater, or urbanization and with basin characteristics within the range of those used to develop the equations. These regression equations will be implemented within the U.S. Geological Survey StreamStats web-based geographic information system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the eight selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites are also provided by the web-based tool. StreamStats also allows users to click on any streamgage in Iowa to obtain the estimates computed for these eight selected statistics.
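For readers unfamiliar with the underlying flood-frequency computation, the sketch below fits a Pearson Type III distribution to the log10 of annual peak discharges by the method of moments and evaluates quantiles at chosen annual exceedance probabilities. It omits the expected moments algorithm, the regional skew weighting, and the low-outlier handling used in the actual study, and the peak-discharge values are invented.

```python
# Minimal sketch of a log-Pearson Type III flood-frequency fit (method of
# moments); a simplified stand-in for the study's expected-moments analysis.
import numpy as np
from scipy import stats


def log_pearson3_quantiles(annual_peaks_cfs, exceedance_probs):
    logs = np.log10(np.asarray(annual_peaks_cfs, dtype=float))
    mean, std = logs.mean(), logs.std(ddof=1)
    skew = stats.skew(logs, bias=False)
    # Non-exceedance probability = 1 - annual exceedance probability.
    quantiles = stats.pearson3.ppf(1 - np.asarray(exceedance_probs),
                                   skew, loc=mean, scale=std)
    return 10 ** quantiles   # back-transform to discharge units


peaks = [3200, 4100, 2800, 5100, 6100, 2500, 4800, 3900, 7200, 3100]
print(log_pearson3_quantiles(peaks, [0.5, 0.1, 0.01]))  # 2-, 10-, 100-year floods
```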