959 results for Web access characterization
Abstract:
Canopy characterization is a key factor in improving pesticide application methods in tree crops and vineyards. The development of quick, easy and efficient methods to determine the fundamental parameters used to characterize canopy structure is thus an important need. In this research, the use of ultrasonic and LIDAR sensors has been compared with the traditional manual and destructive canopy measurement procedure. For both methods, the values of key parameters such as crop height, crop width, crop volume and leaf area have been compared. The results indicate that an ultrasonic sensor is an appropriate tool to determine average canopy characteristics, while a LIDAR sensor provides more accurate and detailed information about the canopy. Good correlations have been obtained between crop volume (CVU) values measured with ultrasonic sensors and the leaf area index, LAI (R2 = 0.51). A good correlation has also been obtained between the canopy volumes measured with ultrasonic and LIDAR sensors (R2 = 0.52). Laser measurements of crop height (CHL) allow one to accurately predict the canopy volume. The proposed new technologies seem very appropriate as complementary tools to improve the efficiency of pesticide applications, although further improvements are still needed.
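The kind of agreement the abstract reports (e.g. R2 = 0.51 between ultrasonic crop volume and LAI) comes from a coefficient-of-determination calculation over paired measurements. The sketch below shows that calculation on invented numbers; the data and variable names are illustrative, not the study's.

```python
# Illustrative only: R^2 between hypothetical ultrasonic crop-volume readings
# and leaf area index (LAI) values. These numbers are NOT the study's data.

def r_squared(xs, ys):
    """Coefficient of determination for a simple linear fit of ys on xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical paired measurements (crop volume in m^3, LAI dimensionless).
cvu = [1.2, 1.8, 2.1, 2.9, 3.4, 4.0]
lai = [0.9, 1.4, 1.3, 2.2, 2.4, 3.1]
print(round(r_squared(cvu, lai), 2))
```

A value near 1 would indicate that the sensor-derived volume explains most of the variation in LAI; the study's reported 0.51 means roughly half the variation is explained.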
Abstract:
Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed with Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload large numbers of sequences to be aligned by the parallel version of T-Coffee that cannot be aligned on a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.
Abstract:
Cereal cyst nematode (CCN, Heterodera avenae) and Hessian fly (HF, Mayetiola destructor) are two major pests affecting wheat crops worldwide, including important cereal areas of Spain. Aegilops ventricosa and Ae. triuncialis were used as donors in a strategy to introduce resistance genes (RG) for these pests into hexaploid wheat (Triticum aestivum L.). Two 42-chromosome introgression lines have been derived from Ae. ventricosa: H-93-8 and H-93-33, carrying genes Cre2 and H27 conferring resistance to CCN and HF, respectively. Line TR-3531, with 42 chromosomes, has been derived from Ae. triuncialis and carries RGs conferring resistance to CCN (Cre7) and to HF (H30). Alien material has been incorporated in the H-93 lines by chromosomal substitution and recombination, while in line TR-3531 homoeologous recombination affecting small DNA fragments has played a major role. It has been demonstrated that Cre2, Cre7, H27 and H30 are major single dominant genes and not allelic to other previously described RGs. Biochemical and molecular-biology studies of the defense mechanism triggered by Cre2 and Cre7 have revealed specific induction of peroxidase and other antioxidant enzymes. In parallel to these basic studies, advanced lines carrying resistance genes for CCN and/or HF have been developed. Selection was done using molecular markers for eventually "pyramiding" resistance genes. Several isozyme and RAPD markers have been described and, currently, new markers based on transposable elements and NBS-LRR sequences are being developed. At present, two advanced lines have already been included in the Spanish Catalogue of Commercial Plant Varieties.
Abstract:
Rankings of scientific productivity are increasingly relevant, both at the individual and the institutional level. Ensuring that they are based on reliable and exhaustive information is therefore important. This study shows that the position of individuals in such rankings can change substantially when various internationally recognized bibliometric indicators are considered. As an illustration, we use the case of the ten professors in the area of 'Personalidad, Evaluación y Tratamiento Psicológico' listed in the recent analysis by Olivas-Ávila and Musi-Lechuga (Psicothema 2010, Vol. 22, No. 4, pp. 909-916).
Abstract:
The term Web 2.0 denotes a set of applications that are constantly evolving in response to the requirements users express. On the web we can find many tools developed along Web 2.0 lines: blogs, wikis, social bookmarking tools, file-sharing tools, etc. We believe the educational system cannot remain on the sidelines of this technological evolution and needs to adapt at every level. Universities, too, need to adjust to these new times, and we increasingly find training experiences based on networked collaborative work to support student learning. The work we present is an analysis of Web 2.0 tools and a compilation of good university teaching practices in developing collaborative methodologies using ICT. In addition, we offer recommendations on the use of these tools in university teaching and learning processes.
Abstract:
This article introduces a new interface for T-Coffee, a consistency-based multiple sequence alignment program. This interface provides easy and intuitive access to the most popular functionality of the package. This includes the default T-Coffee mode for protein and nucleic acid sequences, the M-Coffee mode that allows combining the outputs of other aligners, and the template-based modes of T-Coffee that deliver high-accuracy alignments using structural or homology-derived templates. The three available template modes are Expresso, for the alignment of proteins with a known 3D structure; R-Coffee, to align RNA sequences with conserved secondary structures; and PSI-Coffee, to accurately align distantly related sequences using homology extension. The new server benefits from recent improvements of the T-Coffee algorithm, can align up to 150 sequences of up to 10,000 residues, and is available from both http://www.tcoffee.org and its main mirror http://tcoffee.crg.cat.
Abstract:
This paper presents the current state and development of a prototype web-GIS (Geographic Information System) decision support platform intended for application in natural hazards and risk management, mainly for floods and landslides. This web platform uses open-source geospatial software and technologies, particularly the Boundless (formerly OpenGeo) framework and its client-side software development kit (SDK). The main purpose of the platform is to assist experts and stakeholders in the decision-making process for the evaluation and selection of different risk management strategies through an interactive participation approach, integrating a web-GIS interface with a decision support tool based on a compromise programming approach. The access rights and functionality of the platform vary depending on the roles and responsibilities of stakeholders in managing the risk. The application of the prototype platform is demonstrated on an example case study site: the municipality of Malborghetto Valbruna in north-eastern Italy, where flash floods and landslides are frequent, with major events having occurred in 2003. The preliminary feedback collected from stakeholders in the region is discussed to understand their perspectives on the proposed prototype platform.
Abstract:
BACKGROUND: Available methods to simulate nucleotide or amino acid data typically use Markov models to simulate each position independently. These approaches are not appropriate for assessing the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogenetic methods implementing the Coev model: the evaluation of coevolution scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow for the generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model. It allows users to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the Coev-web platform, freely accessible at http://coev.vital-it.ch/, and was tested in most modern web browsers.
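The contrast the abstract draws (independent per-site simulation versus simulation of dependent position pairs) can be sketched as follows. This is not the Coev model itself: the favoured pair profiles and their weight are invented for illustration, and a real implementation would evolve states along a phylogenetic tree.

```python
import random

# Illustrative sketch (NOT the Coev model): sample a pair of nucleotide
# positions from a joint distribution that favours two hypothetical
# "coevolving" profiles, (A,T) and (G,C), versus sampling each position
# independently and uniformly.

ALPHABET = "ACGT"
FAVOURED_PAIRS = {("A", "T"), ("G", "C")}  # hypothetical co-varying profiles

def joint_weights(favoured_bonus=10.0):
    """Unnormalised weights over all 16 nucleotide pairs."""
    return {
        (x, y): favoured_bonus if (x, y) in FAVOURED_PAIRS else 1.0
        for x in ALPHABET for y in ALPHABET
    }

def simulate_coevolving_pairs(n_samples, rng):
    """Dependent sampling: favoured pairs come up far more often."""
    weights = joint_weights()
    pairs = list(weights)
    return rng.choices(pairs, weights=[weights[p] for p in pairs], k=n_samples)

def simulate_independent_pairs(n_samples, rng):
    """Independent sampling: each position drawn uniformly on its own."""
    return [(rng.choice(ALPHABET), rng.choice(ALPHABET))
            for _ in range(n_samples)]

rng = random.Random(42)
dependent = simulate_coevolving_pairs(1000, rng)
frac_favoured = sum(p in FAVOURED_PAIRS for p in dependent) / len(dependent)
print(f"favoured-pair frequency under dependence: {frac_favoured:.2f}")
```

Under independent uniform sampling the two favoured pairs would appear only about 2/16 of the time; the dependent sampler makes them dominate, which is exactly the signal that coevolution-detection methods are built to find.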
Abstract:
Instructor and student beliefs, attitudes and intentions toward contributing to local open courseware (OCW) sites have been investigated through campus-wide surveys at Universidad Politecnica de Valencia and the University of Michigan. In addition, at the University of Michigan, faculty have been queried about their participation in open access (OA) publishing. We compare the instructor and student data concerning OCW between the two institutions, and introduce the investigation of open access publishing in relation to open courseware publishing.
Abstract:
E-learning, understood as the intensive use of Information and Communication Technologies in (mainly, but not only) distance education, has radically changed the meaning of the latter. E-learning is an overused term which has been applied to any use of technology in education. Today, the most widely accepted meaning of e-learning coincides with the fourth generation described by Taylor (1999), where there is an asynchronous process that allows students and teachers to interact in an educational process expressly designed in accordance with these principles. We prefer to speak of Internet-Based Learning or, better still, Web-Based Learning, for example, to explain the fact that distance education is carried out using the Internet, with the appearance of the virtual learning environment concept: a web space where the teaching and learning process is generated and supported (Sangrà, 2002). This entails overcoming the barriers of space and time of brick-and-mortar education (although we prefer the term face-to-face) or of classical distance education based on broadcasting, and adopting a completely asynchronous model that allows access to education by many more users, at any level (including secondary education, but primarily higher education and lifelong learning).
Abstract:
Drug discovery is a continuous process in which researchers are constantly trying to find new and better drugs for the treatment of various conditions. Alzheimer's disease, a neurodegenerative disease mostly affecting the elderly, has a complex etiology with several possible drug targets. Some of these targets have been known for years, while other new targets and theories have emerged more recently. Cholinesterase inhibitors are the major class of drugs currently used for the symptomatic treatment of Alzheimer's disease. In the Alzheimer's disease brain there is a deficit of acetylcholine and an impairment in signal transmission. Acetylcholinesterase has therefore been the main target, as this is the main enzyme hydrolysing acetylcholine and ending neurotransmission. It is believed that by inhibiting acetylcholinesterase, cholinergic signalling can be enhanced and the cognitive symptoms that arise in Alzheimer's disease can be improved. Butyrylcholinesterase, the second enzyme of the cholinesterase family, has more recently attracted interest among researchers. Its function is still not fully known, but it is believed to play a role in several diseases, one of them being Alzheimer's disease. In this contribution the aim has primarily been to identify butyrylcholinesterase inhibitors to be used as drug molecules or molecular probes in the future. Both synthetic and natural compounds in diverse and targeted screening libraries have been used for this purpose. The active compounds have been further characterized regarding their potencies and cytotoxicity and, in two of the publications, the inhibitors' ability to also inhibit Aβ aggregation, in an attempt to discover bifunctional compounds. Further, in silico methods were used to evaluate the binding positions of the active compounds within the enzyme targets, mostly to differentiate between selectivity towards acetylcholinesterase and butyrylcholinesterase, but also to assess the structural features required for enzyme inhibition. We also evaluated the compounds, active and non-active, in chemical space using the web-based tool ChemGPS-NP, to try to determine the relevant chemical space occupied by cholinesterase inhibitors. In this study, we have succeeded in finding potent butyrylcholinesterase inhibitors with a diverse set of structures, nine chemical classes in total. In addition, some of the compounds are bifunctional, as they also inhibit Aβ aggregation. We believe the data gathered from all publications regarding the chemical space occupied by butyrylcholinesterase inhibitors will give insight into the chemically active space occupied by this type of inhibitor and will hopefully facilitate future screening and lead to an even deeper knowledge of butyrylcholinesterase inhibitors.
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are being increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, facilitating scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches that are capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving.
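The uniform CRUD interface described above can be sketched as a minimal in-memory resource store keyed by URI. The mapping of operations to HTTP verbs in the comments follows common REST convention; the class and method names are illustrative, not part of the thesis, and a real service would sit behind an HTTP server and return status codes.

```python
# Minimal sketch of a uniform CRUD interface over URI-identified resources,
# in the spirit of REST's POST/GET/PUT/DELETE mapping. Illustrative only.

class ResourceStore:
    def __init__(self):
        self._resources = {}  # URI -> representation (a dict)

    def create(self, uri, representation):   # POST (or PUT to a new URI)
        if uri in self._resources:
            raise KeyError(f"{uri} already exists")
        self._resources[uri] = representation

    def retrieve(self, uri):                 # GET
        return self._resources[uri]

    def update(self, uri, representation):   # PUT to an existing URI
        if uri not in self._resources:
            raise KeyError(f"{uri} not found")
        self._resources[uri] = representation

    def delete(self, uri):                   # DELETE
        del self._resources[uri]

# Statelessness: each call carries everything needed to serve it -- the URI
# identifies the resource, and no session state is kept between requests.
store = ResourceStore()
store.create("/rooms/101", {"status": "free"})
store.update("/rooms/101", {"status": "booked"})
print(store.retrieve("/rooms/101"))
```

The uniformity is the point: a client only ever needs these four operations, so any resource the service exposes is manipulated the same way, which is what makes REST interfaces easy to scale and compose.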
We have used a UML class diagram and a UML state machine diagram, with additional design constraints, to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which helps with requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and other elements of the software development environment by tracing back and forth the unfulfilled requirements of the service. Information about service actors is also included in the design models; this is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces will be REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the web ontology language, OWL 2, so that they can be part of the semantic web. These interfaces are used with OWL 2 reasoners to check for unsatisfiable concepts, which result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners. The third contribution of this thesis is the verification and validation of REST web services. We have used model checking techniques with the UPPAAL model checker for this purpose.
Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata, and using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace back the unfulfilled service goals to detect faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions of methods constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be manually inserted by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former example presents a simple explanation of the approach, and the latter worked example shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
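A generated method skeleton with pre- and post-conditions, as described above, might look in spirit like the following sketch. The hotel-booking names, states and conditions here are hypothetical, and the thesis's tool targets REST service code rather than this exact form; the sketch only shows the division of labour the abstract describes: the precondition guards the caller, the postcondition checks the developer's implementation.

```python
# Hypothetical sketch of a generated skeleton with pre- and post-conditions
# (names and conditions invented for illustration, not the thesis's output).

class BookingError(Exception):
    """Raised when a stateful operation is invoked in the wrong state."""

class Room:
    def __init__(self, number):
        self.number = number
        self.guest = None
        self.state = "free"  # behavioral states: "free" -> "booked"

    def book(self, guest):
        # Precondition: constrains the caller to invoke the stateful
        # service only when the transition is legal.
        if self.state != "free":
            raise BookingError(f"room {self.number} is not free")
        # --- body to be filled in by the service developer ---
        self.guest = guest
        self.state = "booked"
        # Postcondition: constrains the developer to establish the
        # promised resulting state.
        assert self.state == "booked" and self.guest == guest

room = Room(101)
room.book("Alice")
print(room.state)
```

Calling `book` a second time violates the precondition and raises `BookingError`, which is exactly the class of illegal request sequence that the behavioral interface (and the generated monitors) are meant to rule out.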
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This exploratory, descriptive action research study is based on a survey of a convenience sample consisting of 172 college and university marketing students and 5 professors who were experienced in teaching in an internet-based environment. The students surveyed were studying e-commerce and international business in 3rd and 4th year classes at a leading university in Ontario, and e-commerce in 5th semester classes at a leading college. These classes were taught using a hybrid teaching style with the contribution of a large website that contained pertinent text and audio material. Hybrid teaching employs web-based course materials (some in the form of Learning Objects) to deliver curriculum material both during the attended lectures and also for students accessing the course web page outside of class hours. The survey took the form of an online questionnaire. The research questions explored in this study were: 1. What factors influence the students' ability to access and learn from web-based course content? 2. How likely are the students to use selected elements of internet-based curriculum for learning academic content? 3. What is the preferred physical environment to facilitate learning in a hybrid environment? 4. How effective are selected teaching/learning strategies in a hybrid environment? The findings of this study suggest that students are very interested in being part of the learning process by contributing to a course web site. Specifically, students are interested in audio content being one of the formats of online course material, and have an interest in being part of the creation of small audio clips to be used in class.