939 results for Web resources
Abstract:
This paper reports the design of an input-triggered polymorphic ASIC for an H.264 baseline decoder. Hardware polymorphism is achieved by selectively reusing hardware resources at the system and module levels. The complete design is carried out using ESL design tools, following a methodology that maintains consistency in testing and verification throughout the design flow. The proposed design can support frame sizes from QCIF to 1080p.
Abstract:
Medicinal and aromatic plants (MAPs) are an integral part of our biodiversity. In the majority of MAP-rich countries, wild collection is a livelihood option for a large number of rural people, and MAPs play a significant role in the socio-economic development of their communities. Recent concern over the alarming status of wild MAP resources and raw-material quality, as well as the social exploitation of rural communities, has led to the idea of certification for MAP resource conservation and management. While MAP certification addresses the environmental, social and economic perspectives of MAP resources, it also ensures multi-stakeholder participation in the improvement of the MAP sector. This paper presents an overview of MAP certification encompassing its different parameters, the current scenario (in the Indian context), implementation strategies, and stakeholders' role in MAP conservation. It also highlights Indian initiatives in this direction.
Abstract:
The identification of sequence (amino acid or nucleotide) motifs occurring in a particular order in biological sequences has proved to be of interest. This paper describes a computing server, SSMBS, which can locate and display the occurrences of user-defined biologically important sequence motifs (a maximum of five) present in a specific order in protein and nucleotide sequences. While the server can efficiently locate motifs specified using regular expressions, it can also find occurrences of long and complex motifs. The computation is carried out by an algorithm developed using the concept of quantifiers in regular expressions. The web server is available to users around the clock at http://dicsoft1.physics.iisc.ernet.in/ssmbs/.
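To illustrate the underlying idea (a minimal sketch, not SSMBS's actual implementation), ordered motif search can be expressed by joining the individual motif patterns with non-greedy quantifiers; the motifs and sequence below are invented for illustration:

```python
import re

def find_ordered_motifs(sequence, motifs):
    """Find all occurrences of the motifs appearing in the given order.

    Each motif is a regular-expression pattern (assumed free of its own
    capturing groups). A non-greedy gap .*? between consecutive motifs
    enforces the order while allowing any intervening residues.
    Returns the span of each motif for every hit.
    """
    pattern = ".*?".join(f"({m})" for m in motifs)
    return [
        [m.span(i + 1) for i in range(len(motifs))]
        for m in re.finditer(pattern, sequence)
    ]

# Hypothetical protein motifs and sequence, for illustration only.
print(find_ordered_motifs("MKTAYIAKQRNDEGWSHPQFEK", ["N[DE]EG", "W.HPQ"]))
# -> [[(10, 14), (14, 19)]]
```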
Abstract:
Business processes and application functionality are becoming available as internal web services inside enterprise boundaries, as well as commercial web services from enterprise solution vendors and web services marketplaces. Typically, multiple web service providers offer services capable of fulfilling a particular functionality, although with different Quality of Service (QoS). Dynamic creation of business processes requires composing an appropriate set of web services that best suits the current need. This paper presents a novel combinatorial auction approach to QoS-aware dynamic web services composition. Such an approach enables not only stand-alone web services but also composite web services to be part of a business process. The combinatorial auction leads to an integer programming formulation for the web services composition problem. An important feature of the model is the incorporation of service level agreements. We describe a software tool, QWESC, for QoS-aware web services composition based on the proposed approach.
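The abstract does not give the paper's exact model, but a combinatorial auction for composition typically reduces to a set-partitioning style integer program. As a generic sketch (all symbols are assumptions, not the paper's notation): bid j offers a bundle S_j of the required tasks T at cost c_j with an aggregate QoS contribution q_j, and the binary variable x_j decides whether the bid wins:

```latex
\begin{align*}
\min\; & \textstyle\sum_{j} c_j x_j
  && \text{total cost of winning bids} \\
\text{s.t.}\; & \textstyle\sum_{j:\, t \in S_j} x_j = 1 \quad \forall t \in T
  && \text{each task covered by exactly one bid} \\
& \textstyle\sum_{j} q_j x_j \ge Q
  && \text{aggregate QoS meets the SLA threshold } Q \\
& x_j \in \{0,1\} \quad \forall j
  && \text{bid } j \text{ is either accepted or rejected}
\end{align*}
```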
Abstract:
The role of lectins in mediating cancer metastasis, apoptosis, and various other signaling events has been well established in the past few years. Data on various aspects of the role of lectins in cancer are being accumulated at a rapid pace. The data on lectins available in the literature are so diverse that it becomes difficult and time-consuming, if not impossible, to comprehend the advances in various areas and obtain the maximum benefit. Not only do the lectins vary significantly in their individual functional roles, but they are also diverse in their sequences, structures, binding site architectures, quaternary structures, carbohydrate affinities and specificities, and potential applications. An organization of these seemingly independent data into a common framework is essential in order to make effective use of all the data towards understanding the roles of different lectins in different aspects of cancer and any resulting applications. An integrated knowledge base (CancerLectinDB), together with appropriate analytical tools, has therefore been developed for lectins relevant to any aspect of cancer, by collating and integrating diverse data. This database is unique in providing sequence, structural, and functional annotations for lectins from all known sources in cancer and is expected to be a useful addition to the glycan-related resources now available to the community. The database has been implemented using MySQL on a Linux platform and web-enabled using Perl-CGI and Java tools. Data for individual lectins cover taxonomic, biochemical, domain architecture, molecular sequence and structural details, as well as carbohydrate specificities. Extensive links have also been provided to relevant bioinformatics resources and analytical tools. The availability of diverse data integrated into a common framework is expected to be of high value for various studies on lectin biology in cancer.
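As a rough illustration of the categories the abstract lists per lectin (field names here are hypothetical, not CancerLectinDB's actual schema), one record might be modelled as:

```python
from dataclasses import dataclass, field

@dataclass
class LectinRecord:
    """Hypothetical shape of one CancerLectinDB entry (illustrative only)."""
    name: str
    taxonomy: str                                   # source organism / lineage
    sequence: str                                   # molecular sequence
    domain_architecture: list[str] = field(default_factory=list)
    structure_ids: list[str] = field(default_factory=list)       # e.g. PDB codes
    carbohydrate_specificities: list[str] = field(default_factory=list)
    cancer_roles: list[str] = field(default_factory=list)        # metastasis, apoptosis, ...
    cross_references: dict[str, str] = field(default_factory=dict)  # links to external resources
```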
Abstract:
Optimal allocation of water resources among various stakeholders often involves considerable complexity, with several conflicting goals, which leads naturally to multi-objective optimization. To aid effective decision-making by water managers, apart from developing effective multi-objective mathematical models, there is a great need to provide efficient Pareto optimal solutions to real-world problems. This study proposes a swarm-intelligence-based multi-objective technique, namely the elitist-mutated multi-objective particle swarm optimization technique (EM-MOPSO), for arriving at efficient Pareto optimal solutions to multi-objective water resource management problems. The EM-MOPSO technique is applied to a case study of a multi-objective reservoir operation problem. The model performance is evaluated by comparison with the results of a non-dominated sorting genetic algorithm (NSGA-II) model, and the EM-MOPSO method is found to perform better. The developed method can be used as an effective aid for multi-objective decision-making in integrated water resource management.
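The abstract does not spell out the paper's operators, but the two ingredients the name points to can be sketched: Pareto-dominance bookkeeping for an external archive of elite solutions, and a mutation step applied to selected particles. The mutation form below is a hypothetical stand-in, not the paper's exact operator:

```python
import random

def dominates(f1, f2):
    """Pareto dominance for minimisation: f1 is no worse in every
    objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def update_archive(archive, position, objectives):
    """Maintain an external archive of non-dominated (elite) solutions."""
    if any(dominates(obj, objectives) for _, obj in archive):
        return archive                       # candidate is dominated; discard it
    archive = [(p, o) for p, o in archive if not dominates(objectives, o)]
    archive.append((position, objectives))
    return archive

def elitist_mutate(position, bounds, scale=0.1):
    """Perturb one coordinate of an elite particle (hypothetical operator)."""
    i = random.randrange(len(position))
    lo, hi = bounds[i]
    new = list(position)
    new[i] = min(hi, max(lo, new[i] + random.gauss(0.0, scale * (hi - lo))))
    return new

# Toy bi-objective example: minimise (x^2, (x - 2)^2) over x in [-4, 4];
# the archive converges onto the Pareto set x in [0, 2].
archive = []
for _ in range(200):
    x = [random.uniform(-4.0, 4.0)]
    archive = update_archive(archive, x, (x[0] ** 2, (x[0] - 2.0) ** 2))
print(len(archive), "non-dominated solutions kept")
```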
Abstract:
FRDC project 2008/306, Building economic capability to improve the management of marine resources in Australia, was developed and approved in response to the widespread recognition of the importance of incorporating economic considerations into marine management in Australia, and of the persistent undersupply of suitably trained and qualified individuals capable of providing this input. The need to address this shortfall received broad-based support and, following widespread stakeholder consultation and building on previous unsuccessful State-based initiatives, a collaborative, cross-jurisdictional, cross-institutional capability-building model was developed. The resulting project sits within the People Development Program as part of FRDC's 'investment in RD&E to develop the capabilities of the people to whom the industry entrusts its future', and has addressed its objectives largely through three core activities: 1. the Fisheries Economics Graduate Research Training Program, which provides research training in fisheries/marine economics through enrolment in postgraduate higher degree studies at the three participating universities; 2. the Fisheries Economics Professional Training Program, which aims to improve the economic literacy of non-economist marine-sector stakeholders and was implemented in collaboration with the Seafood Cooperative Research Centre through the Future Harvest Masterclass in Fisheries Economics; and 3. the Australian Fisheries Economics Network (FishEcon), which aims to strengthen research in fisheries economics by creating a forum in which fisheries economists, fisheries managers and Ph.D. students can share research ideas and results, as well as news of upcoming research opportunities and events. These activities were undertaken by a core project team comprising economic researchers and teachers from each of the four participating institutions (namely the University of Tasmania, the University of Adelaide, Queensland University of Technology and the Commonwealth Scientific and Industrial Research Organisation), spanning three States and the Commonwealth. The project team reported to and was guided by a project Steering Committee. Commensurate with the long-term nature of the project's objectives and some of its activities, the project was extended (without additional resources) in 2012 to 30 June 2015.
Abstract:
Potable water resources are being depleted at an alarming rate worldwide. Storm water is a hugely under-utilized resource that could help as extreme weather events become more frequent...
Abstract:
Home education is on the rise in Australia. However, unlike parents who choose mainstream schooling, these parents often lack the support of a wider community to help them on their educational and parenting journey. This support is especially lacking as many people in the wider community find the choice to home educate confronting. As such, these parents may feel isolated and alienated in the general population, as their choice to home educate is questioned at best and ridiculed at worst. These parents often find sanctuary online in homeschool groups on Facebook. This chapter explores the ways that Facebook Groups are used by marginalized and disenfranchised families who home educate to meet others who are like-minded and aligned with their beliefs and philosophies. It is through these groups that parents (in relation to schooling, especially mothers) are able to ask for advice, to vent, to explore options, and to find connections that may be lacking in the wider community.
Abstract:
"The Protection of Traditional Knowledge Associated with Genetic Resources: The Role of Databases and Registers" ABSTRACT Yovana Reyes Tagle The misappropriation of TK has sparked a search for national and international laws to govern the use of indigenous peoples knowledge and protection against its commercial exploitation. There is a widespread perception that biopiracy or illegal access to genetic resources and associated traditional knowledge (TK) continues despite national and regional efforts to address this concern. The purpose of this research is to address the question of how documentation of TK through databases and registers could protect TK, in light of indigenous peoples increasing demands to control their knowledge and benefit from its use. Throughout the international debate over the protection of TK, various options have been brought up and discussed. At its core, the discussion over the legal protection of TK comes down to these issues: 1) The doctrinal question: What is protection of TK? 2) The methodological question: How can protection of TK be achieved? 3) The legal question: What should be protected? And 4) The policy questions: Who has rights and how should they be implemented? What kind of rights should indigenous peoples have over their TK? What are the central concerns the TK databases want to solve? The acceptance of TK databases and registers may bring with it both opportunities and dangers. How can the rights of indigenous peoples over their documented knowledge be assured? Documentation of TK was envisaged as a means to protect TK, but there are concerns about how documented TK can be protected from misappropriation. The methodology used in this research seeks to contribute to the understanding of the protection of TK. The steps taken in this research attempt to describe and to explain a) what has been done to protect TK through databases and registers, b) how this protection is taking place, and c) why the establishment of TK databases can or cannot be useful for the protection of TK. The selected case studies (Peru and Venezuela) seek to illustrate the complexity and multidisciplinary nature of the establishment of TK databases, which entail not only legal but also political, socio-economic and cultural issues. The study offers some conclusions and recommendations that have emerged after reviewing the national experiences, international instruments, work of international organizations, and indigenous peoples perspectives. This thesis concludes that if TK is to be protected from disclosure and unauthorized use, confidential databases are required. Finally, the TK database strategy needs to be strengthened by the legal protection of the TK itself.
Abstract:
As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services which he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear when one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable for users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, which suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and monitoring. All of the systems use cryptography to secure the names used for content and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends rather than letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports the development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we do not expect our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
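The abstract does not detail Peerscape's reference scheme, but content addressing is a common way to make references cryptographically verifiable: the reference is a hash of the item itself, so any holder of a copy can check it against the reference. A minimal sketch, assuming plain SHA-256 and an invented album payload:

```python
import hashlib
import json

def make_ref(data: bytes) -> str:
    """Content-addressed reference: the SHA-256 digest of the item."""
    return hashlib.sha256(data).hexdigest()

def verify(ref: str, data: bytes) -> bool:
    """Anyone holding a copy can check it against the reference."""
    return make_ref(data) == ref

# Hypothetical data item: one shared copy of a family photo album.
album = json.dumps({"title": "family photos", "items": ["img1", "img2"]}).encode()
ref = make_ref(album)

assert verify(ref, album)              # an intact replica verifies
assert not verify(ref, album + b"x")   # a tampered replica fails
```

Because the reference is derived from the content, the same item replicated across family members' machines and online services keeps a single verifiable identity, which is the property the passage describes.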