897 results for sicurezza, exploit, XSS, Beef, browser
Abstract:
2.4. The author may post the VoR version of the article (in PDF or HTML form) in the Institutional Repository of the institution in which the author worked at the time the article was first submitted, or (for appropriate journals) in PubMed Central or UK PubMed Central or arXiv, no sooner than one year after first publication of the article in the Journal, subject to file availability and provided the posting includes a prominent statement of the full bibliographical details, a copyright notice in the name of the copyright holder (Cambridge University Press or the sponsoring Society, as appropriate), and a link to the online edition of the Journal at Cambridge Journals Online.
Abstract:
The authors have further developed the method used by Pianet and Le Hir (Doc. Sci. Cent. ORSTOM Pointe-Noire, 17, 1971) for the study of albacore (Thunnus albacares) in the Pointe-Noire region. The method is based on the fact that the ratio of the number of fish caught per unit of effort for two fishing gears is equal to the ratio of their catchability coefficients.
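A minimal sketch of the relationship this method rests on, with invented numbers (the variable names and catch figures are illustrative assumptions, not values from the study): because each gear's catch per unit of effort is proportional to stock size through its catchability coefficient, the CPUE ratio of two gears estimates the ratio of their catchability coefficients.

```python
# Illustrative sketch only: hypothetical catch and effort figures for two gears
# fishing the same stock; not data from the Pointe-Noire study.

def cpue(catch: float, effort: float) -> float:
    """Catch per unit of effort for one gear."""
    return catch / effort

catch_gear1, effort_gear1 = 1200.0, 300.0   # fish, gear-days (assumed)
catch_gear2, effort_gear2 = 800.0, 400.0    # fish, gear-days (assumed)

# Because CPUE_i = q_i * N for a common stock size N, the CPUE ratio of the
# two gears equals the ratio of their catchability coefficients q1/q2.
q_ratio = cpue(catch_gear1, effort_gear1) / cpue(catch_gear2, effort_gear2)
print(f"Estimated q1/q2 = {q_ratio:.2f}")
```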
Abstract:
In this study, random amplified polymorphic DNA (RAPD) analysis was used to estimate genetic diversity and relationship in 134 samples belonging to two native cattle breeds from the Yunnan province of China (DeHong cattle and DiQing cattle) and four intro
Abstract:
Beef liver catalase molecules can stick tenaciously to the highly oriented pyrolytic graphite (HOPG) surface which has been activated by electrochemical anodization. The immobilized sample is stable enough for high-resolution scanning tunneling microscope (STM) imaging. When the anodization conditions are controlled properly, the HOPG surface is covered with a very thin oxide layer which can bind the protein molecules. Individual molecules of native beef liver catalase are directly observed in detail by STM, which shows an oval-shaped structure with a waist. The dimensions of one catalase molecule in this study are estimated as 9.0 × 6.0 × 2.0 nm³, in good agreement with the known data obtained from X-ray analysis, except that the height cannot be exactly determined from STM. Electrochemical results confirm that the freshly adsorbed catalase molecules maintain their native structures with biological activity. However, a partly unfolded structure of the catalase molecules is observed after the sample has been stored for 15 days; this may be caused by the long-term interaction between the catalase molecules and the anodized HOPG surface.
Abstract:
The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers worldwide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels:
(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints.
(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multi-server cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.
(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must work at the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol design.
(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring the integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
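A hypothetical sketch of how a Resource Registry answering constraint-based queries might look; the class names, fields, and thresholds below are illustrative assumptions, not the project's actual Resource Management Interface (RMI) or protocols.

```python
# Hypothetical sketch of a Resource Registry in the spirit of the abstract;
# names and fields are assumptions, not the described system's design.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    cpu_mflops: float          # advertised compute capacity
    latency_ms: float          # network latency to the requesting site
    reliability: float         # estimated probability of completing a task

@dataclass
class ResourceRegistry:
    resources: list = field(default_factory=list)

    def register(self, resource: Resource) -> None:
        """A site advertises its availability to the registry."""
        self.resources.append(resource)

    def find(self, min_mflops: float, max_latency_ms: float,
             min_reliability: float) -> list:
        """Return resources meeting real-time and reliability constraints."""
        return [r for r in self.resources
                if r.cpu_mflops >= min_mflops
                and r.latency_ms <= max_latency_ms
                and r.reliability >= min_reliability]

registry = ResourceRegistry()
registry.register(Resource("web-node-a", cpu_mflops=500, latency_ms=40, reliability=0.99))
registry.register(Resource("web-node-b", cpu_mflops=1200, latency_ms=120, reliability=0.90))
print([r.name for r in registry.find(400, 100, 0.95)])   # -> ['web-node-a']
```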
Abstract:
ImageRover is a content-based image search and navigation tool for the World Wide Web. To gather images expediently, the image collection subsystem utilizes a distributed fleet of WWW robots running on different computers. The image robots gather information about the images they find, computing the appropriate image decompositions and indices, and store this extracted information in vector form for searches based on image content. At search time, users can iteratively guide the search through the selection of relevant examples. Search performance is made efficient through the use of an approximate, optimized k-d tree algorithm. The system employs a novel relevance feedback algorithm that selects the distance metrics appropriate for a particular query.
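As an illustrative sketch of the indexing idea (not ImageRover's actual code), the fragment below builds a k-d tree over hypothetical image feature vectors and runs an approximate nearest-neighbour query with SciPy's cKDTree; the descriptor dimensionality, collection size, and eps value are assumptions.

```python
# Sketch of content-based image search over feature vectors with an
# approximate k-d tree query; synthetic descriptors stand in for real images.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
features = rng.random((10_000, 64))     # 64-dim descriptors for 10,000 images
index = cKDTree(features)

query = rng.random(64)                  # descriptor of the query example
# eps > 0 permits an approximate nearest-neighbour search, trading a small
# loss of accuracy for faster queries on large collections.
distances, ids = index.query(query, k=10, eps=0.1)
print("closest image ids:", ids)
# Note: ImageRover's relevance feedback additionally selects per-query
# distance metrics, which this sketch does not implement.
```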
Abstract:
In recent years, the potential to positively modulate human health through dietary approaches has received considerable attention. Bioactive peptides which are released during the hydrolysis or fermentation of food proteins, or following digestion, may exert beneficial physiological effects in vivo. The aim of this work was to isolate, characterise and evaluate Angiotensin-I-converting enzyme (ACE-I) inhibitory, antimicrobial and antioxidant peptides from the bovine myofibrillar proteins actin and myosin. In order to generate these peptides, the myofibrillar proteins actin and myosin were hydrolysed with the digestive enzymes pepsin, trypsin and α-chymotrypsin, or with the industrial thermolysin-like enzyme “Thermoase” (Amano Inc.). It was found that each hydrolysate generated contained peptides which possessed ACE inhibitory, antioxidant and antimicrobial activity. The peptides responsible in part for the observed ACE inhibitory, antioxidant and antimicrobial activity of a number of hydrolysates were isolated using RP-HPLC, and the bioactive peptides contained within each active fraction were determined using either MALDI-TOF MS/MS or N-terminal peptide sequencing. During the course of this thesis, six ACE inhibitory and five antimicrobial peptides were identified. It was determined that the reported antioxidant activity was a direct result of a number of peptides working in synergy with each other. The IC50 values of the six ACE inhibitory peptides ranged from 6.85 to 75.7 µM, which compares favourably with values previously reported for other food-derived ACE inhibitory peptides, particularly the well-known milk peptides IPP and VPP (IC50 values of 5 and 9 µM, respectively). All five antimicrobial peptides identified in this thesis displayed activity against Escherichia coli, Salmonella typhimurium, Staphylococcus aureus and Listeria innocua, with MIC values ranging from 0.625 to 10 mM. The activity of each antimicrobial peptide was strain specific. Furthermore, the role and importance of charged amino acids to the activity of antimicrobial peptides was also determined. Generally, the removal of charged amino acids from the sequence of antimicrobial peptides resulted in a loss of antimicrobial activity. In conclusion, this thesis revealed that a range of bioactive peptides exhibiting ACE inhibitory, antioxidant and antimicrobial activities were encrypted in bovine myofibrillar proteins and could be released using digestive and industrial enzymes. Finally, enzymatic hydrolysates of muscle proteins could potentially be incorporated into functional foods; however, the potential health benefits would need to be proven in human clinical studies.
Abstract:
The article focuses on an information system to exploit the use of metadata within film and television production. It is noted that the television and film industries are used to working on big projects. This involves the use of actual film, videotape, and PERT charts for project planning. Scripts are in most instances revised. It is essential to attach information to these items in order to manage, track and retrieve them. The use of metadata eases the operations involved in these industries.
Abstract:
The television and film industries are used to working on large projects. These projects use media and documents of various types, ranging from actual film and videotape to items such as PERT charts for project planning. Some items, such as scripts, evolve over a period and go through many versions. It is often necessary to attach information to these “objects” in order to manage, track, and retrieve them. On large productions there may be hundreds of personnel who need access to this material and who in their turn generate new items which form some part of the final production. The requirements for this industry in terms of an information system may be generalized and a distributed software architecture built, primarily using the internet, to serve the needs of these projects. This architecture must enable potentially very large collections of objects to be managed in a secure environment with distributed responsibilities held by many working on the production. Copyright © 2005 by the Society of Motion Picture and Television Engineers, Inc.
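A hypothetical sketch of attaching metadata to versioned production "objects" in the spirit of the architecture described above; the field names, record structure, and in-memory store are illustrative assumptions, not the system's actual design.

```python
# Illustrative sketch only: versioned production objects with attached metadata.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProductionObject:
    object_id: str
    kind: str                              # e.g. "script", "videotape", "PERT chart"
    version: int
    metadata: dict = field(default_factory=dict)

store: dict[str, list[ProductionObject]] = {}   # object_id -> version history

def check_in(obj: ProductionObject) -> None:
    """Register a new version of an object together with its metadata."""
    store.setdefault(obj.object_id, []).append(obj)

check_in(ProductionObject("script-042", "script", 1,
                          {"title": "Episode 3", "author": "J. Doe",
                           "revised": str(date(2005, 3, 1))}))
check_in(ProductionObject("script-042", "script", 2,
                          {"title": "Episode 3", "author": "J. Doe",
                           "revised": str(date(2005, 4, 12))}))

latest = max(store["script-042"], key=lambda o: o.version)
print(latest.version, latest.metadata["revised"])
```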
Abstract:
The Continuous Plankton Recorder (CPR) survey provides a unique multi-decadal dataset on the abundance of plankton in the North Sea and North Atlantic and is one of only a few monitoring programmes operating at a large spatio-temporal scale. The results of all samples analysed from the survey since 1946 are stored on an Access Database at the Sir Alister Hardy Foundation for Ocean Science (SAHFOS) in Plymouth. The database is large, containing more than two million records (~80 million data points, if zero results are added) for more than 450 taxonomic entities. An open data policy is operated by SAHFOS. However, the data are not on-line and so access by scientists and others wishing to use the results is not interactive. Requests for data are dealt with by the Database Manager. To facilitate access to the data from the North Sea, which is an area of high research interest, a selected set of data for key phytoplankton and zooplankton species has been processed in a form that makes them readily available on CD for research and other applications. A set of MATLAB tools has been developed to provide an interpolated spatio-temporal description of plankton sampled by the CPR in the North Sea, as well as easy and fast access to users in the form of a browser. Using geostatistical techniques, plankton abundance values have been interpolated on a regular grid covering the North Sea. The grid is established on centres of 1 degree longitude x 0.5 degree latitude (~32 x 30 nautical miles). Based on a monthly temporal resolution over a fifty-year period (1948-1997), 600 distribution maps have been produced for 54 zooplankton species, and 480 distribution maps for 57 phytoplankton species over the shorter period 1958-1997. The gridded database has been developed in a user-friendly form and incorporates, as a package on a CD, a set of options for visualisation and interpretation, including the facility to plot maps for selected species by month, year, groups of months or years, long-term means or as time series and contour plots. This study constitutes the first application of an easily accessed and interactive gridded database of plankton abundance in the North Sea. As a further development, the MATLAB browser is being converted to a user-friendly Windows-compatible format (WinCPR) for release on CD and via the Web in 2003.
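A minimal sketch of gridding irregularly positioned samples onto the regular grid described above (1 degree longitude x 0.5 degree latitude, one month at a time); the data are synthetic, and simple linear interpolation via SciPy's griddata stands in here for the geostatistical (kriging-type) interpolation actually used in the study.

```python
# Sketch of interpolating scattered plankton abundance samples onto a regular
# North Sea grid; synthetic data and linear interpolation are assumptions.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
# Synthetic sample positions and abundances for one month.
lon = rng.uniform(-4.0, 9.0, 500)
lat = rng.uniform(51.0, 61.0, 500)
abundance = rng.lognormal(mean=2.0, sigma=1.0, size=500)

# Grid cell centres: 1 degree in longitude, 0.5 degree in latitude.
grid_lon, grid_lat = np.meshgrid(np.arange(-3.5, 9.0, 1.0),
                                 np.arange(51.25, 61.0, 0.5))
gridded = griddata((lon, lat), abundance, (grid_lon, grid_lat), method="linear")
print(gridded.shape)   # one interpolated abundance value per grid cell
```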
Abstract:
The potential of Raman spectroscopy for the determination of meat quality attributes has been investigated using data from a set of 52 cooked beef samples, which were rated by trained taste panels. The Raman spectra, shear force and cooking loss were measured and PLS used to correlate the attributes with the Raman data. Good correlations and standard errors of prediction were found when the Raman data were used to predict the panels' rating of acceptability of texture (R² = 0.71, Residual Mean Standard Error of Prediction (RMSEP) % of the mean (µ) = 15%), degree of tenderness (R² = 0.65, RMSEP% of µ = 18%), degree of juiciness (R² = 0.62, RMSEP% of µ = 16%), and overall acceptability (R² = 0.67, RMSEP% of µ = 11%). In contrast, the mechanically determined shear force was poorly correlated with tenderness (R² = 0.15). Tentative interpretation of the plots of the regression coefficients suggests that the α-helix to β-sheet ratio of the proteins and the hydrophobicity of the myofibrillar environment are important factors contributing to the shear force, tenderness, texture and overall acceptability of the beef. In summary, this work demonstrates that Raman spectroscopy can be used to predict consumer-perceived beef quality. In part, this overall success is due to the fact that the Raman method predicts texture and tenderness, which are the predominant factors in determining overall acceptability in the Western world. Nonetheless, it is clear that Raman spectroscopy has considerable potential as a method for non-destructive and rapid determination of beef quality parameters.
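A hedged sketch of the kind of PLS calibration described above, using scikit-learn's PLSRegression on synthetic spectra to show how R² and RMSEP as a percentage of the mean might be computed; the data, number of latent variables, and train/test split are assumptions, not the study's procedure.

```python
# Sketch of a PLS calibration from spectra to a sensory attribute; synthetic
# spectra stand in for the 52 cooked-beef Raman measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)
X = rng.random((52, 800))                    # spectra (samples x wavenumbers)
y = X[:, 100] * 5 + rng.normal(0, 0.2, 52)   # synthetic "tenderness" scores

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

r2 = r2_score(y_test, y_pred)
rmsep = np.sqrt(mean_squared_error(y_test, y_pred))
print(f"R2 = {r2:.2f}, RMSEP% of mean = {100 * rmsep / y_test.mean():.1f}%")
```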