Abstract:
We investigated memories of room-sized spatial layouts learned by sequentially or simultaneously viewing objects from a stationary position. In three experiments, sequential viewing (one or two objects at a time) yielded subsequent memory performance that was equivalent or superior to simultaneous viewing of all objects, even though sequential viewing lacked direct access to the entire layout. This finding was replicated by replacing sequential viewing with directed viewing in which all objects were presented simultaneously and participants’ attention was externally focused on each object sequentially, indicating that the advantage of sequential viewing over simultaneous viewing may have originated from focal attention to individual object locations. These results suggest that memory representation of object-to-object relations can be constructed efficiently by encoding each object location separately, when those locations are defined within a single spatial reference system. These findings highlight the importance of considering object presentation procedures when studying spatial learning mechanisms.
Abstract:
In 2006, Gaurav Gupta and Josef Pieprzyk presented an attack on the branch-based software watermarking scheme proposed by Ginger Myles and Hongxia Jin in 2005. The software watermarking model is based on replacing jump instructions or unconditional branch statements (UBS) by calls to a fingerprint branch function (FBF) that computes the correct target address of the UBS as a function of the generated fingerprint and integrity check. If the program is tampered with, the fingerprint and/or integrity checks change and the target address is not computed correctly. Gupta and Pieprzyk's attack uses debugger capabilities such as register and address lookup and breakpoints to minimize the need to manually inspect the software. Using these resources, the FBF and the calls to it are identified, correct displacement values are generated, and the calls to the FBF are replaced by the original UBSs, transferring control to the correct target instruction. In this paper, we propose a watermarking model that provides security against such debugging attacks. Two primary measures taken are shifting the stack pointer modification operation from the FBF to the individual UBSs, and coding the stack pointer modification in the same language as the rest of the code rather than in assembly language, to avoid conspicuous contents. The manual component complexity increases from O(1) in the previous scheme to O(n) in our proposed scheme.
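The replacement mechanism can be sketched abstractly. In this toy Python model, the instruction list, the checksum, and the arithmetic encoding are all hypothetical stand-ins (real schemes operate on native code, registers, and the stack); it shows only the core idea that the encoded displacement reconstructs the correct branch target while the fingerprint and integrity check match their embed-time values:

```python
def checksum(code):
    """Toy integrity check over the program's instruction words."""
    return sum(code) % 997

def embed(call_site, true_target, fingerprint, code):
    """Embed time: encode the displacement so that it is only
    recoverable with the correct fingerprint and checksum."""
    return true_target - call_site - fingerprint - checksum(code)

def fbf(call_site, enc_disp, fingerprint, code):
    """Fingerprint branch function: recompute the branch target."""
    return call_site + enc_disp + fingerprint + checksum(code)

code = [10, 20, 30, 40]                     # stand-in instruction words
enc = embed(call_site=2, true_target=3, fingerprint=7, code=code)

print(fbf(2, enc, 7, code))                 # intact program -> 3
tampered = [10, 20, 31, 40]                 # one instruction modified
print(fbf(2, enc, 7, tampered))             # checksum changed -> wrong target (4)
```

The attack described above works because a debugger can observe the computed targets and patch the original branches back in; the sketch makes clear why tampering anywhere in the checksummed region silently derails every FBF-mediated branch.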
Abstract:
Singapore is located at the equator, with an abundant supply of solar radiation and relatively high ambient temperature and relative humidity throughout the year. The meteorological conditions of Singapore are favourable for the efficient operation of solar energy based systems. Solar assisted heat pump systems have been built on the roof-top of the National University of Singapore's Faculty of Engineering. The objectives of this study include the design and performance evaluation of a solar assisted heat-pump system for water desalination, water heating and drying of clothes. Using the MATLAB programming language, a 2-dimensional simulation model has been developed to conduct parametric studies on the system. The system shows good prospects for implementation in both industrial and residential applications and would open new opportunities for replacing conventional energy sources with green renewable energy.
Abstract:
Commercially viable carbon-neutral biodiesel production from microalgae has potential for replacing depleting petroleum diesel. The process of biodiesel production from microalgae involves harvesting, drying and extraction of lipids, which are energy- and cost-intensive processes. The development of effective large-scale lipid extraction processes which overcome the complexity of microalgae cell structure is considered one of the most vital requirements for commercial production. Thus, the aim of this work was to investigate suitable extraction methods with optimised conditions to progress opportunities for sustainable microalgal biodiesel production. In this study, the green microalgal species consortium Tarong polyculture was used to investigate lipid extraction with hexane (solvent) under high pressure and variable temperature and biomass moisture conditions using an Accelerated Solvent Extraction (ASE) method. The performance of high pressure solvent extraction was examined over a range of different process and sample conditions (dry biomass to water ratios (DBWRs): 100%, 75%, 50% and 25%; temperatures from 70 to 120 °C; process time 5–15 min). Maximum total lipid yields were achieved at 50% and 75% sample dryness at temperatures of 90–120 °C. We show that individual fatty acid (Palmitic acid C16:0; Stearic acid C18:0; Oleic acid C18:1; Linolenic acid C18:3) extraction optima are influenced by temperature and sample dryness, consequently affecting microalgal biodiesel quality parameters. Higher heating values and kinematic viscosity were compliant with biodiesel quality standards under all extraction conditions used. Our results indicate that biodiesel quality can be positively manipulated by selecting process extraction conditions that favour extraction of saturated and mono-unsaturated fatty acids over optimal extraction conditions for polyunsaturated fatty acids, yielding positive effects on cetane number and iodine values.
Exceeding biodiesel standards for these two parameters opens blending opportunities with biodiesels that fall outside the minimal cetane and maximal iodine values.
Abstract:
We present a machine learning model that predicts a structural disruption score from a protein's primary structure. SCHEMA was introduced by Frances Arnold and colleagues as a method for determining putative recombination sites of a protein on the basis of the full (PDB) description of its structure. The present method provides an alternative to SCHEMA that is able to determine the same score from sequence data alone. Circumventing the need for resolving the full structure enables the exploration of yet unresolved and even hypothetical sequences for protein design efforts. Deriving the SCHEMA score from a primary structure is achieved using a two-step approach: first predicting a secondary structure from the sequence and then predicting the SCHEMA score from the predicted secondary structure. The correlation coefficient for the prediction is 0.88, indicating the feasibility of replacing SCHEMA with little loss of precision.
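The two-step approach can be illustrated with a deliberately naive Python sketch; both steps below are invented stand-ins (a hydrophobicity rule and a structure-break count), not the trained predictors evaluated in the paper:

```python
def predict_secondary(seq):
    """Step 1 (stand-in): predict secondary structure from sequence.
    Hydrophobic residues are naively labelled helix ('H'), others coil ('C')."""
    hydrophobic = set("AVILMFWY")
    return "".join("H" if aa in hydrophobic else "C" for aa in seq)

def disruption_score(ss):
    """Step 2 (stand-in): a SCHEMA-like disruption proxy computed from the
    predicted secondary structure: the fraction of adjacent residue pairs
    whose structural assignment changes."""
    breaks = sum(1 for a, b in zip(ss, ss[1:]) if a != b)
    return breaks / max(len(ss) - 1, 1)

ss = predict_secondary("MKTAYIAKQR")
print(ss)                               # -> HCCHHHHCCC
print(round(disruption_score(ss), 3))   # -> 0.333
```

The design point is that the second model never sees the sequence directly, only the predicted secondary structure, which is what removes the dependence on a resolved PDB structure.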
Abstract:
Analytically or computationally intractable likelihood functions can arise in complex statistical inferential problems making them inaccessible to standard Bayesian inferential methods. Approximate Bayesian computation (ABC) methods address such inferential problems by replacing direct likelihood evaluations with repeated sampling from the model. ABC methods have been predominantly applied to parameter estimation problems and less to model choice problems due to the added difficulty of handling multiple model spaces. The ABC algorithm proposed here addresses model choice problems by extending Fearnhead and Prangle (2012, Journal of the Royal Statistical Society, Series B 74, 1–28) where the posterior mean of the model parameters estimated through regression formed the summary statistics used in the discrepancy measure. An additional stepwise multinomial logistic regression is performed on the model indicator variable in the regression step and the estimated model probabilities are incorporated into the set of summary statistics for model choice purposes. A reversible jump Markov chain Monte Carlo step is also included in the algorithm to increase model diversity for thorough exploration of the model space. This algorithm was applied to a validating example to demonstrate the robustness of the algorithm across a wide range of true model probabilities. Its subsequent use in three pathogen transmission examples of varying complexity illustrates the utility of the algorithm in inferring preference of particular transmission models for the pathogens.
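For reference, the plain ABC rejection scheme for model choice (without the regression-adjusted summaries or the reversible jump step that the proposed algorithm adds) can be sketched in a few lines of Python; the two Gaussian models, the mean summary statistic, and the tolerance are illustrative assumptions:

```python
import random
import statistics

random.seed(1)
observed = [random.gauss(2.0, 1.0) for _ in range(50)]
s_obs = statistics.mean(observed)          # summary statistic of the data

def simulate(model):
    """Draw a dataset from model 0 (mean 0) or model 1 (mean 2)."""
    mu = 0.0 if model == 0 else 2.0
    return [random.gauss(mu, 1.0) for _ in range(50)]

accepted = []
for _ in range(5000):
    m = random.randrange(2)                # uniform prior over the two models
    s_sim = statistics.mean(simulate(m))
    if abs(s_sim - s_obs) < 0.3:           # discrepancy on the summary statistic
        accepted.append(m)

# Posterior model probabilities are estimated by the accepted proportions.
p_model1 = accepted.count(1) / len(accepted)
print(p_model1)                            # close to 1: model 1 is preferred
```

The likelihood is never evaluated: model preference falls out of how often each model's simulations land near the observed summary, which is the property the regression-based summaries above are designed to sharpen.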
Abstract:
Cane fibre content has increased over the past ten years. Some of that increase can be attributed to new varieties selected for release. This paper reviews the existing methods for quantifying the fibre characteristics of a variety, including fibre content and fibre quality measurements – shear strength, impact resistance and short fibre content. The variety selection process is presented and it is reported that fibre content has zero weighting in the current selection index. An updated variety selection approach is proposed, potentially replacing the existing selection process relating to fibre. This alternative approach involves the use of a more complex mill area level model that accounts for harvesting, transport and processing equipment, taking into account capacity, efficiency and operational impacts, along with the end use for the bagasse. The approach will ultimately determine a net economic value for the variety. The methodology lends itself to a determination of the fibre properties that have a significant impact on the economic value so that variety tests can better target the critical properties. A low-pressure compression test is proposed as a good test to provide an assessment of the impact of a variety on milling capacity. NIR methodology is proposed as a technology to lead to a more rapid assessment of fibre properties, and hence the opportunity to more comprehensively test for fibre impacts at an earlier stage of variety development.
Abstract:
Empirical evidence shows that repositories of business process models used in industrial practice contain significant amounts of duplication. This duplication arises, for example, when the repository covers multiple variants of the same processes, or due to copy-pasting. Previous work has addressed the problem of efficiently retrieving exact clones that can be refactored into shared subprocess models. This article studies the broader problem of approximate clone detection in process models. The article proposes techniques for detecting clusters of approximate clones based on two well-known clustering algorithms: DBSCAN and Hierarchical Agglomerative Clustering (HAC). The article also defines a measure of standardizability of an approximate clone cluster, meaning the potential benefit of replacing the approximate clones with a single standardized subprocess. Experiments show that both techniques, in conjunction with the proposed standardizability measure, accurately retrieve clusters of approximate clones that originate from copy-pasting followed by independent modifications to the copied fragments. Additional experiments show that both techniques produce clusters that match those produced by human subjects and that are perceived to be standardizable.
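A minimal form of the DBSCAN-based technique can be sketched over a precomputed distance matrix. The process fragments, the Jaccard distance over task labels, and the `eps`/`min_pts` values below are illustrative assumptions, not the similarity measure used in the article:

```python
def dbscan(dist, eps, min_pts):
    """Tiny DBSCAN over a precomputed distance matrix.
    Returns one cluster label per point; -1 marks noise."""
    n = len(dist)
    labels = [None] * n
    cluster = 0
    for p in range(n):
        if labels[p] is not None:
            continue
        neigh = [q for q in range(n) if dist[p][q] <= eps]
        if len(neigh) < min_pts:
            labels[p] = -1                 # noise (may later become a border point)
            continue
        labels[p] = cluster
        queue = [q for q in neigh if q != p]
        while queue:
            q = queue.pop()
            if labels[q] == -1:
                labels[q] = cluster        # former noise becomes a border point
            if labels[q] is not None:
                continue
            labels[q] = cluster
            q_neigh = [r for r in range(n) if dist[q][r] <= eps]
            if len(q_neigh) >= min_pts:    # core point: keep expanding
                queue.extend(q_neigh)
        cluster += 1
    return labels

# Process fragments represented by their task labels (invented examples).
fragments = [
    {"check stock", "reserve items", "ship order"},
    {"check stock", "reserve items", "ship order", "notify customer"},
    {"check stock", "reserve goods", "ship order"},
    {"approve invoice", "pay supplier"},
]

def jaccard_dist(a, b):
    return 1 - len(a & b) / len(a | b)

dist = [[jaccard_dist(a, b) for b in fragments] for a in fragments]
print(dbscan(dist, eps=0.5, min_pts=2))    # -> [0, 0, 0, -1]
```

The first three fragments form one approximate-clone cluster, a candidate for replacement by a single standardized subprocess, while the unrelated fragment is left as noise.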
Abstract:
Description of a patient's injuries is recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different classification families such as decision tree, probabilistic, neural network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, the quality of the classification outcome. Records with a null entry in the injury description are removed. Misspelling correction is carried out by finding and replacing each misspelt word with a sound-alike word. Meaningful phrases are identified and kept, instead of removing parts of phrases as stop words. Abbreviations appearing in many entry forms are manually identified and normalised to a single form. Clustering is utilised to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5000. The medical narrative text injury dataset under consideration is composed of many short documents. The data can be characterized as high-dimensional and sparse, i.e., few features are irrelevant but features are correlated with one another. Therefore, matrix factorization techniques such as Singular Value Decomposition (SVD) and Non Negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space, and classifiers have been built on the reduced feature space. In experiments, a set of tests is conducted to determine which classification method is best for medical text classification.
The Non Negative Matrix Factorization with Support Vector Machine method can achieve 93% precision, which is higher than all the tested traditional classifiers. We also found that TF/IDF weighting, which works well for long text classification, is inferior to binary weighting in short document classification. Another finding is that the top-n terms should be removed in consultation with medical experts, as this affects classification performance.
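The binary-weighting finding can be illustrated apart from the full factorization pipeline. In this Python sketch the triage notes are invented and a nearest-centroid classifier stands in for the SVM; the relevant point is that each term contributes presence/absence rather than a TF/IDF weight:

```python
import math

# Invented short injury narratives with their (simplified) codes.
train = [
    ("fell from ladder fracture left arm", "fracture"),
    ("fall off bike fracture wrist", "fracture"),
    ("cut finger with kitchen knife", "laceration"),
    ("laceration to hand from broken glass", "laceration"),
]

def binary_vector(text):
    """Binary weighting: term presence/absence, not counts or TF/IDF."""
    return {w: 1.0 for w in text.split()}

def add_into(acc, vec):
    for w, v in vec.items():
        acc[w] = acc.get(w, 0.0) + v

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

centroids = {}
for text, label in train:
    add_into(centroids.setdefault(label, {}), binary_vector(text))

def classify(text):
    v = binary_vector(text)
    return max(centroids, key=lambda lbl: cosine(v, centroids[lbl]))

print(classify("fracture of arm after fall"))   # -> fracture
```

With documents this short, a term rarely occurs more than once, so TF adds little signal; binary presence keeps each matched term's contribution equal, which is the intuition behind the reported result.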
Abstract:
2010 is expected to see the publication of a new UK Code on Corporate Governance, replacing the Combined Code. Why is a new code being issued? What significant changes are proposed? Will it change the corporate governance world?
Abstract:
Anisotropy of transverse proton spin relaxation in collagen-rich tissues like cartilage and tendon is a well-known phenomenon that manifests itself as the "magic-angle" effect in magnetic resonance images of these tissues. It is usually attributed to the non-zero averaging of intra-molecular dipolar interactions in water molecules bound to oriented collagen fibers. One way to manipulate the contributions of these interactions to spin relaxation is by partially replacing the water in the cartilage sample with deuterium oxide. It is known that dipolar interactions in deuterated solutions are weaker, resulting in a decrease in proton relaxation rates. In this work, we investigate the effects of deuteration on the longitudinal and the isotropic and anisotropic contributions to transverse relaxation of water protons in bovine articular cartilage. We demonstrate that the anisotropy of transverse proton spin relaxation in articular cartilage is independent of the degree of deuteration, bringing into question some of the assumptions currently held over the origins of relaxation anisotropy in oriented tissues.
Abstract:
Managerial changes to Australian universities have had considerable impact on employees. In this paper we consider some of these changes and apply a theory known as the democratic deficit to them. This theory was developed from the democratic critique of managerialism, as it has been applied in the public sector in countries with Westminster-type political systems. This deficit covers the weakening of accountability through politicisation, the denial of public values through the use of private sector performance practices, and the hollowing out of the state through the contracting out and privatisation of public goods and services, and the redefinition of citizens as customers and clients. We suggest that the increased power of managers, expansion of the audit culture, and the extensive use of contract employment seem to be weakening the democratic culture and role of universities in part by replacing accountability as responsibility with accountability as responsiveness.
Abstract:
This paper addresses the development of trust in the use of Open Data through the incorporation of appropriate authentication and integrity parameters, for use by end-user Open Data application developers, in an architecture for trustworthy Open Data Services. The advantage of this architecture is that it is far more scalable: it is not another certificate-based hierarchy with the attendant problems of certificate revocation management. With the use of a Public File, if a key is compromised, it is a simple matter for the single responsible entity to replace the key pair with a new one and re-perform the data file signing process. Under the proposed architecture, the Open Data environment does not interfere with the internal security schemes that might be employed by the entity. However, the architecture incorporates, when needed, parameters from the entity, e.g. the person who authorized publishing as Open Data, at the time that datasets are created or added.
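The key-replacement flow can be sketched as follows. This Python toy uses an HMAC as a stand-in for a real digital signature (a production system would use an asymmetric key pair, publishing only the public half), and the entity name, file names, and Public File layout are invented for illustration:

```python
import hashlib
import hmac
import secrets

public_file = {}    # Public File: entity -> current verification key
signatures = {}     # published signature per data file
data_files = {"transport.csv": b"stop_id,lat,lon\n1,-27.47,153.02\n"}

def sign_all(entity, key):
    """The single responsible entity (re)signs every published data file
    and publishes its current key in the Public File."""
    public_file[entity] = key
    for name, blob in data_files.items():
        signatures[name] = hmac.new(key, blob, hashlib.sha256).hexdigest()

def verify(entity, name):
    """An Open Data consumer checks a data file against the Public File key."""
    mac = hmac.new(public_file[entity], data_files[name],
                   hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, signatures[name])

sign_all("city-council", secrets.token_bytes(32))
print(verify("city-council", "transport.csv"))      # -> True

# Key compromise: swap in a fresh key and re-perform the signing process.
sign_all("city-council", secrets.token_bytes(32))
print(verify("city-council", "transport.csv"))      # -> True
```

Because the Public File holds exactly one current key per entity, revocation collapses to overwriting that entry and re-signing, with no revocation lists or chains to distribute.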
Abstract:
To the Editor—Diphtheria-tetanus-pertussis whole-cell (DTwP) and acellular (DTaP) vaccines are the 2 main pertussis-containing vaccines. DTwP, developed in the 1930s, has contributed to the reduction of pertussis, but has often been associated with vaccine-related adverse reactions (ARs) [1]. This severely affected public confidence in immunization programs, leading to decreased vaccine coverage and pertussis outbreaks in many industrialized countries in the 1970s [2]. DTaP, which was developed in the 1980s and replaced DTwP in developed countries in the 1990s, has been associated with fewer ARs due to the removal/reduction of endotoxin [1]. China began replacing DTwP with DTaP in its national immunization programs in December 2007, and its passive Adverse Events Following Immunization (AEFI) surveillance system was established in 2005 [3].
Abstract:
In this chapter we use Bernstein’s (2000) model of pedagogic rights to examine the learning experiences for non-Indigenous teachers in two reconciliation projects. In the context within which we write, reconciliation is the process of establishing a culture of mutual respect between Aboriginal and Torres Strait Islander peoples and non-Indigenous Australians. In 1991, the Royal Commission into Aboriginal Deaths in Custody linked the continuation of racism in Australian society to the weak coverage of Aboriginal and Torres Strait Islander content in the school curriculum (Reconciliation Australia 2010). Nearly two decades later, the Melbourne Declaration on Educational Goals for Young Australians issued by the council of Federal, State and Territory Ministers of Education proclaimed that curriculum should enable all students to ‘understand and acknowledge the value of Indigenous cultures and possess the knowledge, skills and understanding to contribute to, and benefit from reconciliation between Indigenous and non-Indigenous Australians’ (MCEETYA 2008, 9). Education holds out promise not only of better life chances for Indigenous young people, but also of replacing myths with understanding and tackling prejudice and racism within the non-Indigenous population. Bernstein’s (2000) model of pedagogic rights promises some purchase on this pedagogic work by providing concepts for looking systematically at the participation of non-Indigenous teachers in education. As observed by Frandji and Vitale (Chapter 2, this volume), the model is not sufficient to achieve a democratic reality, ‘but simply provides a basis for problematizing reality and considering possibilities’.