981 results for Set functions.
Abstract:
Currently, mass spectrometry-based metabolomics studies extend beyond conventional chemical categorization and metabolic phenotype analysis to understanding gene function in various biological contexts (e.g., mammalian, plant, and microbial). These novel utilities have led to many innovative discoveries in the following areas: disease pathogenesis, therapeutic pathway or target identification, the biochemistry of animal and plant physiological and pathological activities in response to diverse stimuli, and molecular signatures of host-pathogen interactions during microbial infection. In this review, we critically evaluate representative applications of mass spectrometry-based metabolomics to better understand gene function in diverse biological contexts, with special emphasis on working principles, study protocols, and possible future development of this technique. Collectively, this review raises awareness within the biomedical community of the scientific value and applicability of mass spectrometry-based metabolomics strategies to better understand gene function, thus advancing this application's utility in a broad range of biological fields.
Abstract:
Visitors to prisons have generally committed no crime themselves, but their interaction with inmates has been studied as a possible means of reducing recidivism. The way visitors’ centres are currently designed takes into consideration mainly security principles and the needs of guards or prison management. The human experience of the relatives or friends aiming to provide emotional support to inmates is usually not considered; facilities have been designed with an approach that often discourages people from visiting. This paper discusses possible principles for designing prison visitors’ centres that take into consideration not only practical needs but also human factors. A comparative case study analysis of other secure typologies, such as libraries, airports and children’s hospitals, provides suggestions about how to approach prison design in order to ensure that the visitor is not punished for the crimes of those they are visiting.
Abstract:
This paper presents a comprehensive formal security framework for key derivation functions (KDFs). The major security goal for a KDF is to produce cryptographic keys from a private seed value such that the derived keys are indistinguishable from random binary strings. We form a framework of five security models for KDFs. This consists of four security models that we propose: Known Public Inputs Attack (KPM, KPS), Adaptive Chosen Context Information Attack (CCM) and Adaptive Chosen Public Inputs Attack (CPM); and another security model, previously defined by Krawczyk [6], which we refer to as Adaptive Chosen Context Information Attack (CCS). These security models are simulated using an indistinguishability game. In addition, we prove the relationships between these five security models and analyse KDFs using the framework (in the random oracle model).
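To make the indistinguishability notion concrete, the sketch below sets up a toy distinguishing game in Python for a simple HMAC-based extract-then-expand KDF. The construction, names and salt value are illustrative assumptions, not the constructions or formal models analysed in the paper; it only shows the challenger/adversary structure that such a game follows.

```python
# Minimal sketch (not the paper's formal models): an indistinguishability
# "game" for a toy KDF. The challenger either returns KDF(seed, context) or
# a uniformly random string of equal length; the adversary must guess which.
import hmac, hashlib, os, secrets

def kdf(seed: bytes, context: bytes, length: int = 32) -> bytes:
    """Toy extract-then-expand KDF: prk = HMAC(salt, seed); okm = HMAC(prk, context || counter)."""
    prk = hmac.new(b"fixed-salt", seed, hashlib.sha256).digest()   # extract
    okm = b""
    counter = 1
    while len(okm) < length:                                       # expand
        okm += hmac.new(prk, context + bytes([counter]), hashlib.sha256).digest()
        counter += 1
    return okm[:length]

def challenge(seed: bytes, context: bytes) -> tuple[int, bytes]:
    """Challenger: flip a bit b; return the KDF output if b == 0, random bytes if b == 1."""
    b = secrets.randbits(1)
    out = kdf(seed, context) if b == 0 else os.urandom(32)
    return b, out

# A (hopeless) adversary that guesses at random; a secure KDF should keep any
# efficient adversary's winning probability negligibly close to 1/2.
seed = os.urandom(32)
b, out = challenge(seed, b"public-context-info")
guess = secrets.randbits(1)
print("adversary wins:", guess == b)
```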
Abstract:
The importance of applying unsaturated soil mechanics to geotechnical engineering design is well understood. However, the time required and the need for specialised laboratory testing apparatus when measuring unsaturated soil properties have limited the application of unsaturated soil mechanics theories in practice. Although methods for predicting unsaturated soil properties have been developed, the verification of these methods for a wide range of soil types is required in order to increase the confidence of practicing engineers in using them. In this study, a new permeameter was developed to measure the hydraulic conductivity of unsaturated soils using the steady-state method and directly measured suction (negative pore-water pressure) values. The apparatus is instrumented with two tensiometers for the direct measurement of suction during the tests, and can be used to obtain the hydraulic conductivity function of sandy soil over a low suction range (0-10 kPa). Firstly, the repeatability of the unsaturated hydraulic conductivity measurement using the new permeameter was verified by conducting tests on two identical sandy soil specimens and obtaining similar results. The hydraulic conductivity functions of the two sandy soils were then measured during the drying and wetting processes of the soils. A significant hysteresis was observed when the hydraulic conductivity was plotted against suction. However, the hysteresis effects were not apparent when the conductivity was plotted against the volumetric water content. Furthermore, the measured unsaturated hydraulic conductivity functions were compared with predictions from three different predictive methods that are widely incorporated into numerical software. The results suggest that these predictive methods are capable of capturing the measured behavior with reasonable agreement.
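As context for the steady-state method, the following minimal sketch shows how a point on the hydraulic conductivity function can be recovered from Darcy's law using the imposed flux and the two tensiometer readings; the function name, variable names and numbers are hypothetical and not taken from the study.

```python
# A minimal sketch of the steady-state calculation: Darcy's law q = -k * dH/dz,
# with total head H = z + h_p and pressure head h_p = -psi / gamma_w (psi = suction).
GAMMA_W = 9.81  # unit weight of water, kN/m^3

def unsat_conductivity(q, psi_top_kpa, psi_bot_kpa, dz):
    """Hydraulic conductivity (m/s) from steady flux q (m/s), suctions (kPa) at the
    upper and lower tensiometers, and their vertical separation dz (m)."""
    h_top = dz + (-psi_top_kpa / GAMMA_W)   # total head at upper tensiometer (datum at lower one)
    h_bot = 0.0 + (-psi_bot_kpa / GAMMA_W)  # total head at lower tensiometer
    gradient = (h_top - h_bot) / dz         # hydraulic gradient over the measuring section
    return abs(q / gradient)

# Example: downward flux of 1e-6 m/s, suctions of 4 and 5 kPa measured 0.1 m apart.
print(unsat_conductivity(q=1e-6, psi_top_kpa=4.0, psi_bot_kpa=5.0, dz=0.1))
```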
Abstract:
The question of whether poser race affects the happy categorization advantage, that is, the faster categorization of happy than of negative emotional expressions, has been answered inconsistently. Hugenberg (2005) found the happy categorization advantage only for own-race faces, whereas faster categorization of angry expressions was evident for other-race faces. Kubota and Ito (2007) found a happy categorization advantage for both own-race and other-race faces. These results have vastly different implications for understanding the influence of race cues on the processing of emotional expressions. The current study replicates the results of both prior studies and indicates that face type (computer-generated vs. photographic), presentation duration, and especially stimulus set size influence the happy categorization advantage as well as the moderating effect of poser race.
Abstract:
Utility functions in Bayesian experimental design are usually based on the posterior distribution. When the posterior is found by simulation, it must be sampled from for each future data set drawn from the prior predictive distribution. Many thousands of posterior distributions are often required. A popular technique in the Bayesian experimental design literature for rapidly obtaining samples from the posterior is importance sampling, using the prior as the importance distribution. However, importance sampling will tend to break down if there is more than a small number of experimental observations and/or the model parameter is high-dimensional. In this paper we explore the use of Laplace approximations in the design setting to overcome this drawback. Furthermore, we consider using the Laplace approximation to form the importance distribution, obtaining a more efficient importance distribution than the prior. The methodology is motivated by a pharmacokinetic study which investigates the effect of extracorporeal membrane oxygenation on the pharmacokinetics of antibiotics in sheep. The design problem is to find 10 near-optimal plasma sampling times which produce precise estimates of pharmacokinetic model parameters/measures of interest. We consider several different utility functions of interest in these studies, which involve the posterior distribution of parameter functions.
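The following is a minimal sketch, for a deliberately simple one-parameter model, of how a Laplace approximation can serve as the importance distribution in place of the prior; the model, function names and settings are assumptions for illustration rather than the pharmacokinetic models used in the paper.

```python
# Sketch: Laplace approximation of the posterior (mode + curvature) used as the
# importance distribution, with self-normalised importance weights.
import numpy as np
from scipy import optimize, stats

def log_posterior(theta, y):
    """Toy model: y_i ~ N(theta, 1) with a N(0, 10^2) prior on theta."""
    return (stats.norm.logpdf(y, loc=theta, scale=1.0).sum()
            + stats.norm.logpdf(theta, loc=0.0, scale=10.0))

def laplace_importance_sample(y, n_draws=1000):
    # 1. Laplace approximation: posterior mode and curvature at the mode.
    res = optimize.minimize_scalar(lambda t: -log_posterior(t, y))
    mode = res.x
    eps = 1e-4
    hess = -(log_posterior(mode + eps, y) - 2 * log_posterior(mode, y)
             + log_posterior(mode - eps, y)) / eps**2   # finite-difference curvature
    sd = 1.0 / np.sqrt(hess)
    # 2. Use N(mode, sd^2) as the importance distribution and reweight.
    draws = stats.norm.rvs(loc=mode, scale=sd, size=n_draws)
    log_w = np.array([log_posterior(t, y) for t in draws]) \
            - stats.norm.logpdf(draws, loc=mode, scale=sd)
    w = np.exp(log_w - log_w.max())
    return draws, w / w.sum()                           # self-normalised weights

y = np.random.normal(1.5, 1.0, size=20)
draws, w = laplace_importance_sample(y)
print("posterior mean estimate:", np.sum(w * draws))
```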
Abstract:
Multi-objective optimization has been performed for the design of a benchmark cogeneration system known as the CGAM cogeneration system. In the optimization approach, thermoeconomic and environmental aspects have been considered simultaneously. The environmental objective function has been defined and expressed in cost terms. One of the most suitable optimization techniques, based on a particular class of search algorithms known as the Multi-Objective Particle Swarm Optimization (MOPSO) algorithm, has been used here. This approach has been applied to find the set of Pareto optimal solutions with respect to the aforementioned objective functions. An example of fuzzy decision-making with the aid of the Bellman-Zadeh approach has been presented and a final optimal solution has been introduced.
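As background to the Pareto-set idea, the short sketch below implements the standard dominance test and non-dominated filtering for two objectives to be minimised; the numbers and names are illustrative and this is not the MOPSO implementation used in the study.

```python
# Minimal sketch of Pareto dominance, the test used to maintain an archive of
# non-dominated solutions when both objectives (e.g. thermoeconomic cost and
# environmental cost) are to be minimised.
def dominates(a, b):
    """True if objective vector a dominates b (no worse in all objectives, better in at least one)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Example: (thermoeconomic cost, environmental cost) pairs for candidate designs.
candidates = [(10.0, 4.0), (8.0, 6.0), (9.0, 5.0), (12.0, 3.0), (11.0, 5.0)]
print(pareto_front(candidates))   # (11.0, 5.0) is dominated by (10.0, 4.0) and dropped
```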
Abstract:
Historically, children in criminal justice proceedings were treated much the same as adults and subjected to the same criminal justice processes. Until the early twentieth century, children in Australia were even subjected to the same penalties as adults, including hard labour and corporal and capital punishment (Carrington & Pereira 2009). Until the mid-nineteenth century, there was no separate category of ‘juvenile offender’ in Western legal systems and children as young as six years of age were incarcerated in Australian prisons (Cunneen & White 2007). It is widely acknowledged today, however, both in Australia and internationally, that juveniles should be subject to a system of criminal justice that is separate from the adult system and that recognises their inexperience and immaturity. As such, juveniles are typically dealt with separately from adults and treated less harshly than their adult counterparts. The United Nations’ (1985: 2) Standard Minimum Rules for the Administration of Juvenile Justice (the ‘Beijing Rules’) stress the importance of nations establishing a set of laws, rules and provisions specifically applicable to juvenile offenders, and institutions and bodies entrusted with the functions of the administration of juvenile justice, designed to meet the varying needs of juvenile offenders while protecting their basic rights. In each Australian jurisdiction except Queensland, a juvenile is defined as a person aged between 10 and 17 years, inclusive. In Queensland, a juvenile is defined as a person aged between 10 and 16 years, inclusive. In all jurisdictions, the minimum age of criminal responsibility is 10 years. That is, children under 10 years of age cannot be held legally responsible for their actions.
Abstract:
Gemcitabine is indicated in combination with cisplatin as first-line therapy for solid tumours including non-small cell lung cancer (NSCLC), bladder cancer and mesothelioma. Gemcitabine is an analogue of the pyrimidine cytosine and functions as an anti-metabolite. Structurally, however, gemcitabine has similarities to 5-aza-2′-deoxycytidine (decitabine/Dacogen®), a DNA methyltransferase inhibitor (DNMTi). NSCLC, mesothelioma and prostate cancer cell lines were treated with decitabine and gemcitabine. Reactivation of epigenetically silenced genes was examined by RT-PCR/qPCR. DNA methyltransferase activity in nuclear extracts and recombinant proteins was measured using a DNA methyltransferase assay, and alterations in DNA methylation status were examined using methylation-specific PCR (MS-PCR) and pyrosequencing. We observe a reactivation of several epigenetically silenced genes including GSTP1, IGFBP3 and RASSF1A. Gemcitabine functionally inhibited DNA methyltransferase activity in both nuclear extracts and recombinant proteins. Gemcitabine dramatically destabilised DNMT1 protein. However, DNA CpG methylation was for the most part unaffected by gemcitabine. In conclusion, gemcitabine both inhibits and destabilises DNA methyltransferases and reactivates epigenetically silenced genes, with activity equivalent to decitabine at concentrations significantly lower than those achieved in the treatment of patients with solid tumours. This property may contribute to the anticancer activity of gemcitabine.
Abstract:
In eukaryotes, numerous complex sub-cellular structures exist, the majority of which are delineated by membranes. Many proteins are trafficked to these structures in order to carry out their correct physiological functions. Assigning the sub-cellular location of a protein is of paramount importance to biologists in elucidating its role and in refining knowledge of cellular processes by tracing certain activities to specific organelles. Membrane proteins are a key set of proteins, as they form part of the boundary of the organelles and perform many important functions, acting as transporters, receptors, and trafficking components. They are, however, some of the most challenging proteins to work with due to poor solubility, a wide concentration range within the cell, and inaccessibility to many of the tools employed in proteomics studies. This review focuses on membrane proteins, with particular emphasis on sub-cellular localization in terms of the methodologies that can be used to determine the accurate location of membrane proteins within organelles. We also discuss what is known about the membrane protein cohorts of major organelles.
Abstract:
Monitoring stream networks through time provides important ecological information. The sampling design problem is to choose locations where measurements are taken so as to maximise information gathered about physicochemical and biological variables on the stream network. This paper uses a pseudo-Bayesian approach, averaging a utility function over a prior distribution, to find a design which maximises the average utility. We use models for correlations of observations on the stream network that are based on stream network distances and described by moving average error models. The utility functions used reflect the needs of the experimenter, such as prediction of location values or estimation of parameters. We propose an algorithmic approach to design in which the mean utility of a design is estimated using Monte Carlo techniques and an exchange algorithm is used to search for optimal sampling designs. In particular we focus on the problem of finding an optimal design from a set of fixed designs and finding an optimal subset of a given set of sampling locations. As there are many different variables to measure, such as chemical, physical and biological measurements at each location, designs are derived from models based on different types of response variables: continuous, counts and proportions. We apply the methodology to a synthetic example and the Lake Eacham stream network on the Atherton Tablelands in Queensland, Australia. We show that the optimal designs depend very much on the choice of utility function, varying from space filling to clustered designs and mixtures of these, but given the utility function, designs are relatively robust to the type of response variable.
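The sketch below illustrates the exchange-algorithm idea in a stripped-down form: the utility shown is a stand-in for a Monte Carlo estimate, and all names and values are hypothetical rather than taken from the stream network application.

```python
# Minimal sketch of a point-exchange search: start from a random subset of the
# candidate sampling locations, repeatedly try swapping a design point for an
# unused candidate, and keep the swap if the estimated utility improves.
import random

def exchange_design(candidates, k, estimate_utility, n_passes=5, seed=0):
    rng = random.Random(seed)
    design = rng.sample(candidates, k)
    best_u = estimate_utility(design)
    for _ in range(n_passes):
        improved = False
        for i in range(k):
            for cand in candidates:
                if cand in design:
                    continue
                trial = design[:i] + [cand] + design[i + 1:]
                u = estimate_utility(trial)
                if u > best_u:
                    design, best_u, improved = trial, u, True
        if not improved:
            break
    return design, best_u

# Toy utility: prefer spread-out locations on a line (a stand-in for a Monte
# Carlo estimate averaged over the prior and simulated data sets).
candidates = [i / 20 for i in range(21)]
design, u = exchange_design(candidates, k=4,
                            estimate_utility=lambda d: min(abs(a - b) for a in d for b in d if a != b))
print(sorted(design), u)
```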
Abstract:
Textual document sets have become an important and rapidly growing information source on the web. Text classification is one of the crucial technologies for information organisation and management, and it has attracted wide attention from researchers in different research fields. In this paper, feature selection methods, implementation algorithms and applications of text classification are first introduced. However, because there is much noise in the knowledge extracted by current data-mining techniques for text classification, considerable uncertainty arises in the classification process, originating from both the knowledge extraction and the knowledge usage; more innovative techniques and methods are therefore needed to improve the performance of text classification. Further improving the process of knowledge extraction and the effective utilization of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed, using Rough Set decision techniques to classify more precisely those textual documents which are difficult to separate by classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and the decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an evaluation metric named CEI, which is effective for performance assessment of similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and other related fields.
Abstract:
It is widely acknowledged that effective asset management requires an interdisciplinary approach, in which synergies should exist between traditional disciplines such as accounting, engineering, finance, humanities, logistics, and information systems technologies. Asset management is also an important, yet complex, business practice. Business process modelling is proposed as an approach to manage the complexity of asset management through the modelling of asset management processes. A sound foundation for the systematic application and analysis of business process modelling in asset management is, however, yet to be developed. Fundamentally, a business process consists of activities (termed functions), events/states, and control flow logic. As both events/states and control flow logic are somewhat dependent on the functions themselves, it is a logical step to first identify the functions within a process. This research addresses the current gap in knowledge by developing a method to identify functions common to various industry types (termed core functions). This lays the foundation for extracting such functions, so as to identify both commonalities and variation points in asset management processes. The method uses manual text mining and a taxonomy approach. An example is presented.
Abstract:
Many cell types form clumps or aggregates when cultured in vitro through a variety of mechanisms including rapid cell proliferation, chemotaxis, or direct cell-to-cell contact. In this paper we develop an agent-based model to explore the formation of aggregates in cultures where cells are initially distributed uniformly, at random, on a two-dimensional substrate. Our model includes unbiased random cell motion, together with two mechanisms which can produce cell aggregates: (i) rapid cell proliferation, and (ii) a biased cell motility mechanism where cells can sense other cells within a finite range, and will tend to move towards areas with higher numbers of cells. We then introduce a pair-correlation function which allows us to quantify aspects of the spatial patterns produced by our agent-based model. In particular, these pair-correlation functions are able to detect differences between domains populated uniformly at random (i.e. at the exclusion complete spatial randomness (ECSR) state) and those where the proliferation and biased motion rules have been employed, even when such differences are not obvious to the naked eye. The pair-correlation function can also detect the emergence of a characteristic inter-aggregate distance which occurs when the biased motion mechanism is dominant, and is not observed when cell proliferation is the main mechanism of aggregate formation. This suggests that applying the pair-correlation function to experimental images of cell aggregates may provide information about the mechanism associated with observed aggregates. As a proof of concept, we perform such an analysis for images of cancer cell aggregates, which are known to be associated with rapid proliferation. The results of our analysis are consistent with the predictions of the proliferation-based simulations, which supports the potential usefulness of pair-correlation functions for providing insight into the mechanisms of aggregate formation.
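For readers unfamiliar with the tool, the following is a minimal sketch of a planar pair-correlation estimator; the binning, normalisation and example values are illustrative assumptions and are simpler than the estimator used in the paper (edge effects, for instance, are ignored).

```python
# Sketch: counts of pairwise separations in distance bins, normalised by the
# counts expected under complete spatial randomness, so values near 1 indicate
# no structure and values above 1 at short range indicate clustering/aggregation.
import numpy as np

def pair_correlation(points, domain_area, r_max, n_bins=25):
    points = np.asarray(points, dtype=float)
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dists = d[np.triu_indices(n, k=1)]                    # unique pairwise distances
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(dists, bins=edges)
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    dr = edges[1] - edges[0]
    expected = (n * (n - 1) / 2) * (2 * np.pi * r_mid * dr) / domain_area
    return r_mid, counts / expected                       # edge corrections ignored

# Example: uniformly random points should give values scattered around 1.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(400, 2))
r, g = pair_correlation(pts, domain_area=100 * 100, r_max=20)
print(np.round(g, 2))
```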
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant techniques for developing emulators have been priors in the form of Gaussian stochastic processes (GASP) that were conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there is a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept of developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge about the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by the application to a simple hydrological model.
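As an illustration of the conditioning step, the sketch below runs a Kalman filter followed by Rauch-Tung-Striebel smoothing for a scalar linear Gaussian state-space model; the GP innovation terms of the full emulator are omitted, and the model and values are hypothetical rather than the hydrological example in the paper.

```python
# Minimal sketch of conditioning a linear Gaussian state-space prior on data
# by Kalman filtering and RTS smoothing (scalar state, hypothetical values).
import numpy as np

def kalman_smooth(y, a, q, r, x0=0.0, p0=1.0):
    """x_t = a*x_{t-1} + w_t, w_t ~ N(0, q);  y_t = x_t + v_t, v_t ~ N(0, r)."""
    T = len(y)
    xf, pf = np.zeros(T), np.zeros(T)          # filtered means / variances
    xp, pp = np.zeros(T), np.zeros(T)          # one-step predictions
    x_prev, p_prev = x0, p0
    for t in range(T):
        xp[t], pp[t] = a * x_prev, a * a * p_prev + q      # predict
        k = pp[t] / (pp[t] + r)                            # Kalman gain
        xf[t] = xp[t] + k * (y[t] - xp[t])                 # update
        pf[t] = (1.0 - k) * pp[t]
        x_prev, p_prev = xf[t], pf[t]
    xs, ps = xf.copy(), pf.copy()              # RTS backward pass
    for t in range(T - 2, -1, -1):
        j = pf[t] * a / pp[t + 1]
        xs[t] = xf[t] + j * (xs[t + 1] - xp[t + 1])
        ps[t] = pf[t] + j * j * (ps[t + 1] - pp[t + 1])
    return xs, ps

# Noisy observations of a slowly decaying signal.
rng = np.random.default_rng(0)
truth = 2.0 * 0.9 ** np.arange(30)
y = truth + rng.normal(0, 0.3, size=30)
xs, ps = kalman_smooth(y, a=0.9, q=0.01, r=0.09)
print(np.round(xs[:5], 2))
```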