224 results for Pseudo-Riemannian metric
Abstract:
Membrane proteins play important roles in many biochemical processes and are also attractive targets of drug discovery for various diseases. The elucidation of membrane protein types provides clues for understanding the structure and function of proteins. Recently, we developed a novel system for predicting protein subnuclear localizations. In this paper, we propose a simplified version of our system for predicting membrane protein types directly from primary protein structures, which incorporates amino acid classifications and physicochemical properties into a general form of pseudo-amino acid composition. In this simplified system, we design a two-stage multi-class support vector machine combined with a two-step optimal feature selection process, which proves very effective in our experiments. The performance of the present method is evaluated on two benchmark datasets consisting of five types of membrane proteins. The overall accuracies of prediction for the five types are 93.25% and 96.61% via the jackknife test and the independent dataset test, respectively. These results indicate that our method is effective and valuable for predicting membrane protein types. A web server for the proposed method is available at http://www.juemengt.com/jcc/memty_page.php
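A minimal sketch of the general idea follows: a pseudo-amino acid composition feature vector fed to a support vector machine. It assumes a single hydrophobicity property with an illustrative weight and lambda; the paper's actual system combines several amino acid classifications and physicochemical properties with a two-stage SVM and two-step feature selection, none of which is reproduced here.

```python
# Sketch of pseudo-amino acid composition (PseAAC) features + SVM.
# The property table, weight w and lambda are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"
# Illustrative Kyte-Doolittle hydrophobicity values, standardised below.
HYDRO = dict(zip(AA, [1.8, 2.5, -3.5, -3.5, 2.8, -0.4, -3.2, 4.5, -3.9, 3.8,
                      1.9, -3.5, -1.6, -3.5, -4.5, -0.8, -0.7, 4.2, -0.9, -1.3]))

def pseaac(seq, lam=5, w=0.05):
    """20 amino-acid frequencies plus lam sequence-order correlation factors."""
    h = np.array([HYDRO[a] for a in seq])
    h = (h - h.mean()) / h.std()                       # standardise the property
    freq = np.array([seq.count(a) for a in AA]) / len(seq)
    theta = np.array([np.mean((h[:-k] - h[k:]) ** 2) for k in range(1, lam + 1)])
    denom = 1.0 + w * theta.sum()
    return np.concatenate([freq / denom, w * theta / denom])

# Toy usage with fake sequences and labels; a real pipeline would wrap this
# in feature selection, cross-validation and a multi-class scheme.
X = np.array([pseaac("ACDEFGHIKLMNPQRSTVWYACDEFG"),
              pseaac("MNPQRSTVWYACDEFGHIKLMNPQRS")])
y = np.array([0, 1])
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X))
```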
Abstract:
Additive manufacturing forms a potential route towards economically viable production of cellular constructs for tissue engineering. Hydrogels are a suitable class of materials for cell delivery and 3D culture, but are generally unsuitable as construction materials. Gelatine-methacrylamide is an example of such a hydrogel system widely used in the field of tissue engineering, e.g. for cartilage and cardiovascular applications. Here we show that, by adding gellan gum to gelatine-methacrylamide and tailoring salt concentrations, rheological properties such as pseudo-plasticity and yield stress can be optimised for gel dispensing in additive manufacturing processes. In the hydrogel formulation, salt is partly substituted by mannose to obtain isotonicity and prevent a reduction in cell viability. With this, the potential of this new bioink for additive tissue manufacturing is demonstrated.
Abstract:
Mixed-convection laminar two-dimensional boundary-layer flow of non-Newtonian pseudo-plastic fluids over a horizontal circular cylinder with uniform surface heat flux is investigated using a modified power-law viscosity model that contains no unrealistic limits of zero or infinite viscosity; consequently, no irremovable singularities are introduced into the boundary-layer formulation for such fluids. The governing boundary-layer equations are transformed into non-dimensional form, and the resulting nonlinear system of partial differential equations is solved numerically by applying a marching-order implicit finite-difference method with a double-sweep technique. Numerical results are presented for the case of shear-thinning fluids in terms of the fluid temperature distributions and the rate of heat transfer, expressed as the local Nusselt number.
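The key property of a modified power-law model can be sketched in a few lines: a standard power law at intermediate shear rates, capped by finite viscosity plateaus at the extremes so that viscosity never reaches zero or infinity. The constants K, n and the cutoff shear rates below are illustrative, not taken from the paper.

```python
# Sketch of a modified power-law apparent viscosity for a shear-thinning
# (n < 1) fluid. K, n, gamma_lo and gamma_hi are illustrative values.
import numpy as np

def modified_power_law(gamma, K=1.0, n=0.6, gamma_lo=1e-3, gamma_hi=1e3):
    """Apparent viscosity mu = K * gamma**(n - 1) with finite plateaus."""
    gamma = np.clip(np.asarray(gamma, dtype=float), gamma_lo, gamma_hi)
    return K * gamma ** (n - 1.0)

shear = np.logspace(-6, 6, 7)
print(modified_power_law(shear))  # finite viscosity at both extremes
```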
Abstract:
The richness of the iris texture and its variability across individuals make it a useful biometric trait for personal authentication. One of the key stages in classical iris recognition is the normalization process, where the annular iris region is mapped to a dimensionless pseudo-polar coordinate system. This process results in a rectangular structure that can be used to compensate for differences in scale and variations in pupil size. Most iris recognition methods in the literature adopt linear sampling in the radial and angular directions when performing iris normalization. In this paper, a biomechanical model of the iris is used to define a novel nonlinear normalization scheme that improves iris recognition accuracy under different degrees of pupil dilation. The proposed biomechanical model is used to predict the radial displacement of any point in the iris at a given dilation level, and this information is incorporated in the normalization process. Experimental results on the WVU pupil light reflex database (WVU-PLR) indicate the efficacy of the proposed technique, especially when matching iris images with large differences in pupil size.
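A minimal sketch of the classical linear ("rubber-sheet") normalisation that the paper improves upon is shown below, assuming known pupil and iris centres and radii. The paper's biomechanical model would replace the linear radial blend with a model-predicted radial displacement at the given dilation level.

```python
# Sketch of linear pseudo-polar iris normalisation. Centres and radii are
# assumed known; grid sizes are illustrative.
import numpy as np

def normalize_iris(img, cx, cy, r_pupil, r_iris, n_radial=64, n_angular=256):
    """Sample the annulus between pupil and iris boundaries onto a rectangle."""
    theta = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    r = np.linspace(0, 1, n_radial)                    # 0 = pupil, 1 = limbus
    R, T = np.meshgrid(r, theta, indexing="ij")
    radius = r_pupil + R * (r_iris - r_pupil)          # linear radial sampling
    x = np.clip((cx + radius * np.cos(T)).round().astype(int), 0, img.shape[1] - 1)
    y = np.clip((cy + radius * np.sin(T)).round().astype(int), 0, img.shape[0] - 1)
    return img[y, x]                                   # n_radial x n_angular strip

# Toy usage on a synthetic image.
img = np.random.rand(480, 640)
print(normalize_iris(img, cx=320, cy=240, r_pupil=40, r_iris=110).shape)
```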
Abstract:
Gene expression is arguably the most important indicator of biological function. Thus, identifying differentially expressed genes is one of the main aims of high-throughput studies that use microarray and RNAseq platforms to study deregulated cellular pathways. There are many tools for analysing differential gene expression from transcriptomic datasets. The major challenge of this topic is to estimate gene expression variance, owing to the high amount of ‘background noise’ generated by biological equipment and the lack of biological replicates. Bayesian inference has been widely used in the bioinformatics field. In this work, we reveal that the prior knowledge employed in the Bayesian framework also helps to improve the accuracy of differential gene expression analysis when using a small number of replicates. We have developed a differential analysis tool that uses Bayesian estimation of the variance of gene expression for use with small numbers of biological replicates. Our method is more consistent than the widely used Cyber-T tool, which successfully introduced the Bayesian framework to differential analysis. We also provide a user-friendly, web-based graphical user interface for biologists to use with microarray and RNAseq data. Bayesian inference can compensate for the instability of variance estimates caused by a small number of biological replicates by using pseudo-replicates as prior knowledge. We also show that our new strategy for selecting pseudo-replicates improves the performance of the analysis.
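The underlying idea can be sketched in the style of the Cyber-T regularised variance: shrink each gene's sample variance toward a background estimate obtained from pseudo-replicates (here, genes of similar mean expression). The window size and prior weight v0 below are illustrative choices, not the tool's defaults.

```python
# Sketch of a Bayesian (Cyber-T-style) moderated variance for few replicates.
# Window size and prior weight v0 are illustrative assumptions.
import numpy as np

def moderated_variance(X, v0=10, window=50):
    """X: genes x replicates. Returns per-gene variances shrunk to background."""
    n = X.shape[1]
    s2 = X.var(axis=1, ddof=1)                  # per-gene sample variance
    order = np.argsort(X.mean(axis=1))          # rank genes by mean expression
    bg = np.empty_like(s2)
    for rank, g in enumerate(order):            # background from neighbours
        lo, hi = max(0, rank - window), min(len(order), rank + window)
        bg[g] = s2[order[lo:hi]].mean()
    return (v0 * bg + (n - 1) * s2) / (v0 + n - 2)

X = np.random.rand(1000, 3)                     # toy data: 1000 genes, 3 replicates
print(moderated_variance(X)[:5])
```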
Abstract:
Estimating the economic burden of injuries is important for setting priorities, allocating scarce health resources and planning cost-effective prevention activities. As a metric of burden, costs account for multiple injury consequences—death, severity, disability, body region, nature of injury—in a single unit of measurement. In a 1989 landmark report to the US Congress, Rice et al.1 estimated the lifetime costs of injuries in the USA in 1985. By 2000, the epidemiology and burden of injuries had changed enough that the US Congress mandated an update, resulting in a book on the incidence and economic burden of injury in the USA.2 To make these findings more accessible to the larger realm of scientists and practitioners and to provide a template for conducting the same economic burden analyses in other countries and settings, a summary3 was published in Injury Prevention. Corso et al reported that, between 1985 and 2000, injury rates declined roughly 15%. The estimated lifetime cost of these injuries declined 20%, totalling US$406 billion, including US$80 billion in medical costs and US$326 billion in lost productivity. While incidence reflects problem size, the relative burden of injury is better expressed using costs.
Abstract:
Ship seakeeping operability refers to the quantification of motion performance in waves relative to mission requirements. This is used to make decisions about preferred vessel designs, but it can also be used as a comprehensive assessment of the benefits of ship-motion-control systems. Traditionally, operability computation aggregates statistics of motion computed over the envelope of likely environmental conditions in order to determine a coefficient in the range from 0 to 1 called operability. When used for the assessment of motion-control systems, the increase in operability is taken as the key performance indicator. The operability coefficient is often given the interpretation of the percentage of time operable. This paper considers an alternative probabilistic approach to this traditional computation of operability. It characterises operability not as a number to which a frequency interpretation is attached, but as a hypothesis that a vessel will attain the desired performance in one mission considering the envelope of likely operational conditions. This enables the use of Bayesian theory to compute the probability that this hypothesis is true conditional on data from simulations. Thus, the metric considered is the probability of operability. This formulation not only adheres to recent developments in reliability and risk analysis, but also allows incorporating into the analysis more accurate descriptions of ship-motion-control systems, since the analysis is not limited to linear ship responses in the frequency domain. The paper also discusses an extension of the approach to the assessment of increased levels of autonomy for unmanned marine craft.
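The Bayesian reading of operability can be sketched with a conjugate Beta-Binomial update over pass/fail outcomes from mission simulations; the prior and the simulated outcomes below are illustrative stand-ins, not the paper's simulation model.

```python
# Sketch: operability as a hypothesis updated with simulation outcomes.
# The Beta(1, 1) prior and the synthetic pass/fail data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
outcomes = rng.random(200) < 0.8        # stand-in for 200 simulated missions
k, n = outcomes.sum(), outcomes.size

a, b = 1.0, 1.0                         # uniform Beta(1, 1) prior
a_post, b_post = a + k, b + n - k       # conjugate Beta posterior

# Posterior probability that the next mission attains the desired performance:
p_operability = a_post / (a_post + b_post)
print(f"P(operable) = {p_operability:.3f}")
```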
Abstract:
Purpose The post-illumination pupil response (PIPR) has been quantified using four metrics, but the spectral sensitivity of only one is known; here we determine the other three. To optimize the human PIPR measurement, we determine the protocol producing the largest PIPR, the duration of the PIPR, and the metric(s) with the lowest coefficient of variation. Methods The consensual pupil light reflex (PLR) was measured with a Maxwellian view pupillometer. - Experiment 1: Spectral sensitivity of four PIPR metrics [plateau, 6 s, area under curve (AUC) early and late recovery] was determined from a criterion PIPR to a 1 s pulse and fitted with a Vitamin A1 nomogram (λmax = 482 nm). - Experiment 2: The PLR was measured as a function of three stimulus durations (1 s, 10 s, 30 s), five irradiances spanning low to high melanopsin excitation levels (retinal irradiance: 9.8 to 14.8 log quanta.cm-2.s-1), and two wavelengths, one with high (465 nm) and one with low (637 nm) melanopsin excitation. Intra- and inter-individual coefficients of variation (CV) were calculated. Results The melanopsin (opn4) photopigment nomogram adequately describes the spectral sensitivity of all four PIPR metrics. The PIPR amplitude was largest with 1 s short wavelength pulses (≥ 12.8 log quanta.cm-2.s-1). The plateau and 6 s PIPR showed the least intra- and inter-individual CV (≤ 0.2). The maximum duration of the sustained PIPR was 83.0 ± 48.0 s (mean ± SD) for 1 s pulses and 180.1 ± 106.2 s for 30 s pulses (465 nm; 14.8 log quanta.cm-2.s-1). Conclusions All current PIPR metrics provide a direct measure of the intrinsic melanopsin photoresponse. To measure progressive changes in melanopsin function in disease, we recommend that the PIPR be measured using short-duration pulses (e.g., ≤ 1 s) with high melanopsin excitation and analyzed with the plateau and/or 6 s metrics. Our PIPR duration data provide a baseline for the selection of inter-stimulus intervals between consecutive pupil testing sequences.
Abstract:
Purpose The post-illumination pupil response (PIPR) has been quantified in the literature by four metrics. The spectral sensitivity of only one metric is known, and this study quantifies the other three. To optimize the measurement of the PIPR in humans, we also determine the stimulus protocol producing the largest PIPR, the duration of the PIPR, and the metric(s) with the lowest coefficient of variation. Methods The consensual pupil light reflex (PLR) was measured with a Maxwellian view pupillometer (35.6° diameter stimulus). - Experiment 1: Spectral sensitivity of four PIPR metrics [plateau, 6 s, area under curve (AUC) early and late recovery] was determined from a criterion PIPR (n = 2 participants) to a 1 s pulse at five wavelengths (409-592 nm) and fitted with a Vitamin A nomogram (λmax = 482 nm). - Experiment 2: The PLR was measured in five healthy participants [29 to 42 years (mean = 32.6 years)] as a function of three stimulus durations (1 s, 10 s, 30 s), five irradiances spanning low to high melanopsin excitation levels (retinal irradiance: 9.8 to 14.8 log quanta.cm-2.s-1), and two wavelengths, one with high (465 nm) and one with low (637 nm) melanopsin excitation. Intra- and inter-individual coefficients of variation (CV) were calculated. Results The melanopsin (opn4) photopigment nomogram adequately described the spectral sensitivity derived from all four PIPR metrics. The largest PIPR amplitude was observed with 1 s short wavelength pulses (retinal irradiance ≥ 12.8 log quanta.cm-2.s-1). Of the four PIPR metrics, the plateau and 6 s PIPR showed the least intra- and inter-individual CV (≤ 0.2). The maximum duration of the sustained PIPR was 83.4 ± 48.0 s (mean ± SD) for 1 s pulses and 180.1 ± 106.2 s for 30 s pulses (465 nm; 14.8 log quanta.cm-2.s-1). Conclusions All current PIPR metrics provide a direct measure of intrinsic melanopsin retinal ganglion cell function. To measure progressive changes in melanopsin function in disease, we recommend that the intrinsic melanopsin response be measured using a 1 s pulse with high melanopsin excitation and that the PIPR be analyzed with the plateau and/or 6 s metrics. Given that the PIPR can remain constricted for as long as 3 minutes, our PIPR duration data provide a baseline for the selection of inter-stimulus intervals between consecutive pupil testing sequences.
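A minimal sketch of the four PIPR metrics named in these abstracts, computed from a baseline-normalised pupil trace after stimulus offset, is shown below. The analysis windows (6 s time point, a 10-30 s plateau window, an early/late AUC split at 10 s) are illustrative assumptions, since exact windows vary between laboratories.

```python
# Sketch of the plateau, 6 s, and early/late AUC PIPR metrics.
# The analysis windows are illustrative, not the authors' protocol.
import numpy as np

def trapz(y, x):
    """Trapezoidal integral (kept local to avoid NumPy version differences)."""
    return float(((y[1:] + y[:-1]) / 2 * np.diff(x)).sum())

def pipr_metrics(t, d_norm):
    """t: seconds after light offset; d_norm: pupil diameter / baseline diameter."""
    c = 1.0 - d_norm                                  # constriction from baseline
    early, late = t <= 10, (t > 10) & (t <= 30)
    return dict(
        six_s=float(np.interp(6.0, t, c)),            # 6 s PIPR
        plateau=float(c[late].mean()),                # sustained plateau level
        auc_early=trapz(c[early], t[early]),
        auc_late=trapz(c[late], t[late]),
    )

t = np.linspace(0, 30, 301)                           # toy re-dilation trace
print(pipr_metrics(t, 1.0 - 0.4 * np.exp(-t / 20)))
```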
Abstract:
This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and to decide which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty seems not to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework through which these predictive probabilities can be obtained, which is Bayesian in nature. As an illustrative example, we apply the framework to the assessment of the performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
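Under a conjugate Beta-Binomial model, the proposed metric can be sketched as the posterior predictive probability that an algorithm's next outcome is correct, computed separately for target and background detections. The prior and the test counts below are hypothetical.

```python
# Sketch: posterior predictive probability of a correct outcome under a
# Beta(a, b) prior. Test counts are made-up illustrations.
def predictive_correct(successes, trials, a=1.0, b=1.0):
    """P(next outcome correct | test data) by Beta-Binomial conjugacy."""
    return (a + successes) / (a + b + trials)

# Hypothetical test results for one capsicum-detection algorithm:
print(predictive_correct(successes=87, trials=100))    # target detections
print(predictive_correct(successes=940, trials=1000))  # background rejections
```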
Abstract:
A 'pseudo-Bayesian' interpretation of standard errors yields a natural induced smoothing of statistical estimating functions. When applied to rank estimation, the lack of smoothness which prevents standard error estimation is remedied. Efficiency and robustness are preserved, while the smoothed estimation has excellent computational properties. In particular, convergence of the iterative equation for standard error is fast, and standard error calculation becomes asymptotically a one-step procedure. This property also extends to covariance matrix calculation for rank estimates in multi-parameter problems. Examples, and some simple explanations, are given.
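The induced-smoothing idea can be sketched on the simplest rank-type estimating function, the sign (median) score: the non-smooth indicator is replaced by a normal CDF whose bandwidth is tied to the estimator's own standard error, making iteration and standard-error computation tractable. The one-parameter location setting and the bandwidth choice below are illustrative.

```python
# Sketch of induced smoothing for a sign-score estimating equation:
# sum(sign(x_i - theta)) = 0 becomes sum(Phi((x_i - theta) / h)) = n / 2.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.standard_normal(100) + 0.5      # data with an unknown location shift

def smoothed_sign_score(theta, h):
    """Smoothed version of the sign score, decreasing in theta."""
    return norm.cdf((x - theta) / h).sum() - len(x) / 2.0

h = 1.0 / np.sqrt(len(x))               # bandwidth ~ estimator's standard error
lo, hi = -5.0, 5.0                      # solve the smooth equation by bisection
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if smoothed_sign_score(mid, h) > 0 else (lo, mid)
print("smoothed median estimate:", (lo + hi) / 2)
```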
Abstract:
In this paper, we tackle the problem of unsupervised domain adaptation for classification. In the unsupervised scenario, where no labeled samples from the target domain are provided, a popular approach consists in transforming the data such that the source and target distributions become similar. To compare the two distributions, existing approaches make use of the Maximum Mean Discrepancy (MMD). However, this does not exploit the fact that probability distributions lie on a Riemannian manifold. Here, we propose to make better use of the structure of this manifold and rely on the distance on the manifold to compare the source and target distributions. In this framework, we introduce a sample selection method and a subspace-based method for unsupervised domain adaptation, and show that both these manifold-based techniques outperform the corresponding approaches based on the MMD. Furthermore, we show that our subspace-based approach yields state-of-the-art results on a standard object recognition benchmark.
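For reference, here is a minimal sketch of the (biased) empirical MMD estimator that the manifold-based criteria are compared against, using an RBF kernel; the median-heuristic bandwidth is a common convention, not necessarily the paper's setting.

```python
# Sketch of the biased empirical squared MMD with an RBF kernel.
# The median-heuristic bandwidth is an illustrative convention.
import numpy as np

def mmd2_rbf(Xs, Xt, gamma=None):
    """Squared MMD between source Xs and target Xt (rows = samples)."""
    Z = np.vstack([Xs, Xt])
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    if gamma is None:
        gamma = 1.0 / np.median(d2[d2 > 0])               # median heuristic
    K = np.exp(-gamma * d2)
    n = len(Xs)
    return K[:n, :n].mean() - 2 * K[:n, n:].mean() + K[n:, n:].mean()

rng = np.random.default_rng(0)
print(mmd2_rbf(rng.normal(0, 1, (50, 3)), rng.normal(1, 1, (50, 3))))
```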
Abstract:
Traffic incidents are recognised as one of the key sources of non-recurrent congestion that often leads to a reduction in travel time reliability (TTR), a key metric of roadway performance. A method is proposed here to quantify the impacts of traffic incidents on TTR on freeways. The method uses historical data to establish recurrent speed profiles and identifies non-recurrent congestion based on its negative impacts on speeds. The locations and times of incidents are used to identify incidents among non-recurrent congestion events. Buffer time is employed to measure TTR. Extra buffer time is defined as the extra delay caused by traffic incidents. This reliability measure indicates how much extra travel time travellers require to arrive at their destination on time with 95% certainty in the case of an incident, over and above the travel time that would have been required under recurrent conditions. An extra buffer time index (EBTI) is defined as the ratio of extra buffer time to recurrent travel time, with zero being the best case (no delay). A Tobit model is used to identify and quantify factors that affect EBTI using a selected freeway segment in the Southeast Queensland (Australia) network. Both fixed- and random-parameter Tobit specifications are tested. The estimation results reveal that models with random parameters offer a superior statistical fit for all types of incidents, suggesting the presence of unobserved heterogeneity across segments. Which factors influence EBTI depends on the type of incident. In addition, changes in TTR as a result of traffic incidents are related to the characteristics of the incidents (multiple vehicles involved, incident duration, major incidents, etc.) and to traffic characteristics.
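The EBTI computation defined above can be sketched directly, taking buffer time as the gap between the 95th-percentile and mean travel times; the travel-time samples below are synthetic.

```python
# Sketch of extra buffer time and EBTI from travel-time samples.
# The lognormal travel-time distributions are synthetic stand-ins.
import numpy as np

def buffer_time(tt):
    """Buffer time: 95th-percentile travel time minus mean travel time."""
    return np.percentile(tt, 95) - np.mean(tt)

rng = np.random.default_rng(2)
recurrent = rng.lognormal(mean=3.0, sigma=0.10, size=5000)   # minutes
incident = rng.lognormal(mean=3.1, sigma=0.25, size=5000)

extra_buffer = buffer_time(incident) - buffer_time(recurrent)
ebti = extra_buffer / np.mean(recurrent)
print(f"EBTI = {ebti:.3f}")   # 0 would mean incidents add no unreliability
```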
Abstract:
This 'project' investigates Janet Cardiff's Whispering Room. It examines how Cardiff deconstructs the privileging of the visual over all other corporeal senses in her work, the Whispering Room. Using sound as a fulcrum, Cardiff explores the links between subjects, collective narratives, memories, experiences and performances. Janet Cardiff destabilizes time and space and fractures the continuum through the use of sound. My 'project' celebrates sound as a transgressive medium — sound not as a gendered medium but as a vehicle in which to speak (to) gender. It explores how sound can destabilize notions of perception and reception and question art and museal practices. In the process, this 'project' reveals the complexity of interpreting and representing art as an object. My aim is to reflect in my own text the very intertextual and expressionist collage that Cardiff has created in Whispering Room. Cardiff solicits the viewer's intimacy and participation. Whispering Room is a physical yet metonymic space in which Cardiff creates a place for performativity, experience, memory, desire and speech; thus she opens up a space for the utterance and performance of the viewer. Viewers construct and create meaning/s for themselves within this mnemonic space by digging up their own memories, desires and reveries. The strength of Cardiff's work is that it relies on a viewer to perform, a body to trigger the pseudo-spectacle and a voice to interrupt the whispers. One might ask of Whispering Room where the illusionistic space begins and where the physical space ends. This 'project' investigates how in Whispering Room there is no one experience but many experiences.
Abstract:
Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally, absences comprise observations or are inferred from comprehensive sampling. When such information is not available, pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves the analysis highly susceptible to ‘naughty noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-stage approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. SDMs can then be fit separately within each envelope, and for this stage we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve the treatment of naughty noughts in SDM. We retain an overall measure of model performance, the area under the curve (AUC) of the Receiver Operating Characteristic (ROC) curve, but focus on its constituent measures, the false negative rate (FNR) and the false positive rate (FPR), and on how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas a zero or low FOR is desirable since it indicates that none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species previously published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0) and eliminating more and more potential naughty noughts (E1-E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence lying in the same range (0.00-0.20) for both true absences and presences.
In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet clearly visible in FOR, FNR, and the comparison of the distributions of predicted probability of presence for presences and absences.
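The four error rates discussed above follow directly from the confusion matrix of predicted versus observed presence/absence at a chosen threshold; a minimal sketch with illustrative counts:

```python
# Sketch of SDM error rates from a presence/absence confusion matrix.
# tp/fp/fn/tn counts below are illustrative.
def sdm_error_rates(tp, fp, fn, tn):
    return {
        "FNR": fn / (tp + fn),  # observed presences predicted absent
        "FPR": fp / (fp + tn),  # observed absences predicted present
        "FOR": fn / (fn + tn),  # predicted absences that were presences
        "FDR": fp / (tp + fp),  # predicted presences that were absences
    }

print(sdm_error_rates(tp=80, fp=300, fn=20, tn=9600))
```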