357 results for elliptic curve discrete logarithm problem
Abstract:
Reliable quantitative analysis of white matter connectivity in the brain is an open problem in neuroimaging, with common solutions requiring tools for fiber tracking, tractography segmentation and estimation of intersubject correspondence. This paper proposes a novel, template matching approach to the problem. In the proposed method, a deformable fiber-bundle model is aligned directly with the subject tensor field, skipping the fiber tracking step. Furthermore, the use of a common template eliminates the need for tractography segmentation and defines intersubject shape correspondence. The method is validated using phantom DTI data and applications are presented, including automatic fiber-bundle reconstruction and tract-based morphometry.
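To make the core idea concrete, here is a minimal Python sketch of the kind of objective such a template-matching approach optimizes: scoring how well a candidate fiber-bundle centerline aligns with the principal diffusion directions of a tensor field. The synthetic field, the curve, and the scoring rule are illustrative assumptions, not the paper's actual deformable-model formulation.

```python
# Illustrative only: score a candidate centerline against a synthetic
# tensor field by comparing curve tangents with principal eigenvectors.
import numpy as np

# 10x10x10 field of diffusion tensors, all aligned with the x-axis (synthetic)
grid = np.zeros((10, 10, 10, 3, 3))
grid[...] = np.diag([1.0, 0.2, 0.2])

# candidate centerline: a straight line along x through the volume
curve = np.stack([np.linspace(1, 8, 20),
                  np.full(20, 5.0),
                  np.full(20, 5.0)], axis=1)

tangents = np.gradient(curve, axis=0)
tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

score = 0.0
for p, t in zip(curve.astype(int), tangents):
    w, v = np.linalg.eigh(grid[tuple(p)])   # eigenvalues in ascending order
    score += abs(v[:, -1] @ t)              # alignment with principal direction
print("mean alignment:", round(score / len(curve), 3))  # ~1.0 for this field
```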
Abstract:
Index tracking is an investment approach where the primary objective is to keep portfolio return as close as possible to a target index without purchasing all index components. The main purpose is to minimize the tracking error between the returns of the selected portfolio and a benchmark. In this paper, quadratic as well as linear models are presented for minimizing the tracking error. Uncertainty in the input data is handled using a tractable robust framework that controls the level of conservatism while maintaining linearity. The linearity of the proposed robust optimization models allows a simple implementation in an ordinary optimization software package to find the optimal robust solution. The proposed model employs the Morgan Stanley Capital International Index as the target index, and results are reported for six national indices: Japan, the USA, the UK, Germany, Switzerland and France. The performance of the proposed models is evaluated using several financial criteria, e.g., the information ratio, market ratio, Sharpe ratio and Treynor ratio. The preliminary results demonstrate that the proposed model lowers the tracking error while raising the values of portfolio performance measures.
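As a concrete illustration of the baseline (non-robust) problem, the sketch below minimizes the in-sample squared tracking error of a long-only, fully invested portfolio against a benchmark using scipy. The return data are synthetic and the formulation is a generic quadratic tracking model, not the paper's robust variant.

```python
# Minimal non-robust sketch: long-only portfolio minimizing squared
# tracking error against a benchmark. All returns are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, n = 250, 6                                # days, candidate assets
R = rng.normal(0.0004, 0.01, (T, n))         # asset returns (synthetic)
r_idx = R @ np.full(n, 1 / n) + rng.normal(0, 0.001, T)  # benchmark proxy

def tracking_error(w):
    return np.mean((R @ w - r_idx) ** 2)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # fully invested
res = minimize(tracking_error, np.full(n, 1 / n),
               bounds=[(0.0, 1.0)] * n, constraints=cons)
print("weights:", res.x.round(3), " TE:", tracking_error(res.x))
```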
Abstract:
In the past few years, the virtual machine (VM) placement problem has been studied intensively and many algorithms for it have been proposed. However, those algorithms have not been widely used in today's cloud data centers because they do not consider the cost of migrating from the current VM placement to the new optimal placement. As a result, the gain from optimizing VM placement may be less than the loss incurred by the migration. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption and the total inter-VM traffic flow of the new placement. The GA has been implemented and evaluated experimentally, and the results show that it outperforms two well-known algorithms for the VM placement problem.
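The sketch below illustrates the penalty idea in toy form: a GA whose fitness adds a migration penalty (distance from the current placement) to stand-in energy and traffic terms. The cost terms, weights, and problem sizes are invented for illustration and are not the paper's formulation.

```python
# Toy penalty-based GA for VM placement; cost terms and weights are
# stand-ins, not the paper's formulation.
import random

N_VMS, N_HOSTS, POP = 20, 6, 50
current = [random.randrange(N_HOSTS) for _ in range(N_VMS)]  # placement in use

def cost(placement):
    energy = len(set(placement))                      # active hosts (proxy)
    traffic = sum(placement[i] != placement[i + 1]    # adjacent VMs split
                  for i in range(N_VMS - 1))          # across hosts (proxy)
    migration = sum(a != b for a, b in zip(placement, current))
    return energy + 0.5 * traffic + 2.0 * migration   # migration penalty term

def mutate(p):
    q = p[:]
    q[random.randrange(N_VMS)] = random.randrange(N_HOSTS)
    return q

def crossover(a, b):
    cut = random.randrange(1, N_VMS)
    return a[:cut] + b[cut:]

pop = [[random.randrange(N_HOSTS) for _ in range(N_VMS)] for _ in range(POP)]
for _ in range(200):
    pop.sort(key=cost)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - 10)]
print("best cost:", cost(min(pop, key=cost)))
```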
Abstract:
This chapter addresses opportunities for problem posing in developing young children's statistical literacy, with a focus on student-directed investigations. Although the notion of problem posing has broadened in recent years, there nevertheless remains limited research on how problem posing can be integrated within the regular mathematics curriculum, especially in the areas of statistics and probability. The chapter first briefly reviews aspects of problem posing that have featured in the literature over the years. Consideration is next given to the importance of developing children's statistical literacy, in which problem posing is an inherent feature. Some findings from a school playground investigation conducted in four fourth-grade classes illustrate the different ways in which children posed investigative questions, how they made predictions about their outcomes and compared these with their findings, and the ways in which they chose to represent their findings.
Abstract:
Engineering-based modeling activities provide a rich source of meaningful situations that capitalize on and extend students’ routine learning. By integrating such activities within existing curricula, students better appreciate how their school learning in mathematics and science applies to problems in the outside world...
Abstract:
Research on problem solving in the mathematics curriculum has spanned many decades, yielding pendulum-like swings in recommendations on various issues. Ongoing debates concern the effectiveness of teaching general strategies and heuristics, the role of mathematical content (as the means versus the learning goal of problem solving), the role of context, and the proper emphasis on the social and affective dimensions of problem solving (e.g., Lesh & Zawojewski, 2007; Lester, 2013; Lester & Kehle, 2003; Schoenfeld, 1985, 2008; Silver, 1985). Various scholarly perspectives—including cognitive and behavioral science, neuroscience, the discipline of mathematics, educational philosophy, and sociocultural stances—have informed these debates, often generating divergent resolutions. Perhaps due to this uncertainty, educators' efforts over the years to improve students' mathematical problem-solving skills have had disappointing results. Qualitative and quantitative studies consistently reveal mathematics students' struggles to solve problems that go beyond routine exercises (OECD, 2014; Boaler, 2009)...
What triggers problem recognition? An exploration on young Australian male problematic online gamers
Abstract:
Help-seeking is a complex decision-making process that begins with problem recognition. However, little is understood about the conceptualisation of the help-seeking process and the triggers of problem recognition. This research proposes the use of the Critical Incident Technique (CIT) to examine and classify incidents that serve as key triggers of problem recognition among young Australian male problematic online gamers. The research provides a classification of five different types of triggers that will aid social marketers in developing effective early detection, prevention and treatment-focused social marketing interventions.
Abstract:
Concept mapping involves determining relevant concepts from a free-text input, where concepts are defined in an external reference ontology. This is an important process that underpins many applications for clinical information reporting, derivation of phenotypic descriptions, and a number of state-of-the-art medical information retrieval methods. Concept mapping can be cast into an information retrieval (IR) problem: free-text mentions are treated as queries and concepts from a reference ontology as the documents to be indexed and retrieved. This paper presents an empirical investigation applying general-purpose IR techniques for concept mapping in the medical domain. A dataset used for evaluating medical information extraction is adapted to measure the effectiveness of the considered IR approaches. Standard IR approaches used here are contrasted with the effectiveness of two established benchmark methods specifically developed for medical concept mapping. The empirical findings show that the IR approaches are comparable with one benchmark method but well below the best benchmark.
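A minimal sketch of the IR casting described above: concept labels from a tiny, invented ontology are indexed as "documents" and a free-text mention is issued as the "query", here with character n-gram TF-IDF and cosine similarity as a generic stand-in for the IR approaches the paper evaluates.

```python
# Tiny invented ontology; concept labels are indexed as documents and
# the free-text mention is the query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

concepts = {
    "C0011849": "diabetes mellitus",
    "C0020538": "hypertensive disease",
    "C0027051": "myocardial infarction",
}
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
index = vec.fit_transform(concepts.values())

mention = "type 2 diabetes"                      # free-text mention to map
scores = cosine_similarity(vec.transform([mention]), index)[0]
for cid, s in sorted(zip(concepts, scores), key=lambda x: -x[1]):
    print(cid, concepts[cid], round(s, 3))       # top hit = proposed mapping
```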
Abstract:
Messenger RNAs (mRNAs) can be repressed and degraded by small non-coding RNA molecules. In this paper, we formulate a coarse-grained Markov-chain description of the post-transcriptional regulation of mRNAs by either small interfering RNAs (siRNAs) or microRNAs (miRNAs). We calculate the probability of an mRNA escaping from its domain before it is repressed by siRNAs/miRNAs via calculation of the mean time to threshold: when the number of bound siRNAs/miRNAs exceeds a certain threshold value, the mRNA is irreversibly repressed. In some cases, the analysis can be reduced to counting certain paths in a reduced Markov model. We obtain explicit expressions when the small RNAs bind irreversibly to the mRNA and we also discuss the reversible binding case. We apply our models to the study of RNA interference in the nucleus, examining the probability of mRNAs escaping via small nuclear pores before being degraded by siRNAs. Using the same modelling framework, we further investigate the effect of small, decoy RNAs (decoys) on the process of post-transcriptional regulation by studying regulation of the tumor suppressor gene PTEN: decoys are able to block binding sites on PTEN mRNAs, thereby reducing the number of sites available to siRNAs/miRNAs and helping to protect it from repression. We calculate the probability of a cytoplasmic PTEN mRNA translocating to the endoplasmic reticulum before being repressed by miRNAs. We support our results with stochastic simulations.
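The irreversible-binding race described above can be illustrated with a few lines of Monte Carlo: at each step the mRNA either escapes or gains one bound small RNA, with probabilities given by competing exponential rates. The rates, site count, and threshold below are assumed values, not the paper's.

```python
# Monte Carlo estimate of P(escape before repression), irreversible
# binding case. Rates, site count, and threshold are assumed values.
import random

K_BIND, K_ESCAPE = 1.0, 0.4   # per-site binding rate, escape rate (assumed)
N_SITES, THRESHOLD = 6, 3     # bound sRNAs needed for irreversible repression

def escapes():
    bound = 0
    while bound < THRESHOLD:
        r_bind = K_BIND * (N_SITES - bound)
        # memoryless race between escape and the next binding event
        if random.random() < K_ESCAPE / (K_ESCAPE + r_bind):
            return True
        bound += 1
    return False              # threshold reached: irreversibly repressed

trials = 100_000
print("P(escape) ~", sum(escapes() for _ in range(trials)) / trials)
```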
Abstract:
Background: The Palliative Care Problem Severity Score is a clinician-rated tool to assess problem severity in four palliative care domains (pain, other symptoms, psychological/spiritual, family/carer problems) using a 4-point categorical scale (absent, mild, moderate, severe). Aim: To test the reliability and acceptability of the Palliative Care Problem Severity Score. Design: Multi-centre, cross-sectional study involving pairs of clinicians independently rating problem severity using the tool. Setting/participants: Clinicians from 10 Australian palliative care services: 9 inpatient units and 1 mixed inpatient/community-based service. Results: A total of 102 clinicians participated, with almost 600 paired assessments completed for each domain, involving 420 patients. A total of 91% of paired assessments were undertaken within 2 h. Strength of agreement for three of the four domains was moderate: pain (Kappa = 0.42, 95% confidence interval = 0.36 to 0.49); psychological/spiritual (Kappa = 0.48, 95% confidence interval = 0.42 to 0.54); family/carer (Kappa = 0.45, 95% confidence interval = 0.40 to 0.52). Strength of agreement for the remaining domain (other symptoms) was fair (Kappa = 0.38, 95% confidence interval = 0.32 to 0.45). Conclusion: The Palliative Care Problem Severity Score is an acceptable measure, with moderate reliability across three domains. Variability in inter-rater reliability across sites and participant feedback indicate that ongoing education is required to ensure that clinicians understand the purpose of the tool and each of its domains. Raters familiar with the patient they were assessing found it easier to assign problem severity, but this did not improve inter-rater reliability.
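For readers unfamiliar with the agreement statistic reported above, the snippet below computes Cohen's kappa on a 4-point severity scale; the paired ratings are fabricated for illustration.

```python
# Cohen's kappa on a 4-point severity scale; paired ratings are fabricated.
from sklearn.metrics import cohen_kappa_score

rater_a = ["absent", "mild", "moderate", "severe", "mild", "moderate"]
rater_b = ["absent", "mild", "mild",     "severe", "mild", "severe"]
print("kappa =", round(cohen_kappa_score(rater_a, rater_b), 2))
```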
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length, given an individual's previous length measurement and time at liberty, are then derived. We moment-match the true conditional distributions with skew-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
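A minimal sketch of the final step described above: given assumed (not fitted) skew-normal parameters for the future-length distribution of one starting size class, a row of the STM is obtained as the probability mass falling in each size bin.

```python
# One STM row from an assumed skew-normal future-length distribution.
import numpy as np
from scipy.stats import skewnorm

bins = np.arange(10, 61, 10)        # size-class edges in mm (hypothetical)
a, loc, scale = 4.0, 22.0, 5.0      # skew-normal params for one start class

# probability of landing in each size class = mass between bin edges;
# the right-skewed shape keeps predicted growth non-negative
row = np.diff(skewnorm.cdf(bins, a, loc=loc, scale=scale))
row /= row.sum()                    # renormalize over the modelled range
print(np.round(row, 3))
```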
Abstract:
In this paper, the trajectory tracking control of an autonomous underwater vehicle (AUV) in six degrees of freedom (6-DOF) is addressed. It is assumed that the system parameters are unknown and the vehicle is underactuated. An adaptive controller is proposed, based on Lyapunov's direct method and the back-stepping technique, which guarantees robustness against parameter uncertainties. The desired trajectory can be any sufficiently smooth bounded curve parameterized by time, even one consisting of straight-line segments. In contrast with the majority of research in this field, the likelihood of actuator saturation is considered, and another adaptive controller is designed to overcome this problem, in which control signals are bounded using saturation functions. The nonlinear adaptive control scheme yields asymptotic convergence of the vehicle to the reference trajectory in the presence of parametric uncertainties. The stability of the presented control laws is proved in the sense of Lyapunov theory and Barbalat's lemma. The efficiency of the controller using saturation functions is verified by comparing numerical simulations of both controllers.
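The bounded-control idea can be illustrated in one dimension: wrapping a control law in a smooth saturation so the commanded signal never exceeds an actuator limit. The gains, limit, and toy error dynamics below are assumptions for illustration, not the paper's 6-DOF adaptive design.

```python
# 1-DOF toy: a proportional law wrapped in a smooth saturation so the
# command stays within an assumed actuator limit. Not the paper's design.
import numpy as np

U_MAX = 2.0                                   # actuator limit (assumed)

def sat(u):
    return U_MAX * np.tanh(u / U_MAX)         # smooth, |sat(u)| < U_MAX

e, dt = 1.5, 0.01                             # initial tracking error, step
for _ in range(2000):
    u = sat(-4.0 * e)                         # bounded proportional command
    e += (u - 0.3 * e) * dt                   # toy first-order error dynamics
print("final tracking error:", round(float(e), 4))
```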
Abstract:
Curves are a common feature of road infrastructure; however, crashes on road curves are associated with increased risk of injury and fatality to vehicle occupants. Countermeasures require the identification of contributing factors, but current approaches to identifying contributors use traditional statistical methods and have not used self-reported narrative claims to identify factors related to the driver, vehicle and environment in a systemic way. Text mining of 3434 road-curve crash claim records filed between 1 January 2003 and 31 December 2005 at a major insurer in Queensland, Australia, was undertaken to identify risk levels and contributing factors. Rough set analysis was applied to insurance claim narratives to identify significant contributing factors to crashes and their associated severity. New contributing factors unique to curve crashes were identified (e.g., tree, phone, over-steer) in addition to those previously identified via traditional statistical analysis of Police and licensing authority records. Text mining is a novel methodology for improving knowledge of risk and contributing factors in road-curve crash severity. Future road-curve crash countermeasures should more fully consider the interrelationships between the environment, the road, the driver and the vehicle, and education campaigns in particular could highlight the increased risk of crashes on road curves.
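As a toy illustration of mining claim narratives for candidate factors, the snippet below surfaces frequent terms from a few invented claim texts; this simple term-count pass is a stand-in for the rough set analysis used in the study.

```python
# Frequent-term stand-in for rough set analysis; claim texts are invented.
from sklearn.feature_extraction.text import CountVectorizer

claims = [
    "hit tree after over-steer on wet curve",
    "driver on phone drifted on curve and hit tree",
    "over-steer on sharp curve at night",
]
vec = CountVectorizer(ngram_range=(1, 2), stop_words="english")
counts = vec.fit_transform(claims).sum(axis=0).A1
top = sorted(zip(vec.get_feature_names_out(), counts), key=lambda x: -x[1])
print(top[:5])   # frequent terms flag candidate factors (curve, tree, steer, ...)
```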
Abstract:
This study investigated within-person relationships between daily problem solving demands, selection, optimization, and compensation (SOC) strategy use, job satisfaction, and fatigue at work. Based on conservation of resources theory, it was hypothesized that high SOC strategy use boosts the positive relationship between problem solving demands and job satisfaction, and buffers the positive relationship between problem solving demands and fatigue. Using a daily diary study design, data were collected from 64 administrative employees who completed a general questionnaire and two daily online questionnaires over four work days. Multilevel analyses showed that problem solving demands were positively related to fatigue, but unrelated to job satisfaction. SOC strategy use was positively related to job satisfaction, but unrelated to fatigue. A buffering effect of high SOC strategy use on the demands-fatigue relationship was found, but no booster effect on the demands-satisfaction relationship. The results suggest that high SOC strategy use is a resource that protects employees from the negative effects of high problem solving demands.
Abstract:
I agree with Costanza and Finkelstein (2015) that it is futile to further invest in the study of generational differences in the work context due to a lack of appropriate theory and methods. The key problem with the generations concept is that splitting continuous variables such as age or time into a few discrete units involves arbitrary cutoffs and atheoretical groupings of individuals (e.g., stating that all people born between the early 1960s and early 1980s belong to Generation X). As noted by methodologists, this procedure leads to a loss of information about individuals and reduced statistical power (MacCallum, Zhang, Preacher, & Rucker, 2002). Due to these conceptual and methodological limitations, I regard it as very difficult if not impossible to develop a “comprehensive theory of generations” (Costanza & Finkelstein, p. 20) and to rigorously examine generational differences at work in empirical studies.