563 results for Iterative methods (Mathematics)
Abstract:
We develop a fast Poisson preconditioner for the efficient numerical solution of a class of two-sided nonlinear space fractional diffusion equations in one and two dimensions using the method of lines. Using the shifted Grünwald finite difference formulas to approximate the two-sided (i.e. the left and right Riemann-Liouville) fractional derivatives, the resulting semi-discrete nonlinear systems have dense Jacobian matrices owing to the non-local property of fractional derivatives. We employ a modern initial value problem solver utilising backward differentiation formulas and Jacobian-free Newton-Krylov methods to solve these systems. For efficient performance of the Jacobian-free Newton-Krylov method, it is essential to apply an effective preconditioner to accelerate the convergence of the linear iterative solver. The key contribution of our work is to generalise the fast Poisson preconditioner, widely used for integer-order diffusion equations, so that it applies to the two-sided space fractional diffusion equation. A number of numerical experiments are presented to demonstrate the effectiveness of the preconditioner and the overall solution strategy.
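As a rough illustration of the discretisation described above, the following Python sketch builds the shifted Grünwald weights and a dense matrix approximating the left Riemann-Liouville derivative on a uniform grid; the grid setup and names are illustrative assumptions, not the authors' code, and boundary handling is omitted (the right-sided derivative is treated analogously).

```python
import numpy as np

def grunwald_weights(alpha, n):
    """Shifted Grunwald weights g_k = (-1)^k * C(alpha, k), computed with
    the standard recurrence g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

def left_grunwald_matrix(alpha, n, h):
    """Dense matrix A such that A @ u approximates the left Riemann-Liouville
    derivative of order alpha (1 < alpha < 2) on a uniform grid of spacing h,
    via the shift-one Grunwald formula; boundary terms are ignored."""
    g = grunwald_weights(alpha, n)
    A = np.zeros((n, n))
    for i in range(n):
        for k in range(i + 2):          # shifted formula uses u at j = i - k + 1
            j = i - k + 1
            if 0 <= j < n:
                A[i, j] += g[k]
    return A / h**alpha
```

The fully populated lower triangle of the resulting matrix shows why the semi-discrete Jacobians are dense, which is what motivates the preconditioning strategy.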
Abstract:
The method of lines is a standard method for advancing the solution of partial differential equations (PDEs) in time. In one sense, the method applies equally well to space-fractional PDEs as it does to integer-order PDEs. However, there is a significant challenge when solving space-fractional PDEs in this way, owing to the non-local nature of the fractional derivatives. Each equation in the resulting semi-discrete system involves contributions from every spatial node in the domain. This has important consequences for the efficiency of the numerical solver, especially when the system is large. First, the Jacobian matrix of the system is dense, and hence methods that avoid the need to form and factorise this matrix are preferred. Second, since the cost of evaluating the discrete equations is high, it is essential to minimise the number of evaluations required to advance the solution in time. In this paper, we show how an effective preconditioner is essential for improving the efficiency of the method of lines for solving a quite general two-sided, nonlinear space-fractional diffusion equation. A key contribution is to show how to construct suitable banded approximations to the system Jacobian for preconditioning purposes that permit high orders and large stepsizes to be used in the temporal integration, without requiring dense matrices to be formed. The results of numerical experiments are presented that demonstrate the effectiveness of this approach.
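To make the banded-preconditioning idea concrete, here is a minimal sketch, assuming SciPy and a generic dense Jacobian, of extracting a banded approximation and wrapping its LU factorisation as a preconditioner for a Krylov solver; the bandwidth and the use of GMRES are illustrative choices, not the paper's exact construction.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu, LinearOperator, gmres

def banded_preconditioner(J, bandwidth):
    """Keep only the central band of a dense Jacobian J, factorise it once,
    and expose the solve as a LinearOperator usable as M in a Krylov method."""
    n = J.shape[0]
    offsets = list(range(-bandwidth, bandwidth + 1))
    bands = [np.diag(J, k) for k in offsets]       # extract each diagonal
    Jb = diags(bands, offsets, shape=(n, n), format="csc")
    lu = splu(Jb)                                  # banded LU, cheap to apply
    return LinearOperator((n, n), matvec=lu.solve)

# Example usage with a generic dense system J x = b:
# x, info = gmres(J, b, M=banded_preconditioner(J, bandwidth=5))
```

Because the band is narrow, the factorisation and each preconditioner application stay cheap even when the underlying Jacobian is dense.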
Abstract:
Background: Predicting protein subnuclear localization is a challenging problem. Previous approaches based on non-sequence information, including Gene Ontology annotations and kernel fusion, have their respective limitations. The aim of this work is twofold: one is to propose a novel individual feature extraction method; the other is to develop an ensemble method to improve prediction performance using comprehensive information represented as a high-dimensional feature vector obtained by 11 feature extraction methods. Methodology/Principal Findings: A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It only considers those feature extraction methods based on amino acid classifications and physicochemical properties. To speed up our system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: the Lei dataset, a multi-localization dataset, the SNL9 dataset and a new independent dataset. The overall accuracy of prediction in leave-one-out cross validation is 75.2% for 6 localizations on the Lei dataset and 72.1% for 9 localizations on the SNL9 dataset, and is 71.7% for the multi-localization dataset and 69.8% for the new independent dataset, respectively. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. Conclusions: It can be concluded that our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method. It is freely available at http://bioinformatics.awowshop.com/snlpred_page.php.
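Since the abstract does not spell out the two-stage design, the following scikit-learn sketch shows one plausible reading, with an RBF-kernel SVM as the first stage and a second SVM trained on the stage-one class-membership probabilities; the synthetic data, feature dimensions and staging are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 120))            # hypothetical protein feature vectors
y = rng.integers(0, 6, size=60)      # hypothetical compartment labels

# Stage 1: RBF-kernel SVM on the raw feature vector.
stage1 = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
stage1.fit(X, y)

# Stage 2: a second SVM trained on stage-1 class-membership probabilities
# (one plausible reading of "two-stage"; the paper's design may differ).
Z = stage1.predict_proba(X)
stage2 = SVC(kernel="linear").fit(Z, y)

# Leave-one-out accuracy of stage 1, as used in the abstract's evaluation.
loo_acc = cross_val_score(stage1, X, y, cv=LeaveOneOut()).mean()
```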
Abstract:
Compression ignition (CI) engine design is subject to many constraints, which presents a multi-criteria optimisation problem that the engine researcher must solve. In particular, the modern CI engine must not only be efficient, but must also deliver low gaseous, particulate and life cycle greenhouse gas emissions so that its impact on urban air quality, human health and global warming is minimised. Consequently, this study undertakes a multi-criteria analysis which seeks to identify alternative fuels, injection technologies and combustion strategies that could potentially satisfy these CI engine design constraints. Three datasets are analysed with the Preference Ranking Organization Method for Enrichment Evaluations and Geometrical Analysis for Interactive Aid (PROMETHEE-GAIA) algorithm to explore the impact of (1) an ethanol fumigation system, (2) alternative fuels (20% biodiesel and synthetic diesel) and alternative injection technologies (mechanical direct injection and common rail injection), and (3) various biodiesel fuels made from three feedstocks (i.e. soy, tallow, and canola) tested at several blend percentages (20-100%), on the resulting emissions and efficiency profile of the various test engines. The results show that moderate ethanol substitutions (~20% by energy) at moderate load, high percentage soy blends (60-100%), and alternative fuels (biodiesel and synthetic diesel) provide an efficiency and emissions profile that yields the most “preferred” solutions to this multi-criteria engine design problem. Further research is, however, required to reduce Reactive Oxygen Species (ROS) emissions with alternative fuels, and to deliver technologies that do not significantly reduce the median diameter of particle emissions.
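For readers unfamiliar with the ranking step, the sketch below implements a minimal PROMETHEE II net-flow computation with the "usual" preference function; the example criteria, weights and scores are invented for illustration, and the full PROMETHEE-GAIA method adds richer preference functions plus the GAIA visualisation plane.

```python
import numpy as np

def promethee2_net_flows(M, weights, maximize):
    """Minimal PROMETHEE II with the 'usual' preference function
    P(d) = 1 if d > 0 else 0. M has one row per alternative and one
    column per criterion; returns net flows (higher = more preferred)."""
    n = M.shape[0]
    sign = np.where(maximize, 1.0, -1.0)       # flip criteria to be minimised
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = sign * (M[a] - M[b])
            pi_ab = np.sum(weights * (d > 0))  # aggregated preference of a over b
            phi[a] += pi_ab / (n - 1)          # contributes to a's positive flow
            phi[b] -= pi_ab / (n - 1)          # and to b's negative flow
    return phi

# Invented example: engines scored on efficiency (max), NOx (min), PM (min).
M = np.array([[0.42, 3.1, 0.05],
              [0.40, 2.5, 0.03],
              [0.38, 2.0, 0.02]])
phi = promethee2_net_flows(M, weights=np.array([0.4, 0.3, 0.3]),
                           maximize=np.array([True, False, False]))
```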
Abstract:
This paper describes an approach to investigating the adoption of Web 2.0 in the classroom using a mixed methods study. By using a combination of qualitative and quantitative data collection and analysis techniques, we attempt to synergise the results and provide a more valid understanding of Web 2.0 adoption for learning by both teachers and students. This approach is expected to yield a more holistic view of the adoption issues associated with the e-learning 2.0 concept in current higher education than the single-method studies done previously. This paper also presents some early findings on e-learning 2.0 adoption using this research method.
Abstract:
Purpose. The purpose of this article was to present methods capable of estimating the size and shape of the human eye lens without resorting to phakometry or magnetic resonance imaging (MRI). Methods. Previously published biometry and phakometry data of 66 emmetropic eyes of 66 subjects (age range [18, 63] years, spherical equivalent range [−0.75, +0.75] D) were used to define multiple linear regressions for the radii of curvature and thickness of the lens, from which the lens refractive index could be derived. MRI biometry was also available for a subset of 30 subjects, from which regressions could be determined for the vertex radii of curvature, conic constants, equatorial diameter, volume, and surface area. All regressions were compared with the phakometry and MRI data; the radii of curvature regressions were also compared with a method proposed by Bennett and Royston et al. Results. The regressions were in good agreement with the original measurements. This was especially the case for the regressions of lens thickness, volume, and surface area, each of which had an R² > 0.6. The regression for the posterior radius of curvature had an R² < 0.2, making it unreliable. For all other regressions we found 0.25 < R² < 0.6. The Bennett-Royston method also produced a good estimation of the radii of curvature, provided its parameters were adjusted appropriately. Conclusions. The regressions presented in this article offer a valuable alternative when no measured lens biometry values are available; however, care must be taken regarding possible outliers.
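As a toy illustration of the regression approach, the following sketch fits a multiple linear regression for lens thickness and reports R²; the predictors, coefficients and synthetic data are hypothetical and do not reproduce the paper's actual regressions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical illustration only: predict lens thickness (mm) from age (years)
# and anterior chamber depth (mm). Coefficients and noise levels are invented;
# the paper's regressions are built from its own measured biometry.
rng = np.random.default_rng(1)
age = rng.uniform(18, 63, 66)                 # 66 subjects, as in the study
acd = rng.normal(3.1, 0.3, 66)
thickness = 2.9 + 0.024 * age - 0.3 * acd + rng.normal(0, 0.1, 66)

X = np.column_stack([age, acd])
model = LinearRegression().fit(X, thickness)
r2 = model.score(X, thickness)    # the abstract treats R^2 > 0.6 as reliable
```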
Abstract:
Metrics such as passengers per square metre have been developed to define optimum or crowded rail passenger density. Whilst such metrics are important for operational procedures, service evaluation and reporting, they fail to fully capture and convey the ways in which passengers experience crowded situations. This paper reports findings from a two-year study of rail passenger crowding in five Australian capital cities, which involved a novel mixed methodology including ethnography, focus groups and an online stated preference choice experiment. The resulting data address the following four fundamental research questions: 1) to what extent are Australian rail passengers concerned by crowding, 2) what conditions exacerbate feelings of crowdedness, 3) what conditions mitigate feelings of crowdedness, and 4) how can we usefully understand passengers’ experiences of crowdedness? The paper concludes with some observations on the significance and implications of these findings for customer service provision. The findings demonstrate that the experience of crowdedness (including its tolerance) cannot be understood in isolation from other customer service issues such as interior design, quality of environment, safety and public health concerns. It is hypothesised that tolerance of crowding will increase alongside improvements to overall customer service. This was the first comprehensive study of crowding in the Australian rail industry.
Abstract:
Extracting and aggregating the relevant event records relating to an identified security incident from the multitude of heterogeneous logs in an enterprise network is a difficult challenge, and presenting the information in a meaningful way is an additional one. This paper looks at solutions to this problem by first identifying three main transforms: log collection, correlation, and visual transformation. Having identified that the CEE project will address the first transform, this paper focuses on the second, while the third is left for future work. To aggregate by correlating event records, we demonstrate the use of two correlation methods, simple and composite. These make use of a defined mapping schema and confidence values to dynamically query the normalised dataset and to constrain result events to within a time window. Doing so improves the quality of results, which is required for the iterative re-querying process being undertaken. The final results of the process are output as nodes and edges suitable for presentation as a network graph.
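Below is a minimal sketch of the "simple" correlation transform, assuming toy normalised events and a single shared-field rule with a time-window constraint; the field names and pairing rule are illustrative, not the paper's mapping schema.

```python
from datetime import datetime, timedelta
from itertools import combinations

# Hypothetical normalised event records; in practice these would be queried
# from a CEE-style normalised log store via a mapping schema.
events = [
    {"id": 1, "time": datetime(2012, 5, 1, 9, 0, 4), "src": "10.0.0.5"},
    {"id": 2, "time": datetime(2012, 5, 1, 9, 0, 7), "src": "10.0.0.5"},
    {"id": 3, "time": datetime(2012, 5, 1, 9, 5, 0), "src": "10.0.0.7"},
]

def simple_correlate(events, field, window):
    """'Simple' correlation: relate events sharing a field value within a
    time window, emitting graph nodes and edges for later visualisation."""
    nodes, edges = set(), []
    for a, b in combinations(events, 2):
        if a[field] == b[field] and abs(a["time"] - b["time"]) <= window:
            nodes.update([a["id"], b["id"]])
            edges.append((a["id"], b["id"], field))
    return nodes, edges

nodes, edges = simple_correlate(events, "src", timedelta(minutes=2))
# -> nodes {1, 2} and edge (1, 2, "src"); event 3 has a different src value
```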
Abstract:
This study uses borehole geophysical log data of sonic velocity and electrical resistivity to estimate permeability in sandstones in the northern Galilee Basin, Queensland. The prior estimates of permeability are calculated according to deterministic log–log linear empirical correlations between electrical resistivity and measured permeability; both the negative and positive relationships are influenced by the clay content. The prior estimates of permeability are updated in a Bayesian framework for three boreholes using both the cokriging (CK) method and a normal linear regression (NLR) approach to infer the likelihood function. The results show that the mean permeability estimated from the CK-based Bayesian method is in better agreement with the measured permeability when a fairly apparent linear relationship exists between the logarithm of permeability and sonic velocity. In contrast, the NLR-based Bayesian approach gives better estimates of permeability for boreholes where no linear relationship exists between the logarithm of permeability and sonic velocity.
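One way to picture the NLR-based update is as a conjugate-Gaussian Bayesian step on log-permeability, sketched below; the regression coefficients and variances are invented, and the paper's actual likelihood inference may differ.

```python
def bayes_update_logk(prior_mean, prior_var, slope, intercept, sigma2, v):
    """Conjugate-Gaussian update of log-permeability. The prior comes from a
    resistivity-based estimate; the likelihood is a normal linear regression
    of log-permeability on sonic velocity v: log k ~ N(slope*v + intercept, sigma2)."""
    like_mean = slope * v + intercept
    post_var = 1.0 / (1.0 / prior_var + 1.0 / sigma2)            # precisions add
    post_mean = post_var * (prior_mean / prior_var + like_mean / sigma2)
    return post_mean, post_var

# Invented numbers: prior log10(k) = -1.2 with variance 0.25 from resistivity,
# updated using a sonic velocity of 3.4 km/s.
post_mean, post_var = bayes_update_logk(-1.2, 0.25, slope=-0.9, intercept=2.0,
                                        sigma2=0.16, v=3.4)
```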
Abstract:
An increasing body of research is highlighting the involvement of illicit drugs in many road fatalities. Deterrence theory has been a core conceptual framework underpinning traffic enforcement as well as interventions designed to reduce road fatalities. Essentially, the effectiveness of deterrence-based approaches is predicated on perceptions of the certainty, severity, and swiftness of apprehension. However, much less is known about how awareness of legal sanctions affects the effectiveness of deterrence mechanisms and whether promoting such detection methods can increase the deterrent effect. Nevertheless, the implicit assumption is that individuals aware of the legal sanctions will be more deterred. This study explores how awareness of the testing method affects the effectiveness of deterrence-based interventions and intentions to drug drive again in the future. In total, 161 participants who reported drug driving in the previous six months took part in the study. The results show that awareness of testing had a small effect on increasing perceptions of the certainty of apprehension and the severity of punishment. However, awareness was not a significant predictor of intentions to drug drive again in the future. Importantly, a higher level of drug use was a significant predictor of intentions to drug drive in the future. Whilst awareness does have a small effect on deterrence variables, the influence of level of drug use seems to reduce any deterrent effect.
Abstract:
Qualitative research methods are widely accepted in Information Systems, and multiple approaches have been successfully used in IS qualitative studies over the years. These approaches include narrative analysis, discourse analysis, grounded theory, case study, ethnography and phenomenological analysis. Guided by critical, interpretive and positivist epistemologies (Myers 1997), qualitative methods are continuously growing in importance in our research community. In this special issue, we adopt Van Maanen's (1979: 520) definition of qualitative research as an umbrella term covering an “array of interpretive techniques that can describe, decode, translate, and otherwise come to terms with the meaning, not the frequency, of certain more or less naturally occurring phenomena in the social world”. In the call for papers, we stated that the aim of the special issue was to provide a forum within which we can present and debate the significant number of issues, results and questions arising from the pluralistic approach to qualitative research in Information Systems. We recognise both the potential and the challenges that qualitative approaches offer for accessing the different layers and dimensions of a complex and constructed social reality (Orlikowski, 1993). The special issue is also a response to the need to showcase the current state of the art in IS qualitative research and to highlight advances and issues encountered in the process of continuous learning, including questions about its ontology, epistemological tenets, theoretical contributions and practical applications.
Abstract:
Debate about the relationship between business planning and performance has been active for decades (Bhidé, 2000; Mintzberg, 1994). While results have been inconclusive, this topic still strongly divides the research community (Brinckmann et al., 2010; Chwolka & Raith, 2011; Delmar & Shane, 2004; Frese, 2009; Gruber, 2007; Honig & Karlsson, 2004). Previous research explored the relationships between innovation and the venture creation process (Amason et al., 2006; Dewar & Dutton, 1986; Jennings et al., 2009). However, the relationship between business planning and innovation has mostly been invoked indirectly in the strategy and entrepreneurship literatures through the notion of uncertainty surrounding the development of innovation. Some have posited that planning may be irrelevant due to the iterative process, the numerous changes innovation development entails and the need to be flexible (Brews & Hunt, 1999). Others have suggested that planning may facilitate the achievement of goals and the overcoming of obstacles (Locke & Latham, 2000), guide the venture in its allocation of resources (Delmar & Shane, 2003) and help foster communication about the innovation being developed (Liao & Welsh, 2008). However, the nature and extent of the relationships between business planning, innovation and performance are still largely unknown. Moreover, while the reasons why ventures should (Frese, 2009) or should not (Honig, 2004) engage in business planning have been investigated quite extensively (Brinckmann et al., 2010), the specific value of business planning for nascent firms developing innovation is still unclear. The objective of this paper is to shed some light on these important aspects by investigating the two following questions on a large random sample of nascent firms: 1) how is business planning used over time by new ventures developing different types and degrees of innovation? 2) how do business planning and innovation impact the performance of nascent firms?
Methods & Key Propositions: This PSED-type study draws its data from the first three waves of the CAUSEE project, in which 30,105 Australian households were randomly contacted by phone using a methodology designed to capture emerging firms (Davidsson, Steffens, Gordon, & Reynolds, 2008). This screening led to the identification of 594 nascent ventures (i.e., firms that were not yet operating at the time of identification) that were willing to participate in the study. Comprehensive phone interviews were conducted with these 594 ventures. Likewise, two comprehensive follow-ups were organised 12 and 24 months later, in which 80% of the eligible cases from the previous wave completed the interview. The questionnaire contains specific sections investigating business plans, such as the presence or absence, degree of formality and updating of the plan. Four types of innovation are measured along three degrees of intensity to produce a comprehensive continuous measure ranging from 0 to 12 (Dahlqvist & Wiklund, 2011). Other sections, informing on gestation activities, industry and different types of experience, will be used as controls when measuring the relationships and impacts of business planning and innovation on the performance of nascent firms over time. Results from two rounds of pre-testing informed the design of the instrument included in the main survey.
The three waves of data are used first to test and compare the use of planning amongst nascent firms by their degree of innovation, and then to examine its impact on performance over time through regression analyses.
Results and Implications: Three waves of data collection have been completed. Preliminary results show that, on average, innovative firms are more likely to have a business plan than their less innovative counterparts. They are also more likely to update their plan, suggesting a more continuous use of the plan over time than previously thought. Further analyses of the relationships between business planning, innovation and performance are under way. This paper is expected to contribute to the literature on business planning and innovation by quantitatively measuring their impact on nascent firms' activities and performance at different stages of their development. In addition, this study will shed new light on the business planning-performance relationship by disentangling plans, types of nascent firms according to their degree of innovation, and their performance over time. Finally, we expect to increase the understanding of the venture creation process by analysing these questions on nascent firms from a large longitudinal sample of randomly selected ventures. We acknowledge that the results from this study will be preliminary and will have to be interpreted with caution, as the business planning-performance relationship is not straightforward (Brinckmann et al., 2010). Meanwhile, we believe that this study is important to the field of entrepreneurship as it provides some much needed insights into the processes used by nascent firms during their creation and early operating stages.
Abstract:
Curriculum documents for mathematics emphasise the importance of promoting depth of knowledge rather than shallow coverage of the curriculum. In this paper, we report on a study that explored the analysis of junior secondary mathematics textbooks to assess their potential to assist in teaching and learning aimed at building and applying deep mathematical knowledge. The method of analysis involved the establishment of a set of specific curriculum goals and associated indicators, based on research into the teaching and learning of a particular field within the mathematics curriculum, namely proportion and proportional reasoning. This topic was selected because of its pervasive nature throughout the school mathematics curriculum at this level. The study found that the five textbook series examined provided limited support for the development of the multiplicative structures required for proportional reasoning, and hence would not serve well the development of deep learning of mathematics. The study demonstrated a method that could be applied to the analysis of junior secondary mathematics textbooks in many parts of the world.