115 results for Mathematical computing
Abstract:
Straightforward mathematical techniques are used innovatively to form a coherent theoretical system for dealing with chemical equilibrium problems. A systematic theory requires a framework that connects different concepts. This paper demonstrates the usefulness and consistency of the system through applications of the theorems introduced previously. Some theorems are shown, somewhat unexpectedly, to be mathematically correlated, and relationships among them are obtained in a coherent manner. Theorem 1 is shown to play an important part in interconnecting most of the other theorems. The usefulness of theorem 2 is illustrated by proving it to be consistent with theorem 3. A set of uniform mathematical expressions is associated with theorem 3. A variety of mathematical techniques based on theorems 1–3 are shown to establish the direction of equilibrium shift. The equilibrium properties expressed in initial and equilibrium conditions are shown to be connected via theorem 5, and theorem 6 is connected with theorem 4 through the mathematical representation of theorem 1.
Abstract:
Purpose: This paper aims to design an evaluation method that enables an organization to assess its current IT landscape and to gauge its readiness prior to Software as a Service (SaaS) adoption. Design/methodology/approach: The research employs a mix of quantitative and qualitative approaches to conduct an IT application assessment. Quantitative data, such as end users' feedback on the IT applications, contribute to the technical impact on efficiency and productivity. Qualitative data, such as business domain, business services, and IT application cost drivers, are used to determine the business value of the IT applications in an organization. Findings: The assessment of IT applications leads to decisions on the suitability of each IT application for migration to a cloud environment. Research limitations/implications: The evaluation of how a particular IT application impacts a business service is based on logical interpretation. A data mining method is suggested for deriving patterns of IT application capabilities. Practical implications: This method has been applied in a local council in the UK, helping the council to decide the future status of its IT applications for cost-saving purposes.
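To make the decision step concrete, here is a minimal sketch of such a suitability scoring scheme; the weights, field names, and thresholds are our own illustrative assumptions, not the instrument used in the paper.

```python
from dataclasses import dataclass

@dataclass
class AppAssessment:
    name: str
    efficiency: float      # end-user feedback on efficiency, scaled to 0..1
    productivity: float    # end-user feedback on productivity, scaled to 0..1
    business_value: float  # analyst-assigned business value, 0..1
    annual_cost: float     # cost driver, in currency units

def saas_suitability(app: AppAssessment, cost_threshold: float = 50_000) -> str:
    """Hypothetical combination of technical impact and business value."""
    technical = 0.5 * (app.efficiency + app.productivity)
    # Costly applications with adequate technical scores and modest business
    # criticality make the easiest candidates for migration to the cloud.
    if app.annual_cost >= cost_threshold and technical >= 0.5 and app.business_value < 0.7:
        return "migrate to SaaS"
    return "retain on-premise" if app.business_value >= 0.7 else "review further"

print(saas_suitability(AppAssessment("HR portal", 0.8, 0.7, 0.3, 80_000)))
```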
Abstract:
Explanations of the marked individual differences in elementary school mathematical achievement and mathematical learning disability (MLD, or dyscalculia) have involved domain-general factors (working memory, reasoning, processing speed, and oral language) and numerical factors that include single-digit processing efficiency and multi-digit skills such as number system knowledge and estimation. This study of third graders (N = 258) finds that both domain-general and numerical factors contribute independently to explaining variation in three significant arithmetic skills: basic calculation fluency, written multi-digit computation, and arithmetic word problems. Estimation accuracy and number system knowledge show the strongest associations with every skill, and their contributions are independent both of each other and of the other factors. Different domain-general factors independently account for variation in each skill. Numeral comparison, a single-digit processing skill, uniquely accounts for variation in basic calculation. Subsamples of children with MLD (at or below the 10th percentile, n = 29) are compared with low achievement (LA, 11th to 25th percentiles, n = 42) and typical achievement (above the 25th percentile, n = 187). Examination of these groups, and of subsets with persistent difficulties, supports a multiple-deficits view of number difficulties: most children with number difficulties exhibit deficits in both domain-general and numerical factors. The only factor deficit common to all children with persistent MLD is in multi-digit skills. These findings indicate that many factors matter, but multi-digit skills matter most, in third grade mathematical achievement.
Abstract:
In this paper we propose and analyze a simple mathematical model consisting of four variables: nutrient concentration, toxin-producing phytoplankton (TPP), non-toxic phytoplankton (NTP), and toxin concentration. Limitation in the concentration of the extracellular nutrient is incorporated as an environmental stress condition for the plankton population, and the liberation of toxic chemicals is described by a monotonic function of the extracellular nutrient. The model is analyzed and simulated to reproduce the experimental findings of Graneli and Johansson [Graneli, E., Johansson, N., 2003. Increase in the production of allelopathic substances by Prymnesium parvum cells grown under N- or P-deficient conditions. Harmful Algae 2, 135–145]. The robustness of the numerical experiments is tested by a formal parameter sensitivity analysis. As the first theoretical model consistent with the experiments of Graneli and Johansson (2003), ours demonstrates that, when nutrient-deficient conditions favor the release of toxic chemicals by the TPP population, the TPP species controls the bloom of the other, non-toxic phytoplankton species. Consistent with the observations of Graneli and Johansson (2003), our model overcomes a limitation of several other models of plankton dynamics, which do not incorporate the effect of nutrient-limited toxin production.
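The abstract names the four state variables but not the functional forms, so the following is only an illustrative sketch in the same spirit: Michaelis–Menten nutrient uptake, an allelopathic kill term on the non-toxic group, and toxin liberation that increases as nutrient falls. All response functions and parameter values are assumptions, not those of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, r1=0.6, r2=0.8, k=0.5, m=0.1, g=0.4, d=0.2):
    N, P_t, P_n, T = y  # nutrient, toxic phytoplankton, non-toxic, toxin
    uptake_t = r1 * N / (k + N) * P_t
    uptake_n = r2 * N / (k + N) * P_n
    # Toxin release is a decreasing function of nutrient: high under deficiency.
    release = g * P_t / (1.0 + N)
    dN  = 0.3 - uptake_t - uptake_n        # constant nutrient input, assumed
    dPt = uptake_t - m * P_t
    dPn = uptake_n - m * P_n - d * T * P_n  # allelopathic kill term
    dT  = release - 0.5 * T
    return [dN, dPt, dPn, dT]

sol = solve_ivp(model, (0, 200), [1.0, 0.1, 0.1, 0.0])
print(sol.y[:, -1])  # long-run state of the four variables
```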
Abstract:
We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations. The modified algorithm runs more than 50 times faster on the Cell's Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times compared to the original code on the main CPU. Because the radiation code takes more than 60% of the total CPU time, FAMOUS as a whole executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
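The scheduling strategy is independent of the original Fortran/C setting, so a compact sketch is possible. The following Python illustration (the toy kernel and all names are our assumptions, not FAMOUS code) shows the two key ideas: independent air columns drawn from a shared work list by a pool of worker threads, and columns packed four at a time so the kernel processes a block in one call, mirroring the 4-wide SIMD packing.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

NLEV, SIMD_WIDTH = 38, 4  # vertical levels; columns packed per kernel call

def radiation_kernel(cols: np.ndarray) -> np.ndarray:
    """Toy stand-in for the per-column radiative transfer computation.
    Operates on a (NLEV, SIMD_WIDTH) block, i.e. four columns at once."""
    return np.cumsum(np.exp(-cols), axis=0)  # placeholder physics

def run(columns: np.ndarray, workers: int = 8) -> np.ndarray:
    # Pack columns into blocks of four; the pool pulls blocks off the queue.
    blocks = [columns[:, i:i + SIMD_WIDTH]
              for i in range(0, columns.shape[1], SIMD_WIDTH)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(radiation_kernel, blocks))
    return np.hstack(results)

out = run(np.random.rand(NLEV, 1024))
print(out.shape)  # (38, 1024)
```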
The impact of office productivity cloud computing on energy consumption and greenhouse gas emissions
Abstract:
Cloud computing is usually regarded as energy efficient, and thus as emitting less greenhouse gas (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud-based Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, however, some cloud services were found to consume more energy than their traditional counterparts. The model developed in this research takes into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the network and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be measured directly at the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts: the power consumption of cloud-based Outlook and Excel was 8% and 17% lower, respectively, than that of the traditional versions. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward for reducing energy consumption and GHG emissions. Conversions of standalone packages to cloud provision can now take energy and GHG emissions into account at the software development and cloud service design stages using the methods described in this research.
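The three-stage accounting reduces to a simple sum per activity. The figures and emission factor below are placeholders (the study's data center inputs are confidential), shown only to make the bookkeeping explicit.

```python
# Hypothetical per-activity energy figures in watt-hours; not measured values.
stages_wh = {"data_center": 0.9, "network": 1.4, "user_device": 3.2}
EMISSION_FACTOR = 0.45  # kg CO2e per kWh, an assumed grid average

total_kwh = sum(stages_wh.values()) / 1000.0
ghg_kg = total_kwh * EMISSION_FACTOR
print(f"{total_kwh * 1000:.1f} Wh -> {ghg_kg * 1000:.2f} g CO2e per activity")
```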
Abstract:
A mathematical model incorporating many of the important processes at work in the crystallization of emulsions is presented. The model describes nucleation within the discontinuous domain of an emulsion, precipitation in the continuous domain, transport of monomers between the two domains, and formation and subsequent growth of crystals in both domains. The model is formulated as an autonomous system of nonlinear, coupled ordinary differential equations. The description of nucleation and precipitation is based upon the Becker–Döring equations of classical nucleation theory. A particular feature of the model is that the number of particles of all species present is explicitly conserved; this differs from work that employs Arrhenius descriptions of nucleation rate. Since the model includes many physical effects, it is analyzed in stages so that the role of each process may be understood. When precipitation occurs in the continuous domain, the concentration of monomers falls below the equilibrium concentration at the surface of the drops of the discontinuous domain. This leads to a transport of monomers from the drops into the continuous domain that are then incorporated into crystals and nuclei. Since the formation of crystals is irreversible and their subsequent growth inevitable, crystals forming in the continuous domain effectively act as a sink for monomers “sucking” monomers from the drops. In this case, numerical calculations are presented which are consistent with experimental observations. In the case in which critical crystal formation does not occur, the stationary solution is found and a linear stability analysis is performed. Bifurcation diagrams describing the loci of stationary solutions, which may be multiple, are numerically calculated.
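For readers unfamiliar with the Becker–Döring structure underlying the nucleation description, a minimal single-domain sketch follows; the rate coefficients and truncation size are assumptions, and the paper's two-domain model with inter-domain transport is considerably richer. Cluster concentrations c_s change through the fluxes J_s = a_s c_1 c_s − b_{s+1} c_{s+1}, and the monomer equation is chosen so that the total particle number Σ s·c_s is explicitly conserved, as in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

S = 50           # truncation: largest cluster size tracked (assumed)
a, b = 1.0, 0.5  # size-independent attachment/detachment rates (assumed)

def becker_doring(t, c):
    J = a * c[0] * c[:-1] - b * c[1:]  # J[i] is the flux J_{i+1}
    dc = np.zeros_like(c)
    dc[1:-1] = J[:-1] - J[1:]          # d c_s/dt = J_{s-1} - J_s
    dc[-1] = J[-1]
    dc[0] = -J[0] - np.sum(J)          # monomer loss conserves total mass
    return dc

c0 = np.zeros(S); c0[0] = 1.0          # all mass starts as monomers
sol = solve_ivp(becker_doring, (0, 50), c0, method="LSODA")
mass = np.arange(1, S + 1) @ sol.y     # total particle number over time
print(mass[0], mass[-1])               # conserved, up to solver tolerance
```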
Abstract:
Cholesterol is one of the key constituents for maintaining the cellular membrane and thus the integrity of the cell itself. In contrast, high levels of cholesterol in the blood are known to be a major risk factor in the development of cardiovascular disease. We formulate a deterministic nonlinear ordinary differential equation model of the sterol regulatory element binding protein 2 (SREBP-2) cholesterol genetic regulatory pathway in a hepatocyte. The mathematical model includes a description of gene transcription by SREBP-2, yielding mRNA that is subsequently translated, leading to the formation of 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMGCR), a key enzyme in cholesterol synthesis. Cholesterol synthesis in turn regulates SREBP-2 via a negative feedback formulation. Parameterised with data from the literature, the model is used to understand how SREBP-2 transcription and regulation affect cellular cholesterol concentration. Model stability analysis shows that the only positive steady state of the system exhibits purely oscillatory, damped oscillatory, or monotonic behaviour under certain parameter conditions. In light of our findings we postulate how cholesterol homeostasis is maintained within the cell, and the advantages of our model formulation are discussed with respect to other models of genetic regulation in the literature.
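The loop described above (SREBP-2-driven transcription → mRNA → HMGCR → cholesterol, with cholesterol repressing SREBP-2 activity) has the shape of a classical negative-feedback genetic oscillator. The sketch below is a generic Goodwin-style caricature of that loop, not the paper's parameterised model; all rates and the Hill coefficient are assumptions.

```python
from scipy.integrate import solve_ivp

def srebp2_loop(t, y, k=1.0, d=0.3, n=8):
    m, h, c = y  # HMGCR mRNA, HMGCR protein, cholesterol
    # Cholesterol represses SREBP-2-driven transcription (Hill repression).
    dm = k / (1.0 + c**n) - d * m   # transcription minus mRNA decay
    dh = k * m - d * h              # translation minus protein turnover
    dc = k * h - d * c              # HMGCR-driven synthesis minus removal
    return [dm, dh, dc]

sol = solve_ivp(srebp2_loop, (0, 100), [0.1, 0.1, 0.1], max_step=0.1)
print(sol.y[2, -5:])  # damped or sustained oscillations, depending on n and d
```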
Abstract:
In this paper we propose methods for computing Fresnel integrals based on truncated trapezium rule approximations to integrals on the real line, with the trapezium rules modified to take into account poles of the integrand near the real axis. Our starting point is a method for computing the error function of complex argument due to Matta and Reichel (J Math Phys 34:298–307, 1956) and Hunter and Regan (Math Comp 26:539–541, 1972). We construct approximations which we prove are exponentially convergent as a function of N, the number of quadrature points, obtaining explicit error bounds which show that accuracies of 10⁻¹⁵ uniformly on the real line are achieved with N = 12, as confirmed by computations. The approximations we obtain are additionally attractive in that they maintain small relative errors for both small and large argument, are analytic on the real axis (echoing the analyticity of the Fresnel integrals), and are straightforward to implement.
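The paper's pole-corrected, exponentially convergent rules are not reproduced here, but the targets are the defining integrals C(x) = ∫₀ˣ cos(πt²/2) dt and S(x) = ∫₀ˣ sin(πt²/2) dt. The sketch below evaluates them with a plain, unmodified trapezium rule and checks against scipy's reference values; its error decays only algebraically in the number of points, which is precisely the limitation the paper's modified rules remove.

```python
import numpy as np
from scipy.special import fresnel

def fresnel_trapezium(x: float, n: int = 2000):
    """Naive trapezium-rule evaluation of the defining Fresnel integrals."""
    t = np.linspace(0.0, x, n)
    phase = 0.5 * np.pi * t**2
    C = np.trapz(np.cos(phase), t)
    S = np.trapz(np.sin(phase), t)
    return C, S

C_approx, S_approx = fresnel_trapezium(2.0)
S_ref, C_ref = fresnel(2.0)  # note: scipy returns (S, C) in that order
# Errors shrink only like O(n**-2), versus exponential decay in the paper.
print(abs(C_approx - C_ref), abs(S_approx - S_ref))
```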
Abstract:
In this study, the authors discuss the effective use of technology to solve the problem of deciding on journey start times under recurrent traffic conditions. The developed algorithm guides vehicles onto more reliable routes that are not easily prone to congestion or travel delays, ensures that the start time is as late as possible so that the traveller does not wait too long at the destination, and attempts to minimise the travel time. Experiments show that, in order to be more certain of reaching their destination on time, a traveller has to leave early and correspondingly arrive early, resulting in a long waiting time. The application developed here asks the user to set this certainty factor as appropriate for the task at hand, and computes the best start time and route.
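An illustrative reconstruction of such a decision rule follows (ours, not the authors' algorithm; the travel-time samples and the quantile treatment of the certainty factor are assumptions): for each candidate route, budget the travel-time quantile matching the requested certainty, and pick the route that allows the latest departure while still meeting the arrival deadline with that probability.

```python
import numpy as np

def best_departure(routes: dict, arrival_deadline: float, certainty: float):
    """routes maps route name -> array of observed travel times (minutes).
    Returns (route, latest start time) meeting the certainty requirement."""
    best = None
    for name, samples in routes.items():
        # Travel time to budget so as to arrive on time with prob >= certainty.
        budget = np.quantile(samples, certainty)
        start = arrival_deadline - budget
        if best is None or start > best[1]:
            best = (name, start)
    return best

routes = {"A40": np.random.normal(35, 4, 500),    # reliable route
          "M4":  np.random.normal(30, 12, 500)}   # faster on average, volatile
# Deadline 540 = 9:00 am in minutes since midnight; demand 95% certainty.
print(best_departure(routes, arrival_deadline=540, certainty=0.95))
```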
Abstract:
Smart meters are becoming more ubiquitous as governments aim to reduce the risks to the energy supply while the world moves toward a low carbon economy. The data they provide could create a wealth of information for better understanding customer behaviour. However, at the household, and even the low voltage (LV) substation level, energy demand is extremely volatile, irregular, and noisy compared to the demand at the high voltage (HV) substation level. Novel analytical methods will be required in order to optimise the use of household-level data. In this paper we briefly outline some mathematical techniques which will play a key role in better understanding customer behaviour and in creating solutions for supporting the network at the LV substation level.
Abstract:
Mathematical ability is heritable, but few studies have directly investigated its molecular genetic basis. Here we aimed to identify specific genetic contributions to variation in mathematical ability. We carried out a genome-wide association scan using pooled DNA in two groups of U.K. samples, selected on end-of-secondary-school national academic exam achievement: high (n = 419) versus low (n = 183) mathematical ability, while controlling for verbal ability. Significant differences in allele frequencies between these groups were searched for in 906,600 SNPs using the Affymetrix GeneChip Human Mapping version 6.0 array. After meeting a threshold of p < 1.5 × 10⁻⁵, 12 SNPs from the pooled association analysis were individually genotyped in 542 of the participants and analyzed to validate the initial associations (lowest p-value 1.14 × 10⁻⁶). In this analysis, one of the SNPs (rs789859) showed significant association after Bonferroni correction, and four (rs10873824, rs4144887, rs12130910, rs2809115) were nominally significant (lowest p-value 3.278 × 10⁻⁴). Three of the SNPs of interest are located within, or near to, known genes (FAM43A, SFT2D1, C14orf64). The SNP that showed the strongest association, rs789859, is located in a region on chromosome 3q29 that has previously been linked to learning difficulties and autism. rs789859 lies 1.3 kbp downstream of LSG1 and 700 bp upstream of FAM43A, mapping within the potential promoter/regulatory region of the latter. To our knowledge, this is only the second study to investigate the association of genetic variants with mathematical ability, and it highlights a number of interesting markers for future study.
Abstract:
SOA (Service Oriented Architecture), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents some challenges. Scientific research increasingly finds it difficult to handle "big data" using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of how the above-mentioned informatics techniques can be used to develop appropriate e-Science infrastructures and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our contributions include identifying the associated research challenges and opportunities, presenting lessons learned, and describing our vision for the future application of Cloud computing to e-Science. We believe our findings can help indicate future trends in e-Science and can inform funding and research directions on how to employ computing technologies in scientific research more appropriately. We point out open research issues in the hope of sparking new development and innovation in the e-Science field.
Abstract:
Rhythms are manifested ubiquitously in dynamical biological processes. These fundamental processes, which are necessary for the survival of living organisms, include metabolism, breathing, the heartbeat, and, above all, the circadian rhythm coupled to the diurnal cycle. Thus, in mathematical biology, biological processes are often represented as linear or nonlinear oscillators. In the framework of nonlinear and dissipative systems (i.e., systems with a flow of energy, substances, or sensory information), such oscillators generate stable internal oscillations as a response to environmental input and, in turn, utilise that output as a means of coupling with the environment.
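As one concrete instance of such a nonlinear, dissipative oscillator, the sketch below integrates the Van der Pol equation, a textbook example chosen here for illustration (it is not taken from the source): the damping term removes energy at large amplitude and injects it at small amplitude, so trajectories from very different initial states settle onto the same stable internal oscillation (a limit cycle).

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, y, mu=2.0):
    x, v = y
    # Nonlinear damping: dissipative for |x| > 1, energy-injecting for |x| < 1.
    return [v, mu * (1 - x**2) * v - x]

for y0 in ([0.1, 0.0], [4.0, 0.0]):     # very different initial states...
    sol = solve_ivp(van_der_pol, (0, 100), y0, max_step=0.05)
    print(np.max(sol.y[0, -500:]))      # ...converge to the same amplitude
```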