Abstract:
Fractional Fokker-Planck equations (FFPEs) have gained much interest recently for describing transport dynamics in complex systems that are governed by anomalous diffusion and nonexponential relaxation patterns. However, effective numerical methods and analytic techniques for the FFPE are still in their embryonic state. In this paper, we consider a class of time-space fractional Fokker-Planck equations with a nonlinear source term (TSFFPE-NST), which involve the Caputo time fractional derivative (CTFD) of order α ∈ (0, 1) and the symmetric Riesz space fractional derivative (RSFD) of order μ ∈ (1, 2). By approximating the CTFD with the L1 algorithm and the RSFD with the shifted Grünwald method, we obtain a computationally effective numerical method for solving the TSFFPE-NST. The stability and convergence of the proposed numerical method are investigated. Finally, numerical experiments are carried out to support the theoretical claims.
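For illustration (this sketch is not taken from the paper itself), the coefficient sequences underlying both approximations are simple to generate: the shifted Grünwald weights via the standard recursion, and the L1 coefficients in closed form. The orders and step counts below are arbitrary example values.

```python
def grunwald_weights(mu, n):
    """Shifted Grunwald-Letnikov weights g_k = (-1)^k * C(mu, k),
    generated by the standard recursion g_0 = 1, g_k = g_{k-1} * (1 - (mu + 1)/k)."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (mu + 1.0) / k))
    return g

def l1_coefficients(alpha, n):
    """L1-scheme coefficients b_j = (j + 1)^(1 - alpha) - j^(1 - alpha)
    for the Caputo derivative of order alpha in (0, 1)."""
    return [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n + 1)]

# For mu = 2 the Grunwald weights reduce to the classical
# second-difference stencil 1, -2, 1:
print(grunwald_weights(2.0, 3))  # [1.0, -2.0, 1.0, 0.0]
```

The recursion avoids evaluating binomial coefficients directly, and the L1 coefficients are positive and monotonically decreasing, properties that the stability analysis of such schemes typically relies on.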
Abstract:
Fractional Fokker–Planck equations have been used to model several physical situations that exhibit anomalous diffusion. In this paper, a class of time- and space-fractional Fokker–Planck equations (TSFFPE), which involve the Riemann–Liouville time-fractional derivative of order 1 − α (α ∈ (0, 1)) and the Riesz space-fractional derivative (RSFD) of order μ ∈ (1, 2), is considered. The solution of the TSFFPE is important for describing the competition between subdiffusion and Lévy flights. However, effective numerical methods for solving the TSFFPE are still in their infancy. We present three computationally efficient numerical methods to deal with the RSFD, and approximate the Riemann–Liouville time-fractional derivative using the Grünwald method. The TSFFPE is then transformed into a system of ordinary differential equations (ODEs), which is solved by the fractional implicit trapezoidal method (FITM). Finally, numerical results are given to demonstrate the effectiveness of these methods. These techniques can also be applied to solve other types of fractional partial differential equations.
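One common way to discretise the Riesz derivative before the method-of-lines step is to assemble a differentiation matrix from shifted Grünwald weights, symmetrised for the two-sided derivative. This is a generic sketch under stated assumptions, not necessarily one of the paper's three schemes; the grid size and order are illustrative.

```python
import math

def grunwald_weights(mu, n):
    # Recursion for g_k = (-1)^k * C(mu, k)
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (mu + 1.0) / k))
    return g

def riesz_matrix(mu, n, h):
    """Dense matrix approximating the Riesz fractional derivative of
    order mu in (1, 2) on n interior grid points with spacing h,
    built from shifted Grunwald weights: D = c * (A + A^T) / h^mu,
    with c = -1 / (2 * cos(pi * mu / 2))."""
    g = grunwald_weights(mu, n + 1)
    c = -1.0 / (2.0 * math.cos(math.pi * mu / 2.0))
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            k = i - j + 1  # the shift by one gives the weight index
            if k >= 0:
                A[i][j] = g[k]
    return [[c * (A[i][j] + A[j][i]) / h ** mu for j in range(n)]
            for i in range(n)]

# Sanity check: for mu = 2 the matrix reduces to the standard
# second-difference Laplacian with stencil (1, -2, 1) / h^2.
print(riesz_matrix(2.0, 3, 1.0)[0])  # [-2.0, 1.0, 0.0]
```

With the space dimension handled by such a matrix, the remaining time-fractional problem becomes a (fractional) ODE system, which is the form an implicit trapezoidal-type solver operates on.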
Abstract:
We consider a time- and space-symmetric fractional diffusion equation (TSS-FDE) under homogeneous Dirichlet conditions and homogeneous Neumann conditions. The TSS-FDE is obtained from the standard diffusion equation by replacing the first-order time derivative with the Caputo fractional derivative and the second-order space derivative with the symmetric fractional derivative. Firstly, a method of separating variables is used to express the analytical solution of the TSS-FDE in terms of the Mittag–Leffler function. Secondly, we propose two numerical methods to approximate the Caputo time fractional derivative, namely, the finite difference method and the Laplace transform method. The symmetric space fractional derivative is approximated using the matrix transform method. Finally, numerical results are presented to demonstrate the effectiveness of the numerical methods and to confirm the theoretical claims.
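The Mittag–Leffler function that appears in the separated-variables solution can be approximated, for moderate arguments, by truncating its defining power series. The sketch below is a generic illustration (with an illustrative truncation length), not the evaluation scheme used in the paper; for large |z| more sophisticated algorithms are needed.

```python
import math

def mittag_leffler(alpha, z, n_terms=40):
    """Truncated series E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1).
    Adequate only for moderate |z| and alpha > 0; large n_terms can
    overflow math.gamma, so the truncation length is kept modest."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))

# Special cases recover familiar functions:
#   E_1(z) = exp(z)  and  E_2(-z**2) = cos(z).
print(abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-9)  # True
```

For α ∈ (0, 1), E_α(−λ t^α) decays more slowly than an exponential, which is exactly the nonexponential relaxation behaviour fractional diffusion models are used to capture.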
Abstract:
The detection and potential treatment of oxidative stress in biological systems has been explored using isoindoline-based nitroxide radicals. A novel tetraethyl-fluorescein nitroxide was synthesised for use as a profluorescent probe for redox processes in biological systems. Both this tetraethyl system and a tetramethyl-fluorescein nitroxide were shown to be sensitive and selective probes for superoxide in vitro. The redox environment of cellular systems was also explored using the tetramethyl-fluorescein species, based on its reduction to the hydroxylamine. Flow cytometry was employed to assess the extent of nitroxide reduction, reflecting the overall cellular redox environment. Treatment of normal fibroblasts with rotenone and 2-deoxyglucose resulted in an oxidising cellular environment, as shown by the lack of reduction of the fluorescein-nitroxide system. Assessment of the tetraethyl-fluorescein nitroxide system in the same way demonstrated its enhanced resistance to reduction, offering the potential to detect and image biologically relevant reactive oxygen species directly. Importantly, these profluorescent nitroxide compounds were shown to be more effective than the more widely used and commercially available probes for reactive oxygen species, such as 2′,7′-dichlorodihydrofluorescein diacetate. Fluorescence imaging of the tetramethyl-fluorescein nitroxide and a number of other rhodamine-nitroxide derivatives was undertaken, revealing the differential cellular localisation of these systems and thus their potential for the detection of redox changes in specific cellular compartments. As well as developing novel methods for the detection of oxidative stress, a number of novel isoindoline nitroxides were synthesised for their potential application as small-molecule antioxidants. These compounds incorporated known pharmacophores into the isoindoline-nitroxide structure in an attempt to increase their efficacy in biological systems.
A primary and a secondary amine nitroxide were synthesised which incorporated the phenethylamine backbone of the sympathomimetic amine class of drugs. Initial assessment of the novel primary amine derivative indicated a protective effect comparable to that of 5-carboxy-1,1,3,3-tetramethylisoindolin-2-yloxyl. Methoxy-substituted nitroxides were also synthesised as potential antioxidants, given their structural similarity to some amphetamine-type stimulants. A copper-catalysed methodology provided access to both the mono- and di-substituted methoxy-nitroxides. Deprotection of the ethers in these compounds using boron tribromide successfully produced a phenol-nitroxide; however, the catechol moiety in the disubstituted derivative appeared to react with the nitroxide to produce quinone-like degradation products. A novel fluoran-nitroxide was also synthesised from the methoxy-substituted nitroxide, providing a pH-sensitive spin probe. An amino-acid precursor containing a nitroxide moiety was also synthesised for application as a dual-action antioxidant. N-Acetyl protection of the nitroxide radical was necessary prior to the Erlenmeyer reaction with N-acetyl glycine. Hydrolysis and reduction of the azlactone intermediate produced a novel amino-acid precursor with significant potential as an effective antioxidant.
Abstract:
The Australian report for the Global Media Monitoring Project 2010 (GMMP 2010) involved a study of 374 stories that were sampled from 26 Australian newspapers, radio and television stations, and internet news services on 10 November 2009. This snapshot of reporting on that day suggests that women are under-represented in the Australian news media as both the sources and creators of news. Females made up only 24% of the 1012 news sources who were heard, read about or seen in the stories that were studied. Neglect of female sources was particularly noticeable in sports news. Women made up only 1% of the 142 sources who were talked about or quoted in sports stories. Female sources of news were disproportionately portrayed as celebrities and victims. Although women made up only 24% of sources overall, they comprised 44% of victims of crimes, accidents, war, health problems, or discrimination. Unsurprisingly, women made up 32% of sources in stories about violent crimes and 29% in stories about disasters, accidents or emergencies – usually in the role of victim. Females were commonly defined in terms of their status as a mother, daughter, wife, sister or other family relationship. Family status was mentioned for 33% of women quoted or discussed in the news stories compared to only 13% of male sources. Women also made up 75% of sources described as homemakers or parents. The Australian GMMP 2010 study also indicates a gender division among the journalists who wrote or presented the news. Only 32% of the stories were written or presented by female reporters and newsreaders. The gender inequality was again most evident in sports journalism. Findings from the Australian report also contributed to the GMMP 2010 Global Report and the Pacific GMMP 2010 Regional Report, which are available at http://whomakesthenews.org/gmmp/gmmp-reports/gmmp-2010-reports
Abstract:
This report analyses the national curriculum and workforce needs of the social work and human services workforce. Australia’s community and health services are among the fastest growing sectors of employment in the nation, but the sustainability of an appropriately qualified workforce is threatened. Yet there is little integration of education and workforce planning for the community services sector. This contrasts markedly with the health services sector, where key stakeholders are collaboratively addressing workforce challenges. Our research confirmed rapid growth in the social work and human services workforce, and it also identified:
• an undersupply of professionally qualified social work and human service practitioners to meet workforce demand;
• the rapid ageing of the workforce, with many workers approaching retirement;
• limited career and salary structures creating disincentives to retention;
• a highly diverse qualification base across the workforce. This diversity is inconsistent with the specialist knowledge and skills required of practitioners in many domains of community service provision.
Our study revealed a lack of co-ordination across VET and higher education to meet the educational needs of the social work and human services workforce. Our analysis identified:
• strong representation of equity groups in social work and related human service programs, although further participation of these groups is still needed;
• the absence of clear articulation pathways between VET and higher education programs, due to the absence of co-ordination and planning between these sectors;
• substantial variation in the content of the diverse range of social work and human service programs, with accredited programs conforming to national standards and some others in social and behavioural sciences lacking any external validation;
• financial obstacles and disincentives to social work and human service practitioners in achieving postgraduate-level qualifications.
We recommend that:
• DEEWR identify accredited social work and human services courses as a national education priority (similar to education and nursing). This will help ensure the supply of professional workers to this sector;
• VET and higher education providers be encouraged to collaboratively develop clear and accessible educational pathways across the educational sectors;
• DEEWR undertake a national workforce analysis and planning process in collaboration with CSDMAC and all social and community services stakeholders, to ensure workforce sustainability; and
• COAG develop a national regulation framework for the social and community services workforce. This would provide the sound accountability systems and the rigorous practice and educational standards necessary for quality service provision. It would also ensure much-needed public confidence in this workforce.
Abstract:
Many cities worldwide face the prospect of major transformation as the world moves towards a global information order. In this new era, urban economies are being radically altered by dynamic processes of economic and spatial restructuring. The result is the creation of ‘informational cities’ or, as they are now more popularly known, ‘knowledge cities’. For the last two centuries, social production had been primarily understood and shaped by neo-classical economic thought that recognized only three factors of production: land, labor and capital. Knowledge, education, and intellectual capacity were secondary, if not incidental, factors. Human capital was assumed to be either embedded in labor or just one of numerous categories of capital. In recent decades, it has become apparent that knowledge is sufficiently important to deserve recognition as a fourth factor of production. Knowledge and information, and the social and technological settings for their production and communication, are now seen as keys to development and economic prosperity. The rise of knowledge-based opportunity has, in many cases, been accompanied by a concomitant decline in traditional industrial activity. The replacement of physical commodity production by more abstract forms of production (e.g. information, ideas, and knowledge) has, however paradoxically, reinforced the importance of central places and led to the formation of knowledge cities. Knowledge is produced, marketed and exchanged mainly in cities. Knowledge cities therefore aim to assist decision-makers in making their cities compatible with the knowledge economy and thus able to compete with other cities. Knowledge cities enable their citizens to foster knowledge creation, knowledge exchange and innovation. They also encourage the continuous creation, sharing, evaluation, renewal and updating of knowledge. To compete nationally and internationally, cities need knowledge infrastructures (e.g.
universities, research and development institutes); a concentration of well-educated people; technological, mainly electronic, infrastructure; and connections to the global economy (e.g. international companies and finance institutions for trade and investment). Moreover, they must possess the people and things necessary for the production of knowledge and, as importantly, function as breeding grounds for talent and innovation. The economy of a knowledge city creates high value-added products using research, technology, and brainpower. The private and public sectors value knowledge, spend money on its discovery and dissemination and, ultimately, harness it to create goods and services. Although many cities call themselves knowledge cities, currently only a few cities around the world (e.g., Barcelona, Delft, Dublin, Montreal, Munich, and Stockholm) have earned that label. Many other cities aspire to the status of knowledge city through urban development programs that target knowledge-based urban development. Examples include Copenhagen, Dubai, Manchester, Melbourne, Monterrey, Singapore, and Shanghai.
Knowledge-Based Urban Development
To date, the development of most knowledge cities has proceeded organically, as a dependent and derivative effect of global market forces. Urban and regional planning has responded slowly, and sometimes not at all, to the challenges and opportunities of the knowledge city. That is changing, however. Knowledge-based urban development potentially brings both economic prosperity and a sustainable socio-spatial order. Its goal is to produce and circulate abstract work. The globalization of the world in the last decades of the twentieth century was a dialectical process. On one hand, as the tyranny of distance was eroded, economic networks of production and consumption were constituted at a global scale. At the same time, spatial proximity remained as important as ever, if not more so, for knowledge-based urban development.
Mediated by information and communication technology, personal contact, and the medium of tacit knowledge, organizational and institutional interactions are still closely associated with spatial proximity. The clustering of knowledge production is essential for fostering innovation and wealth creation. The social benefits of knowledge-based urban development extend beyond aggregate economic growth. On the one hand is the possibility of a particularly resilient form of urban development, secured in a network of connections anchored at local, national, and global coordinates. On the other hand, quality of place and life, defined by the level of public services (e.g. health and education) and by the conservation and development of the cultural, aesthetic and ecological values that give cities their character and attract or repel the creative class of knowledge workers, is a prerequisite for successful knowledge-based urban development. The goal is a secure economy in a human setting: in short, smart growth or sustainable urban development.
Abstract:
Homologous recombinational repair is an essential mechanism for repair of double-strand breaks in DNA. Recombinases of the RecA-fold family play a crucial role in this process, forming filaments that utilize ATP to mediate their interactions with single- and double-stranded DNA. The recombinase molecules present in the archaea (RadA) and eukaryota (Rad51) are more closely related to each other than to their bacterial counterpart (RecA) and, as a result, RadA makes a suitable model for the eukaryotic system. The crystal structure of Sulfolobus solfataricus RadA has been solved to a resolution of 3.2 Å in the absence of nucleotide analogues or DNA, revealing a narrow filamentous assembly with three molecules per helical turn. As observed in other RecA-family recombinases, each RadA molecule in the filament is linked to its neighbour via interactions of a short β-strand with the neighbouring ATPase domain. However, despite apparent flexibility between domains, comparison with other structures indicates conservation of a number of key interactions that introduce rigidity to the system, allowing allosteric control of the filament by interaction with ATP. Additional analysis reveals that the interaction specificity of the five human Rad51 paralogues can be predicted using a simple model based on the RadA structure.
Abstract:
The title compound, C18H12N6O6, was prepared from the reaction of 4-(phenyldiazenyl)aniline (aniline yellow) with picrylsulfonic acid. The dihedral angle formed by the two benzene rings of the diphenyldiazenyl ring system is 6.55 (13)°, and that formed by the rings of the picrate–aniline ring system is 48.76 (12)°. The molecule contains an intramolecular aniline–nitro N-H...O hydrogen bond.
Abstract:
Building insulation is often used to reduce conduction heat transfer through the building envelope. The higher the level of insulation (i.e. the greater the R-value), the less conduction heat is transferred through the building envelope. In this paper, using building computer simulation techniques, the effects of building insulation levels on the thermal and energy performance of a sample air-conditioned office building in Australia are studied. It is found that, depending on the type of building and the climate in which it is located, increasing the level of building insulation will not always bring benefits in energy saving and thermal comfort, particularly for internal-load dominated office buildings located in temperate/tropical climates. The possible implications of building insulation in the face of global warming have also been examined. Compared with the influence of insulation on building thermal performance, the influence on building energy use is relatively small.
Abstract:
Sustainable practices are more than ever on the radar of organizations, driven by growing demand from the wider population for approaches and practices that can be considered "green" or "sustainable". Our specific intent with this call for action is to delve deeper into the role of business processes, and specifically the contributions that the management of these processes can make in leveraging the transformative power of information systems (IS) to create environmentally sustainable organizations. Our key premise is that business and information technology (IT) managers need to engage in a process-focused discussion to enable a common, comprehensive understanding of processes, and of the process-centered opportunities for making these processes, and ultimately the organization as a process-centric entity, "green". Based on a business process lifecycle model, we propose possible avenues for future research.
Abstract:
One of the prominent topics in Business Service Management is business models for (new) services. Business models are useful for service management and engineering as they provide a broader and more holistic perspective on services. Business models are particularly relevant for service innovation, as this requires paying attention to the business models that make new services viable, and business model innovation can drive the innovation of new and established services. Before we can look at business models for services, we first need to understand what business models are. This is not straightforward, as business models are still not well comprehended and knowledge about business models is fragmented across different disciplines, such as information systems, strategy, innovation, and entrepreneurship. This whitepaper, ‘Understanding business models,’ introduces readers to business models. It contributes to enhancing the understanding of business models, in particular their conceptualisation, by discussing and integrating business model definitions, frameworks and archetypes from different disciplines. After reading this whitepaper, the reader will have a well-developed understanding of what business models are and how the concept is sometimes interpreted and used in different ways. It will help readers assess their own understanding of business models and that of others. This will contribute to a better and more beneficial use of business models, an increase in shared understanding, and making it easier to work with business model techniques and tools.
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and is the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to the scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes in the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor-filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact-lens-wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity in quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value is extracted. This metric showed better sensitivity under low pattern disturbance and also improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of the tear film dynamics; for instance, the model derived for the build-up phase provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool for assessing tear film surface quality in the future.
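The polar-transform idea behind the block-processing metric can be sketched as follows. This is an illustrative reconstruction, not the thesis's actual routine: the synthetic ring image, grid sizes, and the per-row variance statistic are all assumptions made for the demonstration.

```python
import math
import random

def to_polar(img, cx, cy, n_r, n_theta):
    """Resample a square image onto an (r, theta) grid by nearest-neighbour
    lookup, so that concentric rings map to approximately straight rows."""
    h, w = len(img), len(img[0])
    r_max = min(cx, cy, w - 1 - cx, h - 1 - cy)
    polar = []
    for i in range(n_r):
        r = r_max * (i + 1) / n_r
        row = []
        for j in range(n_theta):
            t = 2.0 * math.pi * j / n_theta
            x = int(round(cx + r * math.cos(t)))
            y = int(round(cy + r * math.sin(t)))
            row.append(img[y][x])
        polar.append(row)
    return polar

def row_variance_metric(polar):
    """Mean per-row variance: low for an undisturbed ring pattern,
    higher when the reflected pattern is irregular."""
    total = 0.0
    for row in polar:
        mean = sum(row) / len(row)
        total += sum((v - mean) ** 2 for v in row) / len(row)
    return total / len(polar)

# Synthetic Placido-style ring pattern on a 101 x 101 grid,
# plus a disturbed copy with additive noise.
N = 101
cx = cy = N // 2
smooth = [[math.cos(0.5 * math.hypot(x - cx, y - cy)) for x in range(N)]
          for y in range(N)]
random.seed(0)
noisy = [[v + random.uniform(-0.3, 0.3) for v in row] for row in smooth]

m_smooth = row_variance_metric(to_polar(smooth, cx, cy, 20, 90))
m_noisy = row_variance_metric(to_polar(noisy, cx, cy, 20, 90))
print(m_smooth < m_noisy)  # True: the disturbed pattern scores worse
```

After the transform, each row corresponds to one ring radius, so a simple per-block statistic distinguishes a smooth tear film (near-constant rows) from a disturbed one, mirroring the rationale given for the extra metric above.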
Abstract:
Human hair fibres are ubiquitous in nature and are found frequently at crime scenes, often as a result of exchange between the perpetrator, victim and/or the surroundings according to Locard's Principle. Therefore, hair fibre evidence can provide important information for crime investigation. For human hair evidence, the current forensic methods of analysis rely on comparisons of either hair morphology by microscopic examination or nuclear and mitochondrial DNA analyses. Unfortunately, in some instances the utilisation of microscopy and DNA analyses is difficult and often not feasible. This dissertation is arguably the first comprehensive investigation aimed at comparing, classifying and identifying single human scalp hair fibres with the aid of FTIR-ATR spectroscopy in a forensic context. Spectra were collected from the hair of 66 subjects of Asian, Caucasian and African (i.e. African-type) origin. The fibres ranged from untreated to variously mildly and heavily cosmetically treated hairs. The collected spectra reflect the physical and chemical nature of a hair near the surface, particularly the cuticle layer. In total, 550 spectra were acquired and processed to construct a relatively large database. To assist with the interpretation of the complex spectra from various types of human hair, derivative spectroscopy and chemometric methods were utilised, including Principal Component Analysis (PCA), Fuzzy Clustering (FC), and the Multi-Criteria Decision Making (MCDM) programs Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE) and Geometrical Analysis for Interactive Aid (GAIA). FTIR-ATR spectroscopy had two important advantages over previous methods: (i) sample throughput and spectral collection were significantly improved (no physical flattening or microscope manipulations), and (ii) given the recent advances in FTIR-ATR instrument portability, there is real potential to transfer this work's findings seamlessly to field applications.
The "raw" spectra, spectral subtractions and second derivative spectra were compared to demonstrate the subtle differences in human hair. SEM images were used as corroborative evidence to demonstrate the surface topography of hair. It indicated that the condition of the cuticle surface could be of three types: untreated, mildly treated and treated hair. Extensive studies of potential spectral band regions responsible for matching and discrimination of various types of hair samples suggested the 1690-1500 cm-1 IR spectral region was to be preferred in comparison with the commonly used 1750-800 cm-1. The principal reason was the presence of the highly variable spectral profiles of cystine oxidation products (1200-1000 cm-1), which contributed significantly to spectral scatter and hence, poor hair sample matching. In the preferred 1690-1500 cm-1 region, conformational changes in the keratin protein attributed to the α-helical to β-sheet transitions in the Amide I and Amide II vibrations and played a significant role in matching and discrimination of the spectra and hence, the hair fibre samples. For gender comparison, the Amide II band is significant for differentiation. The results illustrated that the male hair spectra exhibit a more intense β-sheet vibration in the Amide II band at approximately 1511 cm-1 whilst the female hair spectra displayed more intense α-helical vibration at 1520-1515cm-1. In terms of chemical composition, female hair spectra exhibit greater intensity of the amino acid tryptophan (1554 cm-1), aspartic and glutamic acid (1577 cm-1). It was also observed that for the separation of samples based on racial differences, untreated Caucasian hair was discriminated from Asian hair as a result of having higher levels of the amino acid cystine and cysteic acid. However, when mildly or chemically treated, Asian and Caucasian hair fibres are similar, whereas African-type hair fibres are different. 
In terms of the investigation's novel contribution to the field of forensic science, it has allowed for the development of a novel, multifaceted, methodical protocol where previously none had existed. The protocol is a systematic method to rapidly investigate unknown or questioned single human hair FTIR-ATR spectra from different genders and racial origins, including fibres with different cosmetic treatments. Unknown or questioned spectra are first separated on the basis of chemical treatment (i.e. untreated, mildly treated or chemically treated), then gender, and then racial origin (i.e. Asian, Caucasian and African-type). The methodology has the potential to complement the current forensic methods of fibre-evidence analysis (i.e. microscopy and DNA), providing information at the morphological, genetic and structural levels.
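As an aside on why derivative spectra help in this kind of workflow: a minimal sketch (using synthetic data, not hair spectra) shows that a central second difference cancels flat baseline offsets between spectra, which is one of the standard motivations for second-derivative spectroscopy.

```python
import math

def second_derivative(spectrum):
    """Central second difference d2[i] = s[i-1] - 2*s[i] + s[i+1].
    Constant baseline offsets between spectra cancel out."""
    return [spectrum[i - 1] - 2.0 * spectrum[i] + spectrum[i + 1]
            for i in range(1, len(spectrum) - 1)]

# A synthetic Gaussian 'band' and the same band with a flat baseline offset:
band = [math.exp(-((x - 10.0) ** 2) / 8.0) for x in range(21)]
offset_band = [v + 0.25 for v in band]

d1 = second_derivative(band)
d2 = second_derivative(offset_band)
print(max(abs(a - b) for a, b in zip(d1, d2)))  # effectively zero
```

Because baseline differences vanish while band shapes sharpen, comparing spectra in second-derivative form emphasises the compositional differences the classification relies on rather than instrument or sampling offsets.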
Abstract:
In this paper, we consider the variable-order Galilei-invariant advection diffusion equation with a nonlinear source term. A numerical scheme with first-order temporal accuracy and second-order spatial accuracy is developed to simulate the equation. The stability and convergence of the numerical scheme are analyzed. In addition, another numerical scheme with improved temporal accuracy is also developed. Finally, some numerical examples are given, and the results demonstrate the effectiveness of the theoretical analysis.
Keywords: variable-order Galilei-invariant advection diffusion equation with a nonlinear source term; variable-order Riemann–Liouville fractional partial derivative; stability; convergence; numerical scheme with improved temporal accuracy