938 results for Vapour–liquid–liquid equilibrium


Relevance:

10.00%

Publisher:

Abstract:

A holistic study of the composition of the basalt groundwaters of the Atherton Tablelands region in Queensland, Australia was undertaken to elucidate possible mechanisms for the evolution of these very low salinity, silica- and bicarbonate-rich groundwaters. It is proposed that aluminosilicate mineral weathering is the major contributing process to the overall composition of the basalt groundwaters. The groundwaters approach equilibrium with respect to the primary minerals with increasing pH and are mostly in equilibrium with the major secondary minerals (kaolinite and smectite), and other secondary phases such as goethite, hematite, and gibbsite, which are common accessory minerals in the Atherton basalts. The mineralogy of the basalt rocks, which has been examined using X-ray diffraction and whole rock geochemistry methods, supports the proposed model for the hydrogeochemical evolution of these groundwaters: precipitation + CO2 (atmospheric + soil) + pyroxene + feldspars + olivine yields H4SiO4, HCO3−, Mg2+, Na+, Ca2+ + kaolinite and smectite clays + amorphous or crystalline silica + accessory minerals (hematite, goethite, gibbsite, carbonates, zeolites, and pyrite). The variations in the mineralogical content of these basalts also provide insights into the controls on groundwater storage and movement in this aquifer system. The fresh and weathered vesicular basalts are considered to be important in terms of zones of groundwater occurrence, while the fractures in the massive basalt are important pathways for groundwater movement.
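Whether a water is undersaturated, at equilibrium, or supersaturated with respect to a mineral is conventionally summarised by a saturation index. A minimal sketch of that check (the IAP and Ksp values below are hypothetical placeholders, not data from the study):

```python
import math

def saturation_index(iap: float, ksp: float) -> float:
    """SI = log10(IAP / Ksp): negative means undersaturated, ~0 means the
    water is at equilibrium with the mineral, positive means supersaturated."""
    return math.log10(iap / ksp)

# Hypothetical ion activity product (IAP) and solubility product (Ksp)
# for an illustrative secondary mineral phase.
si_eq = saturation_index(iap=1.0e-9, ksp=1.0e-9)     # 0.0 -> at equilibrium
si_under = saturation_index(iap=1.0e-11, ksp=1.0e-9) # negative -> undersaturated
```

Speciation codes such as PHREEQC perform the same comparison for every mineral in their databases.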

Abstract:

Concerns regarding students' learning and reasoning in chemistry classrooms are well documented. Students' reasoning in chemistry should be characterized by conscious consideration of chemical phenomena from laboratory work at the macroscopic, molecular/sub-micro and symbolic levels. Further, students should develop metacognition in relation to such ways of reasoning about chemistry phenomena. Classroom change that elicits metacognitive experiences and metacognitive reflection is necessary to shift entrenched views of teaching and learning in students. In this study, Activity Theory is used as the framework for interpreting changes to the rules/customs and tools of the activity systems of two different classes of students taught by the same teacher, Frances, who was teaching chemical equilibrium to those classes in consecutive years. An interpretive methodology involving multiple data sources was employed. Frances explicitly changed her pedagogy in the second year to direct students' attention increasingly to chemical phenomena at the molecular/sub-micro level. Additionally, she asked students not to use the textbook until towards the end of the equilibrium unit and sought to engage them in using their prior knowledge of chemistry to understand their observations from experiments. Frances' changed pedagogy elicited metacognitive experiences and reflection in students and challenged them to reconsider their metacognitive beliefs about learning chemistry and how it might be achieved. While teacher change is essential for science education reform, students are not passive players in change efforts, and they need to be convinced of the viability of teacher pedagogical change in the context of their own goals, intentions, and beliefs.

Abstract:

Evidence for a two-metal-ion mechanism for cleavage of the HH16 hammerhead ribozyme is provided by monitoring the rate of cleavage of the RNA substrate as a function of La3+ concentration in the presence of a constant concentration of Mg2+. We show that a bell-shaped curve of cleavage activation is obtained as La3+ is added in micromolar concentrations in the presence of 8 mM Mg2+, with a maximal rate of cleavage attained in the presence of 3 microM La3+. These results show that two metal-ion binding sites on the ribozyme regulate the rate of the cleavage reaction and, on the basis of earlier estimates of the Kd values for Mg2+ of 3.5 mM and > 50 mM, that these sites bind La3+ with estimated Kd values of 0.9 and > 37.5 microM, respectively. Furthermore, given the very different effects of these metal ions at the two binding sites, with displacement of Mg2+ by La3+ at the stronger (relative to Mg2+) binding site activating catalysis and displacement of Mg2+ by La3+ at the weaker (relative to Mg2+) binding site inhibiting catalysis, we show that the metal ions at these two sites play very different roles. We argue that the metal ion at binding site 1 coordinates the attacking 2'-oxygen species in the reaction and lowers the pKa of the attached proton, thereby increasing the concentration of the attacking alkoxide nucleophile in an equilibrium process. In contrast, the role of the metal ion at binding site 2 is to catalyze the reaction by absorbing the negative charge that accumulates at the leaving 5'-oxygen in the transition state. We suggest structural reasons why the Mg2+–La3+ ion combination is particularly suited to demonstrating these different roles of the two metal ions in the ribozyme cleavage reaction.
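The bell-shaped activation curve is what a simple independent-two-site competitive-binding model predicts: the rate is high when La3+ occupies the strong (activating) site while Mg2+ still holds the weak (inhibitory-when-La-bound) site. A sketch of that model, treating the abstract's Kd lower bounds (> 50 mM, > 37.5 microM) as point values purely for illustration:

```python
def la_occupancy(la_uM, mg_uM, kd_la_uM, kd_mg_uM):
    """Fraction of one site occupied by La3+ when La3+ and Mg2+ compete
    for it (simple competitive-binding isotherm)."""
    return (la_uM / kd_la_uM) / (1.0 + la_uM / kd_la_uM + mg_uM / kd_mg_uM)

def relative_cleavage_rate(la_uM, mg_mM=8.0):
    mg_uM = mg_mM * 1000.0
    # Site 1 (activating when La-bound): Kd(La) = 0.9 uM, Kd(Mg) = 3.5 mM.
    activating = la_occupancy(la_uM, mg_uM, 0.9, 3500.0)
    # Site 2 (inhibitory when La-bound): Kd(La) = 37.5 uM, Kd(Mg) = 50 mM.
    inhibiting = la_occupancy(la_uM, mg_uM, 37.5, 50000.0)
    return activating * (1.0 - inhibiting)

# Rate rises and then falls as [La3+] increases past a few micromolar.
rates = [relative_cleavage_rate(la) for la in (0.3, 3.0, 300.0)]
```

With these constants the model reproduces the qualitative bell shape reported in the abstract, peaking in the low-micromolar range.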

Abstract:

The paper investigates two advanced Computational Intelligence Systems (CIS) for morphing Unmanned Aerial Vehicle (UAV) aerofoil/wing shape design optimisation. The first CIS uses a Genetic Algorithm (GA); the second uses a Hybridised GA (HGA) that exploits the concept of Nash equilibrium to speed up the optimisation process, with the Nash game acting as a pre-conditioner. Both CISs are based on Pareto optimality and are coupled to an Euler-based Computational Fluid Dynamics (CFD) analyser and a Computer Aided Design (CAD) system during the optimisation.
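Both optimisers rank candidate aerofoil shapes by Pareto optimality. A minimal sketch of the non-dominated filtering step at the heart of such a ranking (generic minimisation; the objective values below are illustrative, not CFD results):

```python
def dominates(a, b):
    """True if candidate a Pareto-dominates b (minimisation): a is no worse
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Keep only the non-dominated candidates."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Illustrative (drag, structural-weight) pairs: (2, 3) is dominated by (2, 2).
front = pareto_front([(1, 3), (2, 2), (3, 1), (2, 3)])
```

In a GA of this kind, the non-dominated set guides selection; the Nash-game pre-conditioner splits the objectives among players to accelerate convergence toward that front.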

Abstract:

This article offers a critical exploration of the concept of resilience, which is largely conceptualized in the literature as an extraordinary, atypical personal ability to revert or ‘bounce back’ to a point of equilibrium despite significant adversity. While resilience has been explored in a range of contexts, there is little recognition of resilience as a social process arising from the mundane practices of everyday life and situated in person–environment interactions. Based on an ethnographic study among single refugee women with children in Brisbane, Australia, the women’s stories of navigating everyday tensions and opportunities revealed how resilience was a process operating inter-subjectively in the social spaces connecting them to their environment. Far beyond the simplistic binary of resilient versus non-resilient, we concern ourselves here with the everyday, processual, person–environment nature of the concept. We argue that more attention should be paid to the day-to-day pathways through which resilience outcomes are achieved, and that this has important implications for refugee mental health practice frameworks.

Abstract:

Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. Firstly, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. The alternative methodology extracts forecasts from the market-traded value of option contracts. An efficient options market should be able to produce superior forecasts, as it utilises a larger information set comprising not only historical information but also the market equilibrium expectations of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature along several lines through three empirical studies. Firstly, it is demonstrated that there are statistically significant benefits to adjusting implied volatility for the volatility risk premium for the purposes of univariate volatility forecasting. Secondly, high-frequency option implied measures are shown to lead to superior forecasts of the stochastic component of intraday volatility, and these in turn lead to superior forecasts of total intraday volatility. Finally, realised and option implied measures of equicorrelation are shown to dominate measures based on daily returns.
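A "statistical weighting scheme applied to historical measurements" can be as simple as an exponentially weighted moving average of squared returns. A sketch of that RiskMetrics-style recursion (the decay factor and the return series are illustrative):

```python
def ewma_variance_forecast(returns, lam=0.94):
    """One-step-ahead EWMA variance forecast:
    sigma2[t+1] = lam * sigma2[t] + (1 - lam) * r[t]**2."""
    sigma2 = returns[0] ** 2          # seed with the first squared return
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r ** 2
    return sigma2

# Recent large returns raise the forecast; older ones are geometrically discounted.
forecast = ewma_variance_forecast([0.01, -0.005, 0.02])
```

Option-implied forecasts replace this backward-looking recursion with the market's forward-looking expectation, which is the comparison the thesis investigates.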

Abstract:

The purpose of this study was to investigate the effect of very small air gaps (less than 1 mm) on the dosimetry of small photon fields used for stereotactic treatments. Measurements were performed with optically stimulated luminescent dosimeters (OSLDs) for 6 MV photons on a Varian 21iX linear accelerator with a Brainlab μMLC attachment for square field sizes down to 6 mm × 6 mm. Monte Carlo simulations were performed using the EGSnrc C++ user code cavity. It was found that the Monte Carlo model used in this study accurately simulated the OSLD measurements on the linear accelerator. For the 6 mm field size, a 0.5 mm air gap upstream of the active area of the OSLD caused a 5.3% dose reduction relative to a Monte Carlo simulation with no air gap. A hypothetical 0.2 mm air gap caused a dose reduction of more than 2%, emphasising that even the tiniest air gaps can cause a large reduction in measured dose. The negligible effect at an 18 mm field size illustrated that the electronic disequilibrium caused by such small air gaps only affects the dosimetry of very small fields. When performing small field dosimetry, care must be taken to avoid any air gaps, which are often present when inserting detectors into solid phantoms; it is therefore recommended that very small field dosimetry be performed in liquid water. When using small photon fields, sub-millimetre air gaps can also affect patient dosimetry if they cannot be spatially resolved on a CT scan. However, the effect on the patient is debatable, as the dose reduction caused by a 1 mm air gap, starting at 19% in the first 0.1 mm behind the air gap, decreases to less than 5% after just 2 mm, and electronic equilibrium is fully re-established after just 5 mm.

Abstract:

Coate and Loury (1993) suggest that the impact of affirmative action on a negative stereotype is theoretically ambiguous, leading either to a benign equilibrium, in which affirmative action eradicates the negative stereotype and leads to equal proportional representation of the two groups, or to a patronising equilibrium, in which the stereotype persists. The current paper examines this theoretical ambiguity within the context of a laboratory experiment. Although benign and patronising equilibria are equally plausible in theory, the laboratory experiments easily replicate most features of the benign equilibrium but diverge from the theoretically predicted patronising equilibrium.

Abstract:

Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only as a temporary measure in the past, these systems are now considered the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project, and has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality.
The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks are an issue that needs to be resolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for the treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number do not perform to stipulated standards, and quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages, and, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. In recent years the use of biofilters, and particularly of peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies.
This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present: the processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent.
It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. However, despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern rests. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease. This is an issue of concern, as greywater can be considered a weak to medium sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration.
The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions, and as such their applicability is location specific. The design of systems based solely on evapotranspiration is also questionable; to ensure greater reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics, and the mechanisms of clogging mat formation by various physical, chemical and biological processes. Biological clogging is the most common process and occurs when bacterial growth or its by-products reduce the soil pore diameters; it is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process.
This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate; this, in fact, is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention, although research conclusions with regard to short rest periods are contradictory. It has been claimed that intermittent rest periods result in aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products that form as a result of aerobic processes in fact lead to even more severe clogging. It has been further recommended that rest periods should be much longer, in the range of about six months, which entails the provision of a second, alternating seepage bed. Other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that eventuates after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. Physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface have been shown to be of only short-term benefit.
Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat, as is the nature of the suspended solids. The finer particles from extended aeration systems, when compared to those from septic tanks, penetrate deeper into the soil and hence ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. It has also been shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, leading to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent, and there are a number of documented instances of septic tank related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A because no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms.
The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed, and dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable density of septic tanks in a given area. Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed, or due to surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater, and the relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions, but also that the density of these systems in a given area is treated as a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would also dictate the type of effluent disposal method to be adopted.

Abstract:

The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968). In its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision involving a choice of the extent of their income to report to the tax authorities, given a certain institutional environment represented by parameters such as the probability of detection and the penalties imposed in the event the agent is caught. While this basic framework yields important insights into tax compliance behaviour, it has some critical limitations. Specifically, it indicates a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view to addressing this issue, and to examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model, using a step-wise model building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct and eventually integrates them into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to decide initially whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report.
We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion affects the consumption smoothing ability of the agent by creating two states of nature, in which the agent is ‘caught’ or ‘not caught’, there is a possibility that their utility under certainty, when they choose not to evade, is higher than the expected utility obtained when they choose to evade. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model building procedure involve grafting the two-period models with a political economy choice into a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the model’s ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking; there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and in parameters such as the probability of detection and the penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
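The ‘evade or not’ comparison can be sketched as a check of certain utility against expected utility, using a stylised one-period Allingham and Sandmo payoff with log utility (the functional form and all parameter values are illustrative, not the thesis's calibration):

```python
import math

def utility(c):
    return math.log(c)  # log utility, a standard illustrative choice

def expected_utility_evade(income, tax_rate, undeclared, p_detect, penalty_rate):
    """Expected utility of under-reporting `undeclared` income: if caught,
    the evaded tax is paid back scaled up by penalty_rate."""
    c_not_caught = income - tax_rate * (income - undeclared)
    c_caught = income - tax_rate * income - penalty_rate * tax_rate * undeclared
    return p_detect * utility(c_caught) + (1.0 - p_detect) * utility(c_not_caught)

def choose_to_evade(income, tax_rate, undeclared, p_detect, penalty_rate):
    """Evade only if the gamble beats the certain utility of full compliance."""
    u_comply = utility(income * (1.0 - tax_rate))
    return expected_utility_evade(income, tax_rate, undeclared,
                                  p_detect, penalty_rate) > u_comply

evade_low_risk = choose_to_evade(100.0, 0.3, 50.0, p_detect=0.05, penalty_rate=1.5)
evade_high_risk = choose_to_evade(100.0, 0.3, 50.0, p_detect=0.5, penalty_rate=1.5)
```

The discrete comparison is the point: for risk-averse agents, certainty of compliance can dominate the evasion gamble, which is exactly why the ‘evade or not’ variant produces an interior mass of fully compliant agents.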

Abstract:

This spreadsheet calculates carbonate speciation using the carbonate equilibrium equations at standard conditions (T = 25°C), with ionic strength corrections. The user will typically be able to calculate the different carbonate species by entering total alkalinity and pH. The spreadsheet contains additional tools to calculate the Langelier Index for calcium and the sodium adsorption ratio (SAR) of the water; note that the latter calculation does not take the potential for calcium precipitation into account. The last tool presented here performs carbonate speciation in open systems (i.e. open to the atmosphere), taking the partial pressure of atmospheric CO2 into account.
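The core alkalinity/pH calculation can be sketched in a few lines (closed system, T = 25°C, equilibrium constants from standard tables; for clarity this sketch omits the spreadsheet's ionic-strength corrections):

```python
def carbonate_speciation(alk_eq_per_L, pH, K1=10**-6.35, K2=10**-10.33, Kw=1e-14):
    """Return ([CO2*], [HCO3-], [CO3 2-]) in mol/L from total alkalinity
    (eq/L) and pH, using Alk = [HCO3-] + 2[CO3 2-] + [OH-] - [H+]."""
    h = 10.0 ** (-pH)
    oh = Kw / h
    carb_alk = alk_eq_per_L - oh + h           # carbonate alkalinity
    hco3 = carb_alk / (1.0 + 2.0 * K2 / h)     # bicarbonate
    co3 = hco3 * K2 / h                        # carbonate
    co2 = hco3 * h / K1                        # dissolved CO2 (H2CO3*)
    return co2, hco3, co3

# A typical fresh water: 2 meq/L alkalinity at pH 8.3 is dominated by HCO3-.
co2, hco3, co3 = carbonate_speciation(2.0e-3, 8.3)
```

The open-system tool differs in that dissolved CO2 is fixed by Henry's law from the gas phase rather than derived from alkalinity.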

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Molecular dynamics simulations were carried out on single-chain models of linear low-density polyethylene in vacuum to study the effects of branch length, branch content, and branch distribution on the polymer’s crystalline structure at 300 K. The trans/gauche (t/g) ratios of the backbones of the modeled molecules were calculated and used to characterize their degree of crystallinity. The results show that the t/g ratio decreases with increasing branch content regardless of branch length and branch distribution, indicating that branch content is the key molecular parameter controlling the degree of crystallinity. Although the t/g ratios of models with the same branch content vary, such variations are of secondary importance. However, our data suggest that branch distribution (regular or random) has a significant effect on the degree of crystallinity for models containing 10 hexyl branches per 1,000 backbone carbons. The fractions of branches residing in the equilibrium crystalline structures of the models were also calculated. On average, 9.8% and 2.5% of the branches were found in the crystallites of the molecules with ethyl and hexyl branches, respectively, while ¹³C NMR experiments give respective probabilities of branch inclusion of 10% and 6% for ethyl and hexyl branches [Hosoda et al., Polymer 1990, 31, 1999–2005]. However, the degree of branch inclusion seems to be insensitive to branch content and branch distribution.
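A t/g ratio of the kind used above is typically computed by classifying each backbone dihedral as trans or gauche. The sketch below uses a common convention (trans near ±180°, gauche near ±60°, with a ±120° cutoff); the cutoff and function name are illustrative assumptions, not the paper's exact protocol.

```python
def trans_gauche_ratio(dihedrals_deg):
    """Ratio of trans to gauche backbone dihedrals.

    Illustrative convention: angles in (-180, 180];
    |angle| > 120 deg counts as trans, otherwise gauche.
    """
    trans = sum(1 for a in dihedrals_deg if abs(a) > 120.0)
    gauche = len(dihedrals_deg) - trans
    if gauche == 0:
        return float("inf")  # perfectly all-trans (fully extended) backbone
    return trans / gauche
```

A highly crystalline chain, whose backbone is mostly in the extended all-trans conformation, yields a large ratio, while increasing branch content introduces gauche defects and drives the ratio down, as the abstract describes.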

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The paper presents a detailed analysis of the collective dynamics and delayed state feedback control of a three-dimensional delayed small-world network. The trivial equilibrium of the model is first investigated, showing that the uncontrolled model exhibits complicated unbounded behavior. Three control strategies, namely a position feedback control, a velocity feedback control, and a hybrid control combining velocity and acceleration feedback, are then introduced to stabilize this unstable system. Of these three control schemes, only the hybrid control can easily stabilize the 3-D network system. With properly chosen delay and gain in the delayed feedback path, the hybrid-controlled model may have a stable equilibrium, periodic solutions resulting from a Hopf bifurcation, or complex strange attractors arising from period-doubling bifurcations. Moreover, the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions are analyzed. The results are further extended to a d-dimensional network: to stabilize a d-dimensional delayed small-world network, a complete differential feedback of order at least d − 1 is needed. This work provides a constructive suggestion for high-dimensional delayed systems.
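The basic phenomenon — an unstable trivial equilibrium stabilized by delayed state feedback, with a Hopf bifurcation lurking at larger delays — can be illustrated on a generic scalar system rather than the paper's d-dimensional network. The sketch below integrates x'(t) = a·x(t) − k·x(t − τ) with forward Euler; the values a = 1, k = 2, τ = 0.3 are illustrative and lie inside the classical (Hayes) stability region, whose boundary for these gains is at τ ≈ 0.60.

```python
def simulate_delayed_feedback(a=1.0, k=2.0, tau=0.3, dt=1e-3, t_end=20.0):
    """Euler integration of x'(t) = a*x(t) - k*x(t - tau), history x(t<=0) = 1."""
    n_delay = int(round(tau / dt))
    history = [1.0] * (n_delay + 1)   # history[0] holds x(t - tau)
    x = 1.0
    for _ in range(int(round(t_end / dt))):
        x += dt * (a * x - k * history[0])
        history.pop(0)
        history.append(x)
    return x
```

With k = 0 the trivial equilibrium is unstable and the state grows like e^{at}; with the delayed gain the state decays toward zero. Pushing τ past the stability boundary would instead produce sustained oscillations born in a Hopf bifurcation, mirroring the scenario analyzed in the paper.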

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The three-component reaction-diffusion system introduced in [C. P. Schenk et al., Phys. Rev. Lett., 78 (1997), pp. 3781–3784] has become a paradigm model in pattern formation. It exhibits a rich variety of dynamics of fronts, pulses, and spots. The front and pulse interactions range in type from weak, in which the localized structures interact only through their exponentially small tails, to strong interactions, in which they annihilate or collide and in which all components are far from equilibrium in the domains between the localized structures. Intermediate to these two extremes sits the semistrong interaction regime, in which the activator component of the front is near equilibrium in the intervals between adjacent fronts but both inhibitor components are far from equilibrium there, and hence their concentration profiles drive the front evolution. In this paper, we focus on dynamically evolving N-front solutions in the semistrong regime. The primary result is the use of a renormalization group method to rigorously derive the system of N coupled ODEs that governs the positions of the fronts. The operators associated with the linearization about the N-front solutions have N small eigenvalues, and the N-front solutions may be decomposed into a component in the space spanned by the associated eigenfunctions and a component projected onto the complement of this space. This decomposition is carried out iteratively at a sequence of times. The former projections yield the ODEs for the front positions, while the latter projections are associated with remainders that we show stay small in a suitable norm during each iteration of the renormalization group method. Our results also help extend the application of the renormalization group method from the weak interaction regime, for which it was initially developed, to the semistrong interaction regime.
The second set of results that we present is a detailed analysis of this system of ODEs, providing a classification of the possible front interactions in the cases of $N=1,2,3,4$, as well as how front solutions interact with the stationary pulse solutions studied earlier in [A. Doelman, P. van Heijster, and T. J. Kaper, J. Dynam. Differential Equations, 21 (2009), pp. 73–115; P. van Heijster, A. Doelman, and T. J. Kaper, Phys. D, 237 (2008), pp. 3335–3368]. Moreover, we present some results on the general case of N-front interactions.
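The full three-component system is beyond a short sketch, but the building block of the N-front solutions discussed above is a bistable front in the activator component, which in isolation obeys an Allen–Cahn/Nagumo-type equation u_t = u_xx + u − u³ with the standing-front solution u(x) = tanh(x/√2). The following finite-difference relaxation (illustrative grid and parameters, not taken from the paper) recovers that front from a step initial condition:

```python
import math

def relax_front(L=10.0, n=201, t_end=10.0):
    """Relax u_t = u_xx + u - u^3 from a step toward tanh(x / sqrt(2))."""
    dx = 2.0 * L / (n - 1)
    dt = 0.2 * dx * dx   # explicit-Euler stability requires dt <= dx^2 / 2
    xs = [-L + i * dx for i in range(n)]
    # symmetric step: -1 on the left half, +1 on the right, 0 at the center
    u = [-1.0 if i < n // 2 else (0.0 if i == n // 2 else 1.0) for i in range(n)]
    for _ in range(int(round(t_end / dt))):
        new = u[:]
        for i in range(1, n - 1):
            lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx)
            new[i] = u[i] + dt * (lap + u[i] - u[i] ** 3)
        u = new   # Dirichlet ends stay fixed: u(-L) = -1, u(L) = +1
    return xs, u
```

In the semistrong regime it is the slowly varying inhibitor fields between such fronts that break the translation invariance visible here and drive the front positions, which is exactly what the N coupled ODEs derived in the paper capture.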

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Much has been written on Michel Foucault’s reluctance to clearly delineate a research method, particularly with respect to genealogy (Harwood 2000; Meadmore, Hatcher, & McWilliam 2000; Tamboukou 1999). Foucault (1994, p. 288) himself disliked prescription, stating, “I take care not to dictate how things should be”, and wrote provocatively to disrupt equilibrium and certainty, so that “all those who speak for others or to others” no longer know what to do. It is doubtful, however, that Foucault ever intended for researchers to be stricken by that malaise to the point of being unwilling to make an intellectual commitment to methodological possibilities. Taking criticism of “Foucauldian” discourse analysis as a convenient point of departure to discuss the objectives of poststructural analyses of language, this paper develops what might be called a discursive analytic: a methodological plan for approaching the analysis of discourses through the location of statements that function with constitutive effects.