978 results for Barrier function


Relevance:

20.00%

Publisher:

Abstract:

The perturbation treatment previously given is extended to explain the process of hydrogen abstraction from various hydrogen donor molecules by the triplet nπ* state of ketones or by the ground state of the alkyl or alkoxy radical. The results suggest that the reaction is accelerated as the ionization energy of the donor bonds decreases and is not influenced by the bond strength of those bonds. The activation barrier in such reactions arises from a weakening of the charge resonance term as the ionization energy of the donor bond increases.
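The abstract gives no explicit formulas; as a generic schematic of the dependence it describes (not necessarily the authors' exact perturbation expression), the second-order charge-transfer stabilization of the abstraction transition state is often written as below, where I_D is the ionization energy of the donor bond, E_A the electron affinity of the abstracting species, and β the coupling matrix element (notation chosen here):

```latex
\[
  \Delta E_{\mathrm{CT}} \;\approx\; -\,\frac{\lvert \beta \rvert^{2}}{I_{\mathrm{D}} - E_{\mathrm{A}}}
\]
```

As I_D grows, this charge-resonance stabilization weakens, which is the sense in which the abstract attributes the activation barrier to the ionization energy of the donor bond rather than to its bond strength.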

Relevance:

20.00%

Publisher:

Abstract:

Variable-temperature proton and ambient-temperature carbon-13 NMR spectra of S-methyl dithiocarbamate esters have been recorded. The results of the theoretical energy calculations (CNDO/2 and EHT types), together with the experimental data, have been interpreted in terms of the molecular conformations. The barrier heights for rotation about the thioamide C—N bond are calculated using the CNDO/2 method, and the results are discussed in terms of the computed charge densities and bond orders.
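The abstract reports calculated (CNDO/2) barriers but no numbers; for readers unfamiliar with how such C—N rotational barriers are commonly estimated from variable-temperature NMR experimentally, here is a minimal sketch of the standard coalescence estimate based on the Eyring equation. The temperature and shift separation used are hypothetical illustration values, not data from this study.

```python
import math

R = 8.314462618     # gas constant, J mol^-1 K^-1
kB = 1.380649e-23   # Boltzmann constant, J K^-1
h = 6.62607015e-34  # Planck constant, J s

def rotation_barrier(Tc, delta_nu):
    """Free energy of activation (kJ/mol) for equally populated two-site exchange,
    from the coalescence temperature Tc (K) and the slow-exchange peak
    separation delta_nu (Hz)."""
    k_c = math.pi * delta_nu / math.sqrt(2)                 # exchange rate at coalescence
    return R * Tc * math.log(kB * Tc / (h * k_c)) / 1000.0  # Eyring equation

# Hypothetical inputs: coalescence at 320 K, 15 Hz peak separation at slow exchange
print(f"Estimated rotational barrier: {rotation_barrier(320.0, 15.0):.1f} kJ/mol")
```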

Relevance:

20.00%

Publisher:

Abstract:

The current state of the practice in Blackspot Identification (BSI) utilizes safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is a result of multiple distinct crash generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies when trying either to explain or to predict crash frequencies across sites. Instead, current practice employs models that imply that a single underlying crash generating process exists. This model mis-specification may lead to correlating crashes with the incorrect sources of contributing factors (e.g. concluding a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study aims to propose a latent class model consistent with a multiple crash process theory, and to investigate the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated assuming that crashes arise from two distinct risk generating processes: engineering and unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared to an Empirical Bayes Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures illustrates significantly improved performance of the proposed model compared to the NB model. The detection of blackspots was also improved when compared to the EB-NB model. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
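The abstract describes, but does not specify, the latent class formulation. As a deliberately simplified, non-Bayesian sketch of the underlying idea (a two-class mixture of crash-generating processes fitted to site-level counts), the following fits a two-component Poisson mixture by EM to hypothetical data; the actual BLC model uses Bayesian estimation with priors and richer covariate structure, so the function and data below are illustrative assumptions only.

```python
import numpy as np
from scipy.special import gammaln

def fit_two_class_poisson_mixture(counts, n_iter=200):
    """EM fit of a two-component Poisson mixture to site-level crash counts,
    treating each site as belonging to one of two latent crash-generating classes."""
    y = np.asarray(counts, dtype=float)
    lam = np.array([0.5, 1.5]) * max(y.mean(), 1e-6)   # initial class rates
    pi = np.array([0.5, 0.5])                          # initial class weights
    for _ in range(n_iter):
        # E-step: posterior probability that each site belongs to each class
        log_lik = y[:, None] * np.log(lam) - lam - gammaln(y + 1)[:, None]
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and rates from the responsibilities
        pi = resp.mean(axis=0)
        lam = (resp * y[:, None]).sum(axis=0) / resp.sum(axis=0)
    return pi, lam, resp

# Hypothetical network: most sites generate few crashes, a minority generate many
rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(1.0, 400), rng.poisson(6.0, 100)])
pi, lam, resp = fit_two_class_poisson_mixture(counts)
print("class weights:", pi.round(2), "class rates:", lam.round(2))
# Sites with high posterior weight on the high-rate class are blackspot candidates.
```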

Relevance:

20.00%

Publisher:

Abstract:

The transfer from aluminum to copper metallization and the decreasing feature size of integrated circuit devices generated a need for a new diffusion barrier process. Copper metallization comprised an entirely new process flow with new materials, such as low-k insulators and etch stoppers, which made diffusion barrier integration demanding. Atomic layer deposition (ALD) was seen as one of the most promising techniques for depositing copper diffusion barriers for future devices. ALD was used to deposit titanium nitride, tungsten nitride, and tungsten nitride carbide diffusion barriers. Titanium nitride was deposited with a conventional process, and also with a new in situ reduction process in which titanium metal was used as the reducing agent. Tungsten nitride was deposited with a well-known process from tungsten hexafluoride and ammonia, but tungsten nitride carbide, as a new material, required a new process chemistry. In addition to material properties, process integration for copper metallization was studied by performing compatibility experiments on different surface materials. Based on these studies, the titanium nitride and tungsten nitride processes were found to be incompatible with copper metal. However, the tungsten nitride carbide film was compatible with copper and exhibited the most promising properties for integration into the copper metallization scheme. Process scale-up to 300 mm wafers comprised extensive film uniformity studies, which improved understanding of the sources of non-uniformity in ALD growth and of the process-specific requirements for ALD reactor design. Based on these studies, it was discovered that the TiN process from titanium tetrachloride and ammonia required a perpendicular-flow reactor design for successful scale-up. The copper metallization scheme also includes copper oxide reduction prior to the barrier deposition and copper seed deposition prior to the copper metal deposition. A simple copper oxide reduction process was developed in which the substrate was exposed to a gaseous reducing agent under vacuum at elevated temperature. Because the reduction was efficient enough to reduce a thick copper oxide film, the process was also considered an alternative method for making the copper seed film via copper oxide reduction.

Relevance:

20.00%

Publisher:

Abstract:

The goal of this research is to understand the function of allelic variation in genes underpinning the stay-green drought adaptation trait in sorghum, in order to enhance yield in water-limited environments. Stay-green, a delayed leaf senescence phenotype in sorghum, is primarily an emergent consequence of an improved balance between the supply and demand of water. Positional and functional fine-mapping of candidate genes associated with stay-green in sorghum is the focus of an international research partnership between Australian (UQ/DAFFQ) and US (Texas A&M University) scientists. Stay-green was initially mapped to four chromosomal regions (Stg1, Stg2, Stg3, and Stg4) by a number of research groups in the US and Australia. Physiological dissection of near-isolines containing single introgressions of Stg QTL (Stg1-4) indicates that these QTL reduce water demand before flowering by constricting the size of the canopy, thereby increasing water availability during grain filling and, ultimately, grain yield. Stg and root angle QTL are also co-located and, together with crop water use data, suggest a role for roots in the stay-green phenomenon. Candidate genes have been identified in Stg1-4, including genes from the PIN family of auxin efflux carriers in Stg1 and Stg2, with 10 of 11 PIN genes in sorghum co-locating with Stg QTL. Preliminary RNA expression profiling studies have found modified expression of some of these PIN candidates in stay-green compared with senescent types. Further proof-of-function studies are underway, including comparative genomics, SNP analysis to assess diversity at candidate genes, reverse genetics, and transformation.

Relevance:

20.00%

Publisher:

Abstract:

A composition operator is a linear operator between spaces of analytic or harmonic functions on the unit disk, which precomposes a function with a fixed self-map of the disk. A fundamental problem is to relate properties of a composition operator to the function-theoretic properties of the self-map. In recent decades these operators have been studied very actively in connection with various function spaces. The study of composition operators lies in the intersection of two central fields of mathematical analysis: function theory and operator theory. This thesis consists of four research articles and an overview. In the first three articles the weak compactness of composition operators is studied on certain vector-valued function spaces. A vector-valued function takes its values in some complex Banach space. In the first and third articles, sufficient conditions are given for a composition operator to be weakly compact on different versions of vector-valued BMOA spaces. In the second article, characterizations are given for the weak compactness of a composition operator on harmonic Hardy spaces and on spaces of Cauchy transforms, provided the functions take values in a reflexive Banach space. Composition operators are also considered on certain weak versions of the above function spaces. In addition, the relationship between different vector-valued function spaces is analyzed. In the fourth article, weighted composition operators are studied on the scalar-valued BMOA space and its subspace VMOA. A weighted composition operator is obtained by first applying a composition operator and then a pointwise multiplier. A complete characterization is given for the boundedness and compactness of a weighted composition operator on BMOA and VMOA. Moreover, the essential norm of a weighted composition operator on VMOA is estimated. These results generalize many previously known results about composition operators and pointwise multipliers on these spaces.
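In standard notation (chosen here, since the abstract uses none), with φ an analytic self-map of the unit disk D and u an analytic function on D acting as a pointwise multiplier, the composition operator and weighted composition operator described above are

```latex
\[
  C_\varphi f = f \circ \varphi ,
  \qquad
  (u C_\varphi) f = u \cdot ( f \circ \varphi ) .
\]
```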

Relevance:

20.00%

Publisher:

Abstract:

A rare opportunity to test hypotheses about potential fishery benefits of large-scale closures was initiated in July 2004, when an additional 28.4% of the 348 000 km2 Great Barrier Reef (GBR) region of Queensland, Australia, was closed to all fishing. Advice to the Australian and Queensland governments that supported this initiative predicted these additional closures would generate minimal (10%) initial reductions in both catch and landed value within the GBR area, with recovery of catches becoming apparent after three years. To test these predictions, commercial fisheries data from the GBR area and from the two adjacent (non-GBR) areas of Queensland were compared for the periods immediately before and after the closures were implemented. The observed means for total annual catch and value within the GBR declined from pre-closure (2000–2003) levels of 12 780 Mg and Australian $160 million to initial post-closure (2005–2008) levels of 8143 Mg and $102 million; decreases of 35% and 36% respectively. Because the reference areas in the non-GBR had minimal changes in catch and value, the beyond-BACI (before, after, control, impact) analyses estimated initial net reductions within the GBR of 35% for both total catch and value. There was no evidence of recovery in total catch levels, or of any comparative improvement in catch rates within the GBR, nine years after implementation. These results are not consistent with the advice to governments that the closures would have minimal initial impacts and would rapidly generate benefits to fisheries in the GBR through increased juvenile recruitment and adult spillovers. Instead, the absence of evidence of recovery in catches to date supports an alternative hypothesis: where there is already effective fisheries management, closing areas to all fishing will generate reductions in overall catches similar to the percentage of the fished area that is closed.
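As a much-simplified illustration of the before–after–control–impact logic described above (not the beyond-BACI analysis actually used in the study), the net change attributable to the closures can be sketched as a ratio of before/after ratios. The GBR catch means are the ones quoted in the abstract; the control-area figures below are hypothetical stand-ins for the non-GBR reference areas, which the abstract describes only as having minimal changes.

```python
# Before/after mean annual catch (Mg) in the impact (GBR) area, from the abstract
gbr_before, gbr_after = 12780.0, 8143.0
# Hypothetical before/after catch in a reference (non-GBR) control area
control_before, control_after = 5000.0, 4900.0

impact_ratio = gbr_after / gbr_before            # change inside the closed region
control_ratio = control_after / control_before   # background change outside it
net_change = impact_ratio / control_ratio - 1.0  # change attributable to the closures

print(f"Net change in GBR catch: {net_change:.0%}")  # about -35% with these inputs
```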

Relevance:

20.00%

Publisher:

Abstract:

From 2012 to 2014 the Queensland Government delivered an extension project to help sugarcane growers adopt best management practices to reduce pollutant loss to the Great Barrier Reef. Coutts J&R were engaged to measure progress towards the project's engagement, capacity gain and practice change targets. The monitoring and evaluation program comprised a database, post-workshop evaluations, and grower and advisor surveys. Coutts J&R conducted an independent phone survey with 97 growers, a subset of the 900 growers engaged in extension activities. Of those surveyed, 64% stated they had made practice changes. Adoption was higher (74%) among growers engaged in one-on-one extension than among growers involved only in group-based activities (36%). Overall, the project reported that 41% (+/-10%, 95% confidence) of growers engaged made a practice change. The structured monitoring and evaluation program, including independent surveys, was essential to quantify practice change and demonstrate the effectiveness of extension in contributing to water quality improvement.
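For readers unfamiliar with where a figure like 41% (+/-10%, 95% confidence) comes from, a margin of that size is what the standard normal-approximation confidence interval for a proportion gives with a sample of 97 growers. The sketch below is only an illustration of that calculation; the evaluators' exact weighting across engagement types is not stated in the abstract.

```python
import math

n = 97          # growers in the independent phone survey
p_hat = 0.41    # estimated proportion of engaged growers making a practice change
z = 1.96        # two-sided 95% normal quantile

margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"95% confidence interval: {p_hat:.0%} +/- {margin:.0%}")  # about 41% +/- 10%
```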

Relevance:

20.00%

Publisher:

Abstract:

Fisheries management agencies around the world collect age data for the purpose of assessing the status of natural resources in their jurisdiction. Estimates of mortality rates are key information for assessing the sustainability of fish stock exploitation. In contrast to medical research or manufacturing, where survival analysis is routinely applied to estimate failure rates, survival analysis has seldom been applied in fisheries stock assessment, despite the similar purposes of these fields of applied statistics. In this paper, we developed hazard functions to model the dynamics of an exploited fish population. These functions were used to estimate all parameters necessary for stock assessment (including natural and fishing mortality rates as well as gear selectivity) by maximum likelihood, using age data from a sample of the catch. This novel application of survival analysis to fisheries stock assessment was tested by Monte Carlo simulations to verify that it provided unbiased estimates of the relevant quantities. The method was applied to data from the Queensland (Australia) sea mullet (Mugil cephalus) commercial fishery collected between 2007 and 2014. It provided, for the first time, an estimate of the natural mortality affecting this stock: 0.22 ± 0.08 year⁻¹.
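The abstract does not give the model equations; the following is a deliberately simplified sketch of the kind of catch-at-age likelihood it describes, not the authors' actual formulation. A total hazard Z(a) = M + F·s(a) combines natural mortality M, fishing mortality F, and a logistic gear selectivity s(a); the density of ages observed in the catch is taken proportional to F·s(a)·S(a), where S(a) is the survivorship implied by the hazard. All parameter values and the simulated sample below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def catch_age_density(params, grid):
    """Unnormalized density of ages appearing in the catch for a hazard
    Z(a) = M + F*s(a), with logistic gear selectivity s(a)."""
    M, F, a50, delta = params
    s = 1.0 / (1.0 + np.exp(-(grid - a50) / delta))   # gear selectivity at age
    Z = M + F * s                                     # total mortality (hazard) rate
    cumhaz = np.concatenate(([0.0], np.cumsum(0.5 * (Z[1:] + Z[:-1]) * np.diff(grid))))
    return F * s * np.exp(-cumhaz)                    # caught-at-age density (up to a constant)

def neg_log_lik(params, ages, grid):
    """Negative log-likelihood of the observed ages under the hazard model."""
    if min(params) <= 0.0:
        return np.inf
    dens = catch_age_density(params, grid)
    norm = np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(grid))
    return -np.sum(np.log(np.interp(ages, grid, dens) / norm + 1e-12))

# Simulate a hypothetical catch-at-age sample (a toy version of a Monte Carlo test)
rng = np.random.default_rng(1)
true_params = (0.22, 0.5, 3.0, 0.5)                   # M, F, a50, delta (assumed)
grid = np.linspace(0.0, 30.0, 600)
dens = catch_age_density(true_params, grid)
ages = rng.choice(grid, size=2000, p=dens / dens.sum())

fit = minimize(neg_log_lik, x0=[0.3, 0.3, 2.0, 1.0], args=(ages, grid),
               method="Nelder-Mead", options={"maxiter": 5000})
print("estimated M, F, a50, delta:", np.round(fit.x, 2))
```

Note that a single age sample separates M and F only weakly; this sketch is meant to show the likelihood structure rather than reproduce the paper's estimator.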

Relevance:

20.00%

Publisher:

Abstract:

Tools known as maximal functions are frequently used in harmonic analysis when studying the local behaviour of functions. Typically they measure the suprema of local averages of non-negative functions. It is essential that the size (more precisely, the L^p-norm) of the maximal function is comparable to the size of the original function. When dealing with families of operators between Banach spaces we are often forced to replace the uniform bound with the larger R-bound. Hence such a replacement is also needed in the maximal function for functions taking values in spaces of operators. More specifically, the suprema of norms of local averages (i.e. their uniform bound in the operator norm) have to be replaced by their R-bound. This procedure gives us the Rademacher maximal function, which was introduced by Hytönen, McIntosh and Portal in order to prove a certain vector-valued Carleson embedding theorem. They noticed that the sizes of an operator-valued function and its Rademacher maximal function are comparable for many common range spaces, but not for all. Certain requirements on the type and cotype of the spaces involved are necessary for this comparability, henceforth referred to as the "RMF-property". It was shown that other objects and parameters appearing in the definition, such as the domain of the functions and the exponent p of the norm, make no difference to this. After a short introduction to randomized norms and geometry in Banach spaces, we study the Rademacher maximal function on Euclidean spaces. The requirements on type and cotype are considered, providing examples of spaces without RMF. L^p-spaces are shown to have RMF not only for p greater than or equal to 2 (when it is trivial) but also for 1 < p < 2. A dyadic version of Carleson's embedding theorem is proven for scalar- and operator-valued functions. As the analysis with dyadic cubes can be generalized to filtrations on sigma-finite measure spaces, we consider the Rademacher maximal function in this case as well. It turns out that the RMF-property is independent of the filtration and the underlying measure space, and that it is enough to consider very simple filtrations known as Haar filtrations. Scalar- and operator-valued analogues of Carleson's embedding theorem are also provided. With the RMF-property proven independent of the underlying measure space, we can use probabilistic notions and formulate it for martingales. Following a similar result for UMD-spaces, a weak type inequality is shown to be (necessary and) sufficient for the RMF-property. The RMF-property is also studied using concave functions, giving yet another proof of its independence from various parameters.
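As a point of reference for the "suprema of local averages" mentioned above, the scalar prototype that the Rademacher maximal function generalizes is the classical Hardy–Littlewood maximal function (notation chosen here):

```latex
\[
  Mf(x) \;=\; \sup_{r>0} \frac{1}{\lvert B(x,r)\rvert} \int_{B(x,r)} \lvert f(y)\rvert \, dy ,
  \qquad
  \|f\|_{L^{p}} \le \|Mf\|_{L^{p}} \lesssim \|f\|_{L^{p}} \quad (1 < p \le \infty) .
\]
```

The two-sided comparison of L^p-norms is exactly the comparability of sizes that the Rademacher maximal function is designed to retain in the operator-valued setting.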

Relevance:

20.00%

Publisher:

Abstract:

An iterative method of constructing sections of the game surfaces from the players' extremal trajectory maps is discussed. Barrier sections are presented for aircraft pursuit-evasion at constant altitude, with one aircraft flying at sustained speed and the other varying its speed.

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Schizophrenia is a severe mental disorder with multiple psychopathological domains being affected. Several lines of evidence indicate that cognitive impairment serves as the key component of schizophrenia psychopathology. Although there have been a multitude of cognitive studies in schizophrenia, there are many conflicting results. We reasoned that this could be due to individual differences among the patients (i.e. variation in the severity of positive vs. negative symptoms), different task designs, and/or the administration of different antipsychotics.

Methods: We thus review existing data concentrating on these dimensions, specifically in relation to dopamine function. We focus on the most commonly used cognitive domains: learning, working memory, and attention.

Results: We found that the type of cognitive domain under investigation, medication state and type, and severity of positive and negative symptoms can explain the conflicting results in the literature.

Conclusions: This review points to future studies investigating individual differences among schizophrenia patients in order to reveal the exact relationship between cognitive function, clinical features, and antipsychotic treatment.

Relevance:

20.00%

Publisher:

Abstract:

As accountants, we are all familiar with the SUM function, which calculates the sum of a range of numbers. However, there are instances where we might want to sum numbers in a given range based on a specified criterion. In such cases the SUMIF function can achieve this objective.
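As a brief illustration (the cell references and the "Travel" criterion are made up for this example), the function takes a range to test, a criterion, and an optional range to sum:

```
SUMIF(range, criteria, [sum_range])

=SUMIF(A2:A100, "Travel", B2:B100)
```

Here the amounts in B2:B100 are summed only for rows whose category in A2:A100 equals "Travel"; if sum_range is omitted, the tested range itself is summed.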

Relevance:

20.00%

Publisher:

Abstract:

The recent trend towards minimizing the interconnections in large scale integration (LSI) circuits has led to intensive investigation into the development of ternary circuits and the improvement of their design. The ternary multiplexer is a convenient and useful logic module which can be used as a basic building block in the design of a ternary system. This paper discusses a systematic procedure for the simplification and realization of ternary functions using ternary multiplexers as building blocks. Both single-level and multilevel multiplexing techniques are considered. The importance of the design procedure is highlighted by considering two specific applications, namely the development of a ternary adder/subtractor and a TCD-to-ternary converter.
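As a minimal sketch of the single-level idea described above (the example function, names, and radix-3 encoding are assumptions made here, not taken from the paper): a ternary multiplexer routes one of three data inputs to the output according to a ternary select line, and a ternary function can be realized by decomposing it on one variable and feeding its three cofactors to the multiplexer inputs.

```python
def tmux(select, i0, i1, i2):
    """Ternary multiplexer: route data input i_select (values in {0, 1, 2}) to the output."""
    return (i0, i1, i2)[select]

# Hypothetical example function: f(a, b) = (a + b) mod 3 (the sum digit of a ternary half-adder)
def f(a, b):
    return (a + b) % 3

def realize_with_tmux(a, b):
    """Single-level realization of f: decompose on a, so the three data inputs are the
    cofactors f(0, b), f(1, b), f(2, b) and a drives the select line."""
    return tmux(a, f(0, b), f(1, b), f(2, b))

# Check that the multiplexer realization reproduces the full ternary truth table of f
assert all(realize_with_tmux(a, b) == f(a, b) for a in range(3) for b in range(3))
print("single-level ternary multiplexer realization verified")
```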