928 results for Lower Semicontinuous Function
Abstract:
Fisheries management agencies around the world collect age data for the purpose of assessing the status of natural resources in their jurisdiction. Estimates of mortality rates represent key information for assessing the sustainability of fish stock exploitation. In contrast to medical research or manufacturing, where survival analysis is routinely applied to estimate failure rates, survival analysis has seldom been applied in fisheries stock assessment despite the similar purposes of these fields of applied statistics. In this paper, we developed hazard functions to model the dynamics of an exploited fish population. These functions were used to estimate all parameters necessary for stock assessment (including natural and fishing mortality rates as well as gear selectivity) by maximum likelihood using age data from a sample of the catch. This novel application of survival analysis to fisheries stock assessment was tested by Monte Carlo simulations to verify that it provided unbiased estimates of the relevant quantities. The method was applied to data from the Queensland (Australia) sea mullet (Mugil cephalus) commercial fishery collected between 2007 and 2014. It provided, for the first time, an estimate of the natural mortality affecting this stock: 0.22 ± 0.08 year⁻¹.
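A minimal Python sketch of the kind of hazard-based likelihood this approach leads to, assuming discrete ages, constant natural mortality M, and a logistic gear selectivity; it is only an illustration under those assumptions, not the authors' model, and every parameter name and number is made up.

```python
import numpy as np
from scipy.optimize import minimize

def selectivity(age, a50, delta):
    """Hypothetical logistic gear selectivity (a50 = age at 50% selection)."""
    return 1.0 / (1.0 + np.exp(-(age - a50) / delta))

def neg_log_likelihood(params, ages, max_age=20):
    """Negative log-likelihood of a sample of catch ages under a total hazard
    M + F * selectivity(age): natural plus selective fishing mortality."""
    M, F, a50, delta = params
    if M <= 0 or F <= 0 or delta <= 0:
        return np.inf
    grid = np.arange(0, max_age + 1)
    hazard = M + F * selectivity(grid, a50, delta)
    survival = np.exp(-np.concatenate(([0.0], np.cumsum(hazard[:-1]))))
    # Relative probability that a fish sampled from the catch has each age
    catch_density = F * selectivity(grid, a50, delta) * survival
    catch_density /= catch_density.sum()
    return -np.sum(np.log(catch_density[ages]))

# Illustrative use with made-up integer ages sampled from the catch
ages = np.random.default_rng(1).integers(2, 12, size=500)
fit = minimize(neg_log_likelihood, x0=[0.3, 0.5, 3.0, 0.5], args=(ages,),
               method="Nelder-Mead")
M_hat, F_hat, a50_hat, delta_hat = fit.x
```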
Abstract:
Tools known as maximal functions are frequently used in harmonic analysis when studying the local behaviour of functions. Typically they measure the suprema of local averages of non-negative functions. It is essential that the size (more precisely, the L^p-norm) of the maximal function is comparable to the size of the original function. When dealing with families of operators between Banach spaces we are often forced to replace the uniform bound with the larger R-bound. Hence such a replacement is also needed in the maximal function for functions taking values in spaces of operators. More specifically, the suprema of the norms of local averages (i.e. their uniform bound in the operator norm) have to be replaced by their R-bound. This procedure gives us the Rademacher maximal function, which was introduced by Hytönen, McIntosh and Portal in order to prove a certain vector-valued Carleson's embedding theorem. They noticed that the sizes of an operator-valued function and its Rademacher maximal function are comparable for many common range spaces, but not for all. Certain requirements on the type and cotype of the spaces involved are necessary for this comparability, henceforth referred to as the “RMF-property”. It was shown that other objects and parameters appearing in the definition, such as the domain of the functions and the exponent p of the norm, make no difference to this. After a short introduction to randomized norms and geometry in Banach spaces we study the Rademacher maximal function on Euclidean spaces. The requirements on the type and cotype are considered, providing examples of spaces without RMF. L^p-spaces are shown to have RMF not only for p ≥ 2 (where it is trivial) but also for 1 < p < 2. A dyadic version of Carleson's embedding theorem is proven for scalar- and operator-valued functions. As the analysis with dyadic cubes can be generalized to filtrations on sigma-finite measure spaces, we consider the Rademacher maximal function in this case as well. It turns out that the RMF-property is independent of the filtration and the underlying measure space and that it is enough to consider very simple ones known as Haar filtrations. Scalar- and operator-valued analogues of Carleson's embedding theorem are also provided. With the RMF-property proven independent of the underlying measure space, we can use probabilistic notions and formulate it for martingales. Following a similar result for UMD-spaces, a weak type inequality is shown to be (necessary and) sufficient for the RMF-property. The RMF-property is also studied using concave functions, giving yet another proof of its independence from various parameters.
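For orientation, the Rademacher maximal function can be written schematically as below, with the uniform bound (supremum of operator norms) of the dyadic averages replaced by their R-bound; this is a generic form of the dyadic definition, and the exact formulation used in the thesis may differ.

```latex
% Schematic dyadic definition: averages of an operator-valued f and their R-bound
\[
  \langle f \rangle_Q \;=\; \frac{1}{|Q|}\int_Q f(y)\,\mathrm{d}y ,
  \qquad
  M_R f(x) \;=\; \mathcal{R}\bigl(\{\,\langle f\rangle_Q : Q \ni x,\ Q \text{ dyadic}\,\}\bigr),
\]
% and the RMF-property asks for the corresponding L^p-bound
\[
  \| M_R f \|_{L^p} \;\lesssim\; \| f \|_{L^p}.
\]
```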
Abstract:
We propose a new type of high-order element that incorporates mesh-free Galerkin formulations into the framework of the finite element method. Traditional polynomial interpolation is replaced by mesh-free interpolation in the present high-order elements, and the strain smoothing technique is used to integrate the governing equations over smoothing cells. The properties of the high-order elements, which are influenced by the basis function of the mesh-free interpolation and by the boundary nodes, are discussed through numerical examples. It is found that the basis function has a significant influence on the computational accuracy and on the upper and lower bounds of the energy norm, while the strain smoothing technique retains the softening phenomenon. The new high-order elements perform well when quadratic basis functions are used in the mesh-free interpolation, and they prove advantageous in adaptive mesh and node refinement schemes. Furthermore, they are less sensitive to element quality because they use mesh-free interpolation and obey the Weakened Weak (W2) formulation introduced in [3, 5].
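The strain smoothing referred to above is, in its usual cell-based form, an averaging of the compatible strain over each smoothing cell that the divergence theorem converts into a boundary integral of displacements; a generic statement of that step (not necessarily the exact variant used here) is:

```latex
% Cell-based strain smoothing over a smoothing cell \Omega_C of area A_C with
% boundary \Gamma_C and outward normal n: the domain average of the compatible
% strain becomes a boundary integral of the displacement field u.
\[
  \tilde{\varepsilon}_{ij}(\mathbf{x}_C)
  \;=\; \frac{1}{A_C}\int_{\Omega_C} \varepsilon_{ij}(\mathbf{x})\,\mathrm{d}\Omega
  \;=\; \frac{1}{2A_C}\int_{\Gamma_C}\bigl(u_i\,n_j + u_j\,n_i\bigr)\,\mathrm{d}\Gamma .
\]
```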
Abstract:
Ammonia volatilised and re-deposited to the landscape is an indirect source of N2O emissions. This study established a relationship between N2O emissions, low-magnitude NH4 deposition (0–30 kg N ha⁻¹), and soil moisture content in two soils using in-vessel incubations. Emissions from the clay soil peaked (< 0.002 g N (g soil)⁻¹ min⁻¹) at 85–93% WFPS (water-filled pore space), increasing to a plateau as the remaining mineral-N increased. Peak N2O emissions from the sandy soil were much lower (< 5 × 10⁻⁵ μg N (g soil)⁻¹ min⁻¹) and occurred at about 60% WFPS, with an indistinct relationship with increasing resident mineral-N owing to the low rate of nitrification in that soil. Microbial community and respiration data indicated that the clay soil was dominated by denitrifiers and was more biologically active than the sandy soil. However, the clay soil also had substantial nitrifier communities even under peak emission conditions. A process-based mathematical denitrification model was well suited to the clay soil data when all mineral-N was assumed to be nitrified (R² = 90%), providing a substrate for denitrification. The model was not well suited to the sandy soil, where nitrification was much less complete. A prototype relationship representing mineral-N pool conversions (NO3⁻ and NH4⁺) was proposed based on time, pool concentrations, moisture relationships, and soil rate constants (preliminary testing only). A threshold for mineral-N was observed: N2O emission did not occur from the clay soil for mineral-N < 70 mg (kg soil)⁻¹, suggesting that soil N availability controls indirect N2O emissions. This laboratory process investigation challenges the IPCC approach, which predicts indirect emissions from atmospheric N deposition alone.
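A toy Python sketch of a mineral-N pool-conversion model of the kind the prototype relationship suggests (NH4⁺ nitrified to NO3⁻, NO3⁻ denitrified under wet conditions with a small N2O fraction); the functional form, rate constants, and moisture response are illustrative placeholders, not the study's fitted relationships.

```python
import numpy as np

def mineral_n_pools(nh4_0, no3_0, wfps, k_nit=0.05, k_den=0.02,
                    n2o_frac=0.02, dt=1.0, steps=200):
    """Hypothetical first-order two-pool model (NH4+ -> NO3- -> N2O):
    all constants below are placeholders for illustration only."""
    # Simple moisture response: denitrification favoured at high water-filled pore space
    f_moist = max(0.0, (wfps - 0.6) / 0.4)
    nh4, no3, n2o = nh4_0, no3_0, 0.0
    for _ in range(steps):
        nitrified   = k_nit * nh4 * dt            # NH4+ oxidised to NO3-
        denitrified = k_den * no3 * f_moist * dt  # NO3- reduced under wet conditions
        nh4 -= nitrified
        no3 += nitrified - denitrified
        n2o += n2o_frac * denitrified             # small fraction emitted as N2O
    return nh4, no3, n2o

# Illustrative comparison of a wet clay-like case and a drier sandy-like case
print(mineral_n_pools(nh4_0=30.0, no3_0=5.0, wfps=0.9))
print(mineral_n_pools(nh4_0=30.0, no3_0=5.0, wfps=0.6))
```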
Abstract:
Clays could underpin a viable agricultural greenhouse gas (GHG) abatement technology given their affinity for nitrogen and carbon compounds. We provide the first investigation into the efficacy of clays for decreasing agricultural nitrogen GHG emissions (i.e., N2O and NH3). Via laboratory experiments using an automated closed-vessel analysis system, we tested the capacity of two clays (vermiculite and bentonite) to decrease N2O and NH3 emissions and organic carbon losses from livestock manures (beef, pig, poultry, and egg layer) incorporated into an agricultural soil. Clay addition levels varied up to a maximum clay-to-manure ratio of 1:1 (dry weight). Cumulative gas emissions were modeled using the biological logistic function, with 15 of 16 treatments successfully fitted (P < 0.05) by this model. When assessing all of the manures together, NH3 emissions were lower (×2) at the highest clay addition level compared with no clay addition, but this difference was not significant (P = 0.17). Nitrous oxide emissions were significantly lower (×3; P < 0.05) at the highest clay addition level compared with no clay addition. When assessing manures individually, we observed generally decreasing trends in NH3 and N2O emissions with increasing clay addition, albeit with widely varying statistical significance between manure types. Most of the treatments also showed strong evidence of increased C retention with increasing clay addition, with up to 10 times more carbon retained in treatments containing clay compared with treatments containing no clay. This preliminary assessment of the efficacy of clays to mitigate agricultural GHG emissions indicates strong promise.
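The "biological logistic function" used for the cumulative emission curves is presumably of the familiar three-parameter sigmoidal form; a minimal fitting sketch in Python, assuming that form and using made-up data and illustrative parameter names:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, c_max, k, t_mid):
    """Three-parameter logistic curve for cumulative gas emission
    (c_max = asymptote, k = rate constant, t_mid = inflection time)."""
    return c_max / (1.0 + np.exp(-k * (t - t_mid)))

# Illustrative (made-up) cumulative N2O data over a 30-day incubation
t = np.linspace(0, 30, 31)
obs = logistic(t, 120.0, 0.4, 12.0) + np.random.default_rng(0).normal(0, 3, t.size)

# Least-squares fit of the logistic model to the cumulative series
popt, _ = curve_fit(logistic, t, obs, p0=[obs.max(), 0.1, t.mean()])
c_max, k, t_mid = popt
```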
Abstract:
This paper examines the possibilities for interfuel substitution in Australia in view of the need to shift towards a cleaner mix of fuels and technologies to meet future energy demand and environmental goals. The translog cost function is estimated for the aggregate economy, the manufacturing sector and its subsectors, and the electricity generation subsector. The advantages of this work over previous literature relating to the Australian case are that it uses relatively recent data, focuses on energy-intensive subsectors and estimates the Morishima elasticities of substitution. The empirical evidence shown herein indicates weak-form substitutability between different energy types, and higher possibilities for substitution at lower levels of aggregation, compared with the aggregate economy. For the electricity generation subsector, which is at the centre of the CO2 emissions problem in Australia, significant but weak substitutability exists between coal and gas when the price of coal changes. A higher substitution possibility exists between coal and oil in this subsector. The evidence for the own- and cross-price elasticities, together with the results for fuel efficiencies, indicates that a large increase in relative prices could be justified to further stimulate the market for low-emission technologies.
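For reference, the translog cost function and the Morishima elasticities estimated here are commonly written as below (a standard textbook form; the paper's exact specification, e.g. output and technology terms, may differ):

```latex
% Translog cost function in fuel prices p_i with cost shares S_i
\[
  \ln C \;=\; \alpha_0 + \sum_i \alpha_i \ln p_i
          + \tfrac{1}{2}\sum_i\sum_j \beta_{ij}\,\ln p_i \ln p_j + \dots ,
  \qquad
  S_i \;=\; \alpha_i + \sum_j \beta_{ij}\ln p_j ,
\]
% own- and cross-price elasticities and the Morishima elasticity of substitution
% for a change in p_j
\[
  \varepsilon_{ij} \;=\; \frac{\beta_{ij} + S_i S_j}{S_i}\ (i\neq j),
  \qquad
  \varepsilon_{ii} \;=\; \frac{\beta_{ii} + S_i^2 - S_i}{S_i},
  \qquad
  M_{ij} \;=\; \varepsilon_{ij} - \varepsilon_{jj}.
\]
```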
Abstract:
Introduction Schizophrenia is a severe mental disorder in which multiple psychopathological domains are affected. Several lines of evidence indicate that cognitive impairment is a key component of schizophrenia psychopathology. Although there have been a multitude of cognitive studies in schizophrenia, many of their results conflict. We reasoned that this could be due to individual differences among patients (i.e. variation in the severity of positive vs. negative symptoms), different task designs, and/or the administration of different antipsychotics. Methods We therefore review existing data concentrating on these dimensions, specifically in relation to dopamine function. We focus on the most commonly used cognitive domains: learning, working memory, and attention. Results We found that the type of cognitive domain under investigation, medication state and type, and severity of positive and negative symptoms can explain the conflicting results in the literature. Conclusions This review points to the need for future studies investigating individual differences among schizophrenia patients in order to reveal the exact relationship between cognitive function, clinical features, and antipsychotic treatment.
Abstract:
As accountants, we are all familiar with the SUM function, which calculates the sum of a range of numbers. However, there are instances where we might want to sum the numbers in a given range only when they meet a specified criterion. In such cases the SUMIF function achieves this objective.
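In Excel the conditional sum takes the form =SUMIF(range, criteria, [sum_range]). The same idea, sketched in Python with pandas on a made-up ledger (the column names and figures are purely illustrative):

```python
import pandas as pd

# Illustrative ledger: sum the Amount column only where Category matches a criterion,
# the same idea as Excel's SUMIF(range, criteria, sum_range).
ledger = pd.DataFrame({
    "Category": ["Travel", "Office", "Travel", "Meals"],
    "Amount":   [120.0,    45.5,     310.0,    27.3],
})
travel_total = ledger.loc[ledger["Category"] == "Travel", "Amount"].sum()
print(travel_total)  # 430.0
```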
Abstract:
OBJECTIVE: Lower limb amputation is often associated with a high risk of early post-operative mortality. Mortality rates are also increasingly being put forward as a possible benchmark for surgical performance. The primary aim of this systematic review is to investigate early post-operative mortality following major lower limb amputation in population- or regional-based studies, and the reported factors that might influence these mortality outcomes. METHODS: Embase, PubMed, Cinahl and Psycinfo were searched for publications in any language on 30-day or in-hospital mortality after major lower limb amputation in population- or regional-based studies. PRISMA guidelines were followed. A self-developed checklist was used to assess quality and susceptibility to bias. Summary data were extracted for the percentage of the population who died; pooling of quantitative results was not possible because of methodological differences between studies. RESULTS: Of the 9,082 publications identified, results were included from 21. The percentage of the population undergoing amputation who died within 30 days ranged from 7% to 22%; the in-hospital equivalent was 4–20%. Transfemoral amputation and older age were associated with a higher proportion of early post-operative deaths, compared with transtibial amputation and younger age, respectively. Other patient factors or surgical treatment choices related to increased early post-operative mortality varied between studies. CONCLUSIONS: Early post-operative mortality rates vary from 4% to 22%. There are very limited data on the patient-related factors (age, comorbidities) that influence mortality. Even less is known about factors related to surgical treatment choices, which are limited to amputation level. More information is needed to allow comparison across studies or any benchmarking of acceptable mortality rates. Agreement is needed on the key factors to be reported.
Abstract:
Background Investigating population changes gives insight into the effectiveness of, and need for, prevention and rehabilitation services. Incidence rates of amputation are highly varied, making it difficult to meaningfully compare rates between studies and regions or to compare changes over time. Study Design Historical cohort study of transtibial amputations, knee disarticulations, and transfemoral amputations resulting from vascular disease or infection, with/without diabetes, in 2003-2004, in the three northern provinces of the Netherlands. Objectives To report the incidence of a first transtibial amputation, knee disarticulation, or transfemoral amputation in 2003-2004 and the characteristics of this population, and to compare these outcomes with those of an earlier reported cohort from 1991-1992. Methods Population-based incidence rates were calculated per 100,000 person-years and compared across the two cohorts. Results The incidence of amputation was 8.8 (all age groups) and 23.6 (≥45 years) per 100,000 person-years. This was unchanged from the earlier study of 1991-1992. The relative risk of amputation was 12 times greater for people with diabetes than for people without diabetes. Conclusions Investigation is needed into the reasons for the unchanged incidence with respect to the provision of services from a range of disciplines, including vascular surgery, diabetes care, and multidisciplinary foot clinics. Clinical relevance This study shows an unchanged incidence of amputation over time and a high risk of amputation related to diabetes. Given the increased prevalence of diabetes and population ageing, both of which increase the population at risk of amputation, finding methods for reducing the rate of amputation is important.
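A small worked example in Python of how incidence per 100,000 person-years and the relative risk are formed; the event counts and person-years below are made up, chosen only so that the ratio happens to be 12-fold.

```python
# Worked example with made-up numbers showing how the reported quantities are computed.
def incidence_rate(events, person_years, per=100_000):
    """Incidence per `per` person-years."""
    return events / person_years * per

rate_diabetes    = incidence_rate(events=90,  person_years=100_000)
rate_no_diabetes = incidence_rate(events=120, person_years=1_600_000)
relative_risk = rate_diabetes / rate_no_diabetes  # ratio of the two incidence rates
print(rate_diabetes, rate_no_diabetes, round(relative_risk, 1))  # 90.0 7.5 12.0
```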
Abstract:
Objective To determine mortality rates after a first lower limb amputation and explore the rates for different subpopulations. Methods Retrospective cohort study of all people who underwent a first amputation at or proximal to the transtibial level in an area of 1.7 million people. Analysis used Kaplan-Meier curves and log-rank tests for univariate associations of psychosocial and health variables, and logistic regression for the odds of death at 30 days, 1 year and 5 years. Results 299 people were included. Median time to death was 20.3 months (95% CI: 13.1; 27.5). Thirty-day mortality was 22%; the odds of death were 2.3 times higher in people with a history of cerebrovascular disease (95% CI: 1.2; 4.7, P = 0.016). One-year mortality was 44%; the odds of death were 3.5 times higher for people with renal disease (95% CI: 1.8; 7.0, P < 0.001). Five-year mortality was 77%; the odds of death were 5.4 times higher for people with renal disease (95% CI: 1.8; 16.0, P = 0.003). Variation in mortality rates was most apparent across age groups, with people aged 75-84 years having better short-term outcomes than those younger and older. Conclusions Mortality rates demonstrated the frailty of this population, with almost one quarter of people dying within 30 days and almost half within 1 year. People with cerebrovascular disease had higher odds of death at 30 days, and those with renal disease at 1 and 5 years, respectively.
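A small Python sketch of the analysis pattern described (a Kaplan-Meier estimate plus a log-rank comparison), using the lifelines library with made-up survival times; the groups, durations, and event indicators are illustrative only and do not reproduce the study's data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Made-up survival times (months) and death indicators for two illustrative groups
# (e.g. with/without renal disease).
t_renal, d_renal = rng.exponential(12, 80),  np.ones(80)
t_other, d_other = rng.exponential(24, 200), np.ones(200)

# Kaplan-Meier estimate over the whole illustrative cohort
kmf = KaplanMeierFitter()
kmf.fit(np.concatenate([t_renal, t_other]),
        event_observed=np.concatenate([d_renal, d_other]))
print(kmf.median_survival_time_)   # analogue of the reported median time to death

# Univariate group comparison with a log-rank test, as in the abstract
res = logrank_test(t_renal, t_other,
                   event_observed_A=d_renal, event_observed_B=d_other)
print(res.p_value)
```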
Abstract:
The recent trend towards minimizing the interconnections in large scale integration (LSI) circuits has led to intensive investigation into the development of ternary circuits and the improvement of their design. The ternary multiplexer is a convenient and useful logic module which can be used as a basic building block in the design of a ternary system. This paper discusses a systematic procedure for the simplification and realization of ternary functions using ternary multiplexers as building blocks. Both single-level and multilevel multiplexing techniques are considered. The importance of the design procedure is highlighted by considering two specific applications, namely the development of a ternary adder/subtractor and a TCD-to-ternary converter.
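The building-block idea can be sketched in software: a ternary multiplexer routes one of three ternary data inputs according to a ternary select line, and any single-variable ternary function is realised by wiring its truth-table values to the data inputs. This is an illustrative model only, not the paper's hardware realisation.

```python
def tmux(select, d0, d1, d2):
    """Single-level ternary multiplexer: the ternary select line (0, 1 or 2)
    routes one of the three ternary data inputs to the output."""
    if select not in (0, 1, 2):
        raise ValueError("ternary select must be 0, 1 or 2")
    return (d0, d1, d2)[select]

def ternary_function(x):
    """A one-variable ternary function f(x) realised by feeding its truth-table
    values into the data inputs and x into the select line."""
    truth_table = (2, 0, 1)          # illustrative f: f(0)=2, f(1)=0, f(2)=1
    return tmux(x, *truth_table)

print([ternary_function(x) for x in (0, 1, 2)])  # [2, 0, 1]
```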
Abstract:
A general method for the development of valid lower bound solutions for uniformly loaded, orthotropically reinforced rectangular concrete slabs obeying the normal moment criterion is described. General expressions for the moment field have been obtained for nine cases of slabs covering all combinations of simply supported and clamped edge conditions. The lower bound collapse loads have been compared with the upper bound values obtained by yield line theory. The paper also focuses attention on the need for the development of valid upper bound solutions satisfying kinematical admissibility and the flow rules associated with the normal moment criterion.
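A valid lower bound solution of the kind described must satisfy the slab equilibrium equation everywhere while the normal moment on every section stays within the yield moments; in the usual notation (sign conventions may differ from the paper's):

```latex
% Equilibrium of a slab under a distributed load q:
\[
  \frac{\partial^2 M_x}{\partial x^2}
  + 2\,\frac{\partial^2 M_{xy}}{\partial x\,\partial y}
  + \frac{\partial^2 M_y}{\partial y^2} \;=\; -\,q ,
\]
% and the normal moment (yield) criterion, required for every direction \theta:
\[
  M_n(\theta) \;=\; M_x\cos^2\theta + M_y\sin^2\theta + M_{xy}\sin 2\theta ,
  \qquad
  -\,M'_{un}(\theta) \;\le\; M_n(\theta) \;\le\; M_{un}(\theta),
\]
% where M_{un} and M'_{un} are the positive and negative yield moments provided by
% the orthotropic reinforcement in the direction \theta.
```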
Abstract:
The phosphine distribution in a cylindrical silo containing grain is predicted. A three-dimensional mathematical model, which accounts for multicomponent gas-phase transport and the sorption of phosphine into the grain kernel, is developed. In addition, a simple model is presented to describe the death of insects within the grain as a function of their exposure to phosphine gas. The proposed model is solved using the commercially available computational fluid dynamics (CFD) software FLUENT, together with our own C code that customizes the solver to incorporate the models for sorption and insect extinction. Two types of fumigation delivery are studied, namely fan-forced from the base of the silo and tablet from the top of the silo. An analysis of the predicted phosphine distribution shows that during fan-forced fumigation the position of the leaky area is very important to the development of the gas flow field and the phosphine distribution in the silo. If the leak is in the lower section of the silo, insects near the top of the silo may not be eradicated. However, the position of a leak does not affect the phosphine distribution during tablet fumigation. For such fumigation in a typical silo configuration, phosphine concentrations remain low near the base of the silo. Furthermore, we find that half-life pressure test readings are not an indicator of phosphine distribution during tablet fumigation.
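A toy Python version of an exposure-based insect mortality model of the kind mentioned above, in which the surviving fraction decays with the accumulated concentration-time dose; the functional form and both constants are hypothetical and are not the paper's insect-extinction model.

```python
import numpy as np

def insect_survival(concentration, dt, k=0.0002, n=1.0):
    """Hypothetical exposure model: survival decays with the accumulated
    C**n * t dose of phosphine. k and n are illustrative constants only."""
    dose = np.cumsum(concentration ** n) * dt   # accumulated exposure over time
    return np.exp(-k * dose)                    # surviving fraction at each time step

# Illustrative comparison: a well-fumigated point vs. a point near a leak where
# the concentration stays low (units arbitrary for the sketch).
hours = np.arange(0, 168)                       # one week, hourly steps
c_good = np.full(hours.size, 200.0)             # sustained concentration
c_leak = np.full(hours.size, 20.0)              # leak keeps concentration low
print(insect_survival(c_good, dt=1.0)[-1], insect_survival(c_leak, dt=1.0)[-1])
```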
Abstract:
We explore the semi-classical structure of the Wigner functions Ψ(q, p) representing bound energy eigenstates |ψ⟩ for systems with f degrees of freedom. If the classical motion is integrable, the classical limit of Ψ is a delta function on the f-dimensional torus to which classical trajectories corresponding to |ψ⟩ are confined in the 2f-dimensional phase space. In the semi-classical limit of Ψ (ℏ small but not zero) the delta function softens to a peak of order ℏ^(−2f/3) and the torus develops fringes of a characteristic 'Airy' form. Away from the torus, Ψ can have semi-classical singularities that are not delta functions; these are discussed (in full detail when f = 1) using Thom's theory of catastrophes. Brief consideration is given to problems raised when Ψ is calculated in a representation based on operators derived from angle coordinates and their conjugate momenta. When the classical motion is non-integrable, the phase space is not filled with tori and existing semi-classical methods fail. We conjecture that: (a) for a given value of the non-integrability parameter ε, the system passes through three semi-classical regimes as ℏ diminishes; (b) for states |ψ⟩ associated with regions in phase space filled with irregular trajectories, Ψ will be a random function confined near that region of the 'energy shell' explored by these trajectories (this region has more than f dimensions); (c) for ε ≠ 0, ℏ blurs the infinitely fine classical path structure, in contrast to the integrable case ε = 0, where ℏ imposes oscillatory quantum detail on a smooth classical path structure.
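For reference, the phase-space function discussed above is the Wigner function of the eigenstate, which in one common convention (normalisation and sign conventions vary between authors) reads:

```latex
% Wigner function of a bound eigenstate \psi for f degrees of freedom
\[
  \Psi(\mathbf{q},\mathbf{p})
  \;=\; \frac{1}{(2\pi\hbar)^{f}}
        \int_{\mathbb{R}^{f}}
        \psi^{*}\!\Bigl(\mathbf{q}+\tfrac{\mathbf{x}}{2}\Bigr)\,
        \psi\!\Bigl(\mathbf{q}-\tfrac{\mathbf{x}}{2}\Bigr)\,
        e^{\,i\,\mathbf{p}\cdot\mathbf{x}/\hbar}\,\mathrm{d}^{f}x .
\]
```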