955 results for Mean-value solution
Abstract:
Purpose: To report a very successful outcome obtained with the fitting of a new-generation reverse-geometry hybrid contact lens in a thin cornea with extreme irregularity due to the presence of a central island after unsuccessful myopic excimer laser refractive surgery. Methods: A 32-year-old man presented to our clinic complaining of very poor vision in his right eye after bilateral laser in situ keratomileusis (LASIK) for myopia correction and several subsequent retreatments. After a comprehensive ocular evaluation, contact lens fitting with a reverse-geometry hybrid contact lens (SynergEyes PS, SynergEyes, Carlsbad, CA) was proposed as a solution for this case. Visual, refractive, and ocular aberrometric outcomes with the contact lens were evaluated. Results: Distance visual acuity improved from a prefitting uncorrected value of 20/200 to a postfitting corrected value of 20/16. Prefitting manifest refraction was +6.00 sphere and −3.00 cylinder at 70°, with a corrected distance visual acuity of 20/40. Higher-order root mean square for a 5-mm pupil decreased from a prefitting value of 1.45 µm to 0.34 µm with the contact lens. Contact lens wear was reported as comfortable, and the patient was very satisfied with this solution. Conclusions: The SynergEyes PS contact lens seems to be an excellent option for the visual rehabilitation of corneas with extreme irregularity after myopic excimer laser surgery, minimizing the level of higher-order aberrations and providing an excellent visual outcome.
Abstract:
In this paper the authors construct a theory of how the expansion of higher education could be associated with several factors that indicate a decline in the quality of degrees. They assume that the expansion of tertiary education takes place through three channels, and show how these channels are likely to reduce average study time, lower academic requirements and average wages, and inflate grades. First, universities have an incentive to increase their student body through public and private funding schemes beyond the level at which they can keep their academic requirements high. Second, owing to skill-biased technological change, employers have an incentive to recruit staff with a higher education degree. Third, students have an incentive to acquire a college degree because of employers' preferences for such qualifications, the university application procedures, and the growing social value placed on education. The authors develop a parsimonious dynamic model in which a student, a college and an employer repeatedly make decisions about requirement levels, performance and wage levels. Their model shows that if (i) universities have an incentive to decrease entrance requirements, (ii) employers are more likely to employ staff with a higher education degree, and (iii) all types of students enrol in colleges, the final grade will not necessarily induce weaker students to study more to catch up with more able students. In order to re-establish a quality-guarantee mechanism, entrance requirements should be set at a higher level.
Abstract:
"UILU-ENG 80 1712."
Abstract:
The Boussinesq equation appears as the zeroth-order term in the shallow water flow expansion of the non-linear equation describing the flow of fluid in an unconfined aquifer. One-dimensional models based on the Boussinesq equation have been used to analyse tide-induced water table fluctuations in coastal aquifers. Previous analytical solutions for a sloping beach are based on the perturbation parameter ε_N = αε cot β (in which β is the beach slope, α is the amplitude parameter and ε is the shallow water parameter) and are limited to tan⁻¹(αε) ≪ β ≤ π/2. In this paper, a new higher-order solution to the non-linear boundary value problem is derived. The results demonstrate the significant influence of the higher-order components and the beach slope on the water table fluctuations. The relative difference between the linear solution and the present solution increases as α and ε increase, and reaches 7% of the linear solution. (C) 2003 Elsevier Ltd. All rights reserved.
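The definitions above are enough to evaluate the expansion parameter directly. A minimal sketch (Python; the parameter values and the factor-of-five margin standing in for "much less than" are illustrative assumptions, not from the paper):

```python
import math

def perturbation_parameter(alpha, eps, beta):
    """epsilon_N = alpha * eps * cot(beta), the sloping-beach expansion parameter."""
    return alpha * eps / math.tan(beta)

def slope_in_valid_range(alpha, eps, beta, margin=5.0):
    """Check tan^-1(alpha*eps) << beta <= pi/2, reading 'much less than'
    as a factor-of-`margin` separation."""
    return margin * math.atan(alpha * eps) < beta <= math.pi / 2

# Illustrative values only:
alpha, eps, beta = 0.3, 0.2, math.radians(30)
print(perturbation_parameter(alpha, eps, beta))   # epsilon_N
print(slope_in_valid_range(alpha, eps, beta))     # True for this beta
```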
Abstract:
New high-precision niobium (Nb) and tantalum (Ta) concentration data are presented for early Archaean metabasalts, metabasaltic komatiites and their erosion products (mafic metapelites) from SW Greenland and the Acasta gneiss complex, Canada. Individual datasets consistently show sub-chondritic Nb/Ta ratios averaging 15.1 ± 11.6. This finding is discussed with regard to two competing models for the solution of the Nb-deficit that characterises the accessible Earth. Firstly, we test whether Nb could have been sequestered into the core due to its slightly siderophile (or chalcophile) character under very reducing conditions, as recently proposed from experimental evidence. We demonstrate that troilite inclusions of the Canyon Diablo iron meteorite have Nb and V concentrations in excess of typical chondrites but that the metal phase of the Grant, Toluca and Canyon Diablo iron meteorites does not have significant concentrations of these lithophile elements. We find that if the entire accessible Earth Nb-deficit were explained by Nb in the core, only ca. 17% of the mantle could be depleted and that by 3.7 Ga, continental crust would have already achieved ca. 50% of its present mass. Nb/Ta systematics of late Archaean metabasalts compiled from the literature would further require that by 2.5 Ga, 90% of the present mass of continental crust was already in existence. As an alternative to this explanation, we propose that the average Nb/Ta ratio (15.1 ± 11.6) of Earth's oldest mafic rocks is a valid approximation for bulk silicate Earth. This would require that ca. 13% of the terrestrial Nb resided in the Ta-free core. Since the partitioning of Nb between silicate and metal melts depends largely on oxygen fugacity and pressure, this finding could mean that metal/silicate segregation did not occur at the base of a deep magma ocean or that the early mantle was slightly less reducing than generally assumed. A bulk silicate Earth Nb/Ta ratio of 15.1 allows for depletion of up to 40% of the total mantle. This could indicate that, in addition to the upper mantle, a portion of the lower mantle is also depleted; alternatively, if only the upper mantle were depleted, an additional hidden high Nb/Ta reservoir must exist. Comparison of Nb/Ta systematics between early and late Archaean metabasalts supports the latter idea and indicates that deeply subducted high Nb/Ta eclogite slabs could reside in the mantle transition zone or the lower mantle. Accumulation of such slabs appears to have commenced between 2.5 and 2.0 Ga. Regardless of these complexities of terrestrial Nb/Ta systematics, it is shown that the depleted mantle Nb/Th ratio is a very robust proxy for the amount of extracted continental crust, because the temporal evolution of this ratio is dominated by Th-loss to the continents and not Nb-retention in the mantle. We present a new parameterisation of the continental crust volume versus age curve that specifically explores the possibility of lithophile element loss to the core and storage of eclogite slabs in the transition zone. (C) 2003 Elsevier Science B.V. All rights reserved.
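The core mass balance quoted above follows from the assumption that Ta is fully lithophile, so any Nb/Ta deficit in the silicate Earth measures Nb hidden in the core. A minimal sketch (the chondritic Nb/Ta of 17.4 is a placeholder chosen only to be consistent with the ca. 13% figure quoted above, not a value from the paper):

```python
def core_nb_fraction(nb_ta_bse, nb_ta_chondrite):
    """Fraction of terrestrial Nb residing in a Ta-free core.

    With Ta fully lithophile, the silicate Earth retains all Ta but only
    nb_ta_bse / nb_ta_chondrite of the Nb; the remainder is in the core."""
    return 1.0 - nb_ta_bse / nb_ta_chondrite

# BSE Nb/Ta = 15.1 (this study); chondritic Nb/Ta ~ 17.4 (placeholder).
print(f"{core_nb_fraction(15.1, 17.4):.0%}")  # -> 13%
```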
Abstract:
We study the exact solution for a two-mode model describing coherent coupling between atomic and molecular Bose-Einstein condensates (BEC), in the context of the Bethe ansatz. By combining an asymptotic and numerical analysis, we identify the scaling behaviour of the model and determine the zero-temperature expectation value for the coherence and the average atomic occupation. The threshold coupling for production of the molecular BEC is identified as the point at which the energy gap is minimum. Our numerical results indicate a parity effect for the energy gap between the ground and first excited state, depending on whether the total atomic number is odd or even. The numerical calculations for the quantum dynamics reveal a smooth transition from the atomic to the molecular BEC.
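The gap and its odd/even dependence can be probed by direct diagonalisation in the number basis. A sketch under the standard two-mode form H = δ b†b + Ω(a†a†b + b†aa) with conserved N = Na + 2Nb; this parameterisation is an assumption, and the paper's exact Hamiltonian may differ:

```python
import numpy as np

def hamiltonian(N, delta, omega):
    """Two-mode atom-molecule Hamiltonian on the basis |m> with m molecules
    and n = N - 2m atoms (assumed standard form, see lead-in)."""
    M = N // 2 + 1
    H = np.zeros((M, M))
    for m in range(M):
        H[m, m] = delta * m                     # detuning energy of m molecules
    for m in range(1, M):
        n = N - 2 * m                           # atoms left once m molecules exist
        # <m-1| a^dag a^dag b |m> = sqrt(m * (n+1) * (n+2))
        H[m - 1, m] = H[m, m - 1] = omega * np.sqrt(m * (n + 1) * (n + 2))
    return H

def gap(N, delta, omega):
    E = np.linalg.eigvalsh(hamiltonian(N, delta, omega))
    return E[1] - E[0]

# Compare the gap for even and odd total atom number (illustrative values):
for N in (20, 21):
    print(N, gap(N, delta=0.0, omega=1.0))
```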
Abstract:
BACKGROUND: Recent studies have demonstrated that exercise capacity is an independent predictor of mortality in women. Normative values of exercise capacity for age in women have not been well established. Our objectives were to construct a nomogram to permit determination of predicted exercise capacity for age in women and to assess the predictive value of the nomogram with respect to survival. METHODS: A total of 5721 asymptomatic women underwent a symptom-limited, maximal stress test. Exercise capacity was measured in metabolic equivalents (MET). Linear regression was used to estimate the mean MET achieved for age. A nomogram was established to allow the percentage of predicted exercise capacity to be estimated on the basis of age and the exercise capacity achieved. The nomogram was then used to determine the percentage of predicted exercise capacity for both the original cohort and a referral population of 4471 women with cardiovascular symptoms who underwent a symptom-limited stress test. Survival data were obtained for both cohorts, and Cox survival analysis was used to estimate the rates of death from any cause and from cardiac causes in each group. RESULTS: The linear regression equation for predicted exercise capacity (in MET) on the basis of age in the cohort of asymptomatic women was as follows: predicted MET = 14.7 - (0.13 x age). The risk of death among asymptomatic women whose exercise capacity was less than 85 percent of the predicted value for age was twice that among women whose exercise capacity was at least 85 percent of the age-predicted value (P<0.001). Results were similar in the cohort of symptomatic women. CONCLUSIONS: We have established a nomogram for predicted exercise capacity on the basis of age that is predictive of survival among both asymptomatic and symptomatic women. These findings could be incorporated into the interpretation of exercise stress tests, providing additional prognostic information for risk stratification.
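The regression equation and the 85% threshold translate directly into the nomogram calculation. A short sketch (the patient values are hypothetical):

```python
def predicted_met(age):
    """Age-predicted exercise capacity from the cohort regression:
    predicted MET = 14.7 - 0.13 * age."""
    return 14.7 - 0.13 * age

def percent_of_predicted(achieved_met, age):
    """The nomogram output: achieved capacity as a % of age-predicted."""
    return 100.0 * achieved_met / predicted_met(age)

# Hypothetical 55-year-old woman achieving 6 MET:
pct = percent_of_predicted(6.0, 55)
print(f"{pct:.0f}% of predicted")  # ~79%
print("higher-risk group" if pct < 85 else "at or above 85% of predicted")
```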
Abstract:
Based on a newly established sequencing strategy distinguished by its efficiency, simplicity, and ease of manipulation, the sequences of four novel cyclotides (macrocyclic knotted proteins) isolated from the Australian plant Viola hederaceae were determined. The three-dimensional solution structure of V. hederaceae leaf cyclotide-1 (vhl-1), a leaf-specifically expressed 31-residue cyclotide, has been determined using two-dimensional ¹H NMR spectroscopy. vhl-1 adopts a compact and well-defined structure including a distorted triple-stranded β-sheet, a short 3₁₀-helical segment and several turns. It is stabilized by three disulfide bonds which, together with backbone segments, form a cyclic cystine knot motif. The three disulfide bonds are almost completely buried in the protein core, and the six cysteines contribute only 3.8% to the molecular surface. A pH titration experiment revealed that the folding of vhl-1 shows little pH dependence and allowed pKa values of 3.0 for Glu3 and ~5.0 for Glu14 to be determined. Met7 was found to be oxidized in the native form, consistent with the fact that its side chain protrudes into the solvent, occupying 7.5% of the molecular surface. vhl-1 shows anti-HIV activity with an EC50 value of 0.87 μM.
Abstract:
Background Regression to the mean (RTM) is a statistical phenomenon that can make natural variation in repeated data look like real change. It happens when unusually large or small measurements tend to be followed by measurements that are closer to the mean. Methods We give some examples of the phenomenon, and discuss methods to overcome it at the design and analysis stages of a study. Results The effect of RTM in a sample becomes more noticeable with increasing measurement error and when follow-up measurements are only examined on a sub-sample selected using a baseline value. Conclusions RTM is a ubiquitous phenomenon in repeated data and should always be considered as a possible cause of an observed change. Its effect can be alleviated through better study design and use of suitable statistical methods.
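The phenomenon is easy to reproduce by simulation: select subjects on an unusually high baseline, and their follow-up mean drifts back towards the population mean even though nothing has truly changed. A sketch (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_value = rng.normal(100, 10, n)             # stable underlying trait
baseline   = true_value + rng.normal(0, 10, n)  # measurement error at baseline
follow_up  = true_value + rng.normal(0, 10, n)  # independent error at follow-up

high = baseline > 120                 # sub-sample selected on a baseline value
print(baseline[high].mean())          # ~126: inflated by selection on error
print(follow_up[high].mean())         # ~113: closer to the mean, no real change
```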
Abstract:
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to, and compared with, previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
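The core idea admits a compact one-dimensional illustration: sample from the model, then descend the score-function estimate of the KL gradient. This sketch uses a fixed-variance Gaussian model and a toy objective; it illustrates the general technique, not the paper's exact update rule:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_f(x):
    """Log of the objective, treated (up to normalisation) as a target density."""
    return -(x - 3.0) ** 2

# Gaussian model p(x; mu, sigma) with fixed sigma.  For KL(p || f/Z),
# grad_mu KL = E_p[(log p - log f) * (x - mu) / sigma^2]; Z drops out.
mu, sigma, lr, n = -2.0, 1.0, 0.05, 200
for step in range(300):
    x = rng.normal(mu, sigma, n)
    log_p = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    grad = np.mean((log_p - log_f(x)) * (x - mu) / sigma**2)
    mu -= lr * grad

print(mu)  # drifts toward the mode of f at x = 3
```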
Abstract:
We investigate the structure of the positive solution set for nonlinear three-point boundary value problems of the form u'' + h(t)f(u) = 0, u(0) = 0, u(1) = λu(η), where η ∈ (0, 1) is given, λ ∈ (0, 1/η) is a parameter, f ∈ C([0, ∞), [0, ∞)) satisfies f(s) > 0 for s > 0, and h ∈ C([0, 1], [0, ∞)) is not identically zero on any subinterval of [0, 1]. Our main results demonstrate the existence of continua of positive solutions of the above problem. (C) 2004 Elsevier Ltd. All rights reserved.
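Although the continua in the paper are established analytically, individual positive solutions of such problems can be located numerically by shooting on the initial slope. A sketch with hypothetical choices h(t) = 1, f(u) = 1 + u, η = 0.5 and λ = 0.5 (inside (0, 1/η)):

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

h = lambda t: 1.0       # hypothetical weight function
f = lambda u: 1.0 + u   # hypothetical nonlinearity with f(s) > 0 for s > 0
eta, lam = 0.5, 0.5     # lam lies in (0, 1/eta) = (0, 2)

def residual(slope):
    """Integrate u'' + h(t) f(u) = 0, u(0) = 0, u'(0) = slope, and return
    the defect in the three-point condition u(1) = lam * u(eta)."""
    sol = solve_ivp(lambda t, y: [y[1], -h(t) * f(y[0])],
                    (0.0, 1.0), [0.0, slope], dense_output=True, rtol=1e-9)
    return sol.sol(1.0)[0] - lam * sol.sol(eta)[0]

slope = brentq(residual, 0.0, 2.0)  # bracket found by inspection
print(slope)                        # initial slope of one positive solution
```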
Abstract:
The recent deregulation in electricity markets worldwide has heightened the importance of risk management in energy markets. Assessing Value-at-Risk (VaR) in electricity markets is arguably more difficult than in traditional financial markets because the distinctive features of the former result in a highly unusual distribution of returns: electricity returns are highly volatile, display seasonalities in both their mean and volatility, exhibit leverage effects and clustering in volatility, and feature extreme levels of skewness and kurtosis. With electricity applications in mind, this paper proposes a model that accommodates autoregression and weekly seasonals in both the conditional mean and conditional volatility of returns, as well as leverage effects via an EGARCH specification. In addition, extreme value theory (EVT) is adopted to explicitly model the tails of the return distribution. Compared to a number of other parametric models and simple historical simulation-based approaches, the proposed EVT-based model performs well in forecasting out-of-sample VaR. In addition, statistical tests show that the proposed model provides appropriate interval coverage in both unconditional and, more importantly, conditional contexts. Overall, the results are encouraging in suggesting that the proposed EVT-based model is a useful technique in forecasting VaR in electricity markets. (c) 2005 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.
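The EVT step can be sketched in isolation: fit a generalized Pareto distribution to losses above a high threshold and read the VaR off the peaks-over-threshold quantile formula. Here heavy-tailed synthetic data stand in for the standardised residuals of the AR-EGARCH model described above (everything below is illustrative):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
losses = rng.standard_t(df=4, size=5000)   # stand-in for model residuals

# Peaks over threshold: fit a GPD to exceedances over the 90th percentile.
u = np.quantile(losses, 0.90)
exceed = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceed, floc=0.0)   # shape xi, scale beta

# POT quantile: VaR_p = u + (beta/xi) * (((n/n_u) * (1-p))**(-xi) - 1)
p, n, n_u = 0.99, len(losses), len(exceed)
var_p = u + beta / xi * ((n / n_u * (1 - p)) ** (-xi) - 1)
print(var_p)
```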
Abstract:
Background: Oral itraconazole (ITRA) is used for the treatment of allergic bronchopulmonary aspergillosis in patients with cystic fibrosis (CF) because of its antifungal activity against Aspergillus species. ITRA has an active hydroxy-metabolite (OH-ITRA) which has similar antifungal activity. ITRA is a highly lipophilic drug which is available in two different oral formulations, a capsule and an oral solution. The oral solution is reported to have a 60% higher relative bioavailability. The influence of the altered gastric physiology associated with CF on the pharmacokinetics (PK) of ITRA and its metabolite has not been previously evaluated. Objectives: 1) To estimate the population (pop) PK parameters for ITRA and its active metabolite OH-ITRA, including the relative bioavailability of the parent, after administration of the parent by both capsule and solution, and 2) to assess the performance of the optimal design. Methods: The study was a cross-over design in which 30 patients received the capsule on the first occasion and, 3 days later, the solution formulation. The design was constrained to a maximum of 4 blood samples per occasion for estimation of the popPK of both ITRA and OH-ITRA. The sampling times for the population model were optimized previously using POPT v.2.0.[1] POPT is a series of applications that run under MATLAB and provide an evaluation of the information matrix for a nonlinear mixed effects model given a particular design. In addition, it can be used to optimize the design based on evaluation of the determinant of the information matrix. The model details for the design were based on prior information obtained from the literature, which suggested that ITRA may have either linear or non-linear elimination. The optimal sampling times were evaluated to provide information for both competing models for the parent and metabolite, and for both capsule and solution simultaneously. Blood samples were assayed by validated HPLC.[2] PopPK modelling was performed using FOCE with interaction under NONMEM, version 5 (level 1.1; GloboMax LLC, Hanover, MD, USA). The PK of ITRA and OH-ITRA was modelled simultaneously using ADVAN 5. Subsequently, three methods were assessed for modelling concentrations below the limit of detection (LOD). These methods (corresponding to methods 5, 6 and 4 from Beal[3], respectively) were (a) assigning all values below LOD to half the LOD, (b) assigning the closest missing value below LOD to half the LOD and deleting all previous (if during absorption) or subsequent (if during elimination) missing samples, and (c) estimating the contribution of the expectation of each missing concentration to the likelihood. The LOD was 0.04 mg/L. The final model evaluation was performed via bootstrap with re-sampling and a visual predictive check. The optimal design and the sampling windows of the study were evaluated for execution errors and for agreement between the observed and predicted standard errors. Dosing regimens were simulated for the capsules and the oral solution to assess their ability to achieve the ITRA target trough concentration (Cmin,ss of 0.5-2 mg/L) or a combined Cmin,ss for ITRA and OH-ITRA above 1.5 mg/L. Results and Discussion: A total of 241 blood samples were collected and analysed; 94% of them were taken within the defined optimal sampling windows, of which 31% were taken within 5 min of the exact optimal times. Forty-six per cent of the ITRA values and 28% of the OH-ITRA values were below LOD.
The entire profile after administration of the capsule was below LOD for five patients, and the data from this occasion were therefore omitted from estimation. A 2-compartment model with first-order absorption and elimination best described ITRA PK, with first-order metabolism of the parent to OH-ITRA. For ITRA, the apparent clearance (CL_ITRA/F) was 31.5 L/h; apparent volumes of the central and peripheral compartments were 56.7 L and 2090 L, respectively. Absorption rate constants for the capsule (ka,cap) and solution (ka,sol) were 0.0315 h⁻¹ and 0.125 h⁻¹, respectively. Relative bioavailability of the capsule was 0.82. There was no evidence of nonlinearity in the popPK of ITRA. No screened covariate significantly improved the fit to the data. The parameter estimates from the final model were comparable between the different methods for accounting for missing data (M4, 5 and 6)[3]. The prospective application of an optimal design was found to be successful. Owing to the sampling windows, most of the samples could be collected within the daily hospital routine, yet still at times that were near-optimal for estimating the popPK parameters. The final model was one of the potential competing models considered in the original design. The asymptotic standard errors provided by NONMEM for the final model and the empirical values from the bootstrap were similar in magnitude to those predicted from the Fisher information matrix associated with the D-optimal design. Simulations from the final model showed that the current dosing regimen of 200 mg twice daily (bd) would achieve the target Cmin,ss (0.5-2 mg/L) for only 35% of patients when administered as the solution and 31% when administered as capsules. The optimal dosing schedule was 500 mg bd for both formulations. The target success for this dosing regimen was 87% for the solution, with an NNT of 4 compared to capsules. This means that for every 4 patients treated with the solution, one additional patient will achieve target success compared to the capsule, but at an additional cost of AUD $220 per day. The therapeutic target, however, is still doubtful, and the potential risks of these dosing schedules need to be assessed on an individual basis. Conclusion: A model was developed which described the popPK of ITRA and its main active metabolite OH-ITRA in adults with CF after administration of both capsule and solution. The relative bioavailability of ITRA from the capsule was 82% that of the solution, but considerably more variable. For incorporating missing data, the simple Beal method 5 (half LOD for all samples below LOD) provided results comparable to the more complex but theoretically better Beal method 4 (integration method). The optimal sparse design performed well for estimation of the model parameters and provided a good fit to the data.
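Of the three below-LOD strategies compared, Beal method 5 is the simplest to express in data-handling code: every observation below the limit of detection is replaced by LOD/2. A sketch (the concentration records are hypothetical):

```python
import numpy as np
import pandas as pd

LOD = 0.04  # mg/L, assay limit of detection

# Hypothetical concentration-time records; NaN marks samples below LOD.
df = pd.DataFrame({
    "time_h": [0.5, 1, 2, 4, 8, 24],
    "conc":   [np.nan, 0.06, 0.11, 0.09, 0.05, np.nan],
})

# Beal method 5: assign half the LOD to every below-LOD observation.
df["conc_m5"] = df["conc"].fillna(LOD / 2)
print(df)
```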
Abstract:
Purpose - Managers at the company studied attempted to implement a knowledge management information system in an effort to avoid loss of expertise while improving control and efficiency. The paper seeks to explore the implications of this technological solution for employees within the company. Design/methodology/approach - The paper reports qualitative research conducted in a single organization in a knowledge-intensive primary industry. Research was conducted through observation and interviews, and evidence is presented in the form of interview extracts. Findings - The case section of the paper presents the accounts of organizational participants. These accounts reveal the workers' reactions to the technology-based system and something of their strategies of resistance to it; they also provide glimpses of the identity construction engaged in by these knowledge workers. Research limitations/implications - The issues identified are explored in a single case-study setting. Future research could look at the relevance of the findings to other settings. Practical implications - The case evidence presented indicates some of the complexity of implementing information systems in organizations. This can be seen as further evidence of the uncertainty associated with organizational change and of the need for managers not to expect an easy adoption of intrusive IT solutions. Originality/value - The paper adds empirical insight to a largely conceptual literature. © Emerald Group Publishing Limited.
Abstract:
Purpose – The purpose of this paper is to demonstrate the need for an improved understanding of consumer value in online grocery purchases, and to propose the notion of "integrated service solution" packages as a strategy for growing and successfully sustaining the channel, in order to guide both marketing strategy and policy. Design/methodology/approach – This paper integrates and synthesises research from the retailing, consumer behaviour and service quality literatures to develop a conceptual framework for understanding the value of e-grocery shopping, helping practitioners to address the critical needs, expectations and concerns of consumers in the development of grocery shopping within the online environment. Findings – This paper offers an alternative approach that would allow e-grocery to become a mainstream retail channel in its own right rather than competing with in-store offerings. The research demonstrates the need for a progressive approach that follows contemporary consumer needs and habits at the household level. The conjecture is that shopping for fast-moving consumer goods follows a learning path that needs to be replicated in the online context. Moreover, it is suggested that consumer resistance to the adoption of the new channel should be addressed not only from a technological perspective but also from the social aspects of online shopping. Originality/value – The research provides a practical framework for both retailers and policy makers on how the "next generation" of online services can be developed from a "bottom up" consumer perspective. The paper also argues against a technological bias in e-grocery retailing strategy.